New England Computer Vision Workshop

MIT, Cambridge, MA

Friday 9th December 2022



The New England Computer Vision Workshop (NECV) brings together researchers in computer vision and related areas for an informal exchange of ideas through a full day of presentations and posters. NECV typically attracts around 100 people from universities and industry research labs in New England. As in previous years, the workshop will focus on graduate student presentations.

Welcome!

- Phillip Isola and Pulkit Agrawal


Registration

Participation is free for all researchers at academic institutions. Academic researchers should register here.

For our industry friends, a limited number of registrations are available for a fee. Please contact Samson Timoner - samson@ai.mit.edu for details.


Submission

Please submit a one-page PDF abstract using the CVPR 2023 rebuttal template by email to necv2022mit@gmail.com. Abstracts are due by 11:59pm on Mon Nov 21st, 2022. Oral decisions will be released by Nov 28th.

You may present work that has already been published, or work that is in progress. All submissions will be granted a poster presentation, and selected submissions from each institution will be granted 12-minute oral presentations. Post-docs and faculty may submit for poster presentations, but oral presentations are reserved for graduate students.

There will be no publications resulting from the workshop, so presentations will not be considered "prior peer-reviewed work" according to any definition we are aware of. Thus, work presented at NECV can be subsequently submitted to other venues without citation.

The workshop is after the CVPR supplemental deadline, so come and show off your new work in a friendly environment.


Logistics

Tentative Schedule

9:30-10:00 Coffee, snacks, poster setup
10:00-10:15 Welcome
10:15-11:30 Oral presentations 1
  1. Semantic Attention Flow Fields for Dynamic Scene Decomposition
    Yiqing Liang, Eliot Laidlaw, Alexander Meyerowitz, Srinath Sridhar, James Tompkin (Brown)
  2. Discretization Invariant Learning on Neural Fields
    Clinton Wang, Polina Golland (MIT)
  3. Image as Set of Points
    Xu Ma, Yuqian Zhou, Huan Wang, Can Qin, Bin Sun, Chang Liu, Yun Fu (Northeastern)
  4. Designing Perceptual Puzzles by Differentiating Probabilistic Programs
    Kartik Chandra, Tzu-Mao Li, Josh Tenenbaum, Jonathan Ragan-Kelley (MIT)
  5. StegaPos: Preventing Unwanted Crops and Replacements with Imperceptible Positional Embeddings
    Gokhan Egri, Todd Zickler (Harvard)
11:30-11:45 Sponsor talks
11:45-13:00 Lunch (on your own)
13:00-14:30 Poster presentations
  • Adaptive Trajectory Prediction via Transferable GNN
    Yi Xu, Lichen Wang, Yizhou Wang, Yun Fu (Northeastern)
  • Medical Image Representation Learning via Mutual Information Maximization
    Sidong Zhang, Madalina Fiterau (UMass Amherst)
  • Rethinking 3DMM-Conditioned Face Synthesis
    Yiwen Huang, Zhiqiu Yu, Xinjie Yi, James Tompkin (Brown)
  • GPU-HC: GPU-Based Homotopy Continuation Solver for Minimal Problems in Computer Vision
    Chiang-Heng Chien, Hongyi Fan, Ahmad Abdelfattah, Elias Tsigaridas, Stanimire Tomov, Benjamin Kimia (Brown)
  • Learning Object-Centric Dynamic Modes from Video and Emerging Properties
    Armand Comas, Christian Fernandez, Sandesh Ghimire, Haolin Li, Mario Sznaier, Octavia Camps (Northeastern)
  • Comparing Correspondences: Video Prediction with Correspondence-wise Losses
    Daniel Geng, Max Hamilton, Andrew Owens (UMass Amherst)
  • EVAL: Explainable Video Anomaly Localization
    Ashish Singh, Michael J. Jones, Erik Learned-Miller (UMass Amherst)
  • Multiview Curve Correspondence for Curve Grouping and Reconstruction
    Yilin Zheng, Chiang-Heng Chien, Benjamin Kimia (Brown)
  • Exploring visual prompts for adapting large-scale models
    Hyojin Bahng, Ali Jahanian, Swami Sankaranarayanan, Phillip Isola (MIT)
  • On the capability of humans and reinforcement learning agents to generalize across noisy worlds
    Serena Bono, Spandan Madan, Ishaan Grover, Hanspeter Pfister, Gabriel Kreiman (Harvard)
  • Is that Pruning Experiment Really Fair? – On the Role of Trainability in Network Pruning
    Huan Wang, Can Qin, Yue Bai, and Yun Fu (Northeastern)
  • Mitral Regurgitation Detection Using Cardiac Imaging Data
    Ke Xiao, James Priest, Erik Learned-Miller, Madalina Fiterau (UMass Amherst)
  • Using spatio-temporal information in weather radar data to detect and track communal bird roosts
    Gustavo Perez, Wenlong Zhao, Zezhou Cheng, Maria Carolina T. D. Belotti, Yuting Deng, Victoria F. Simons, Elske Tielens, Jeffrey F. Kelly, Kyle G. Horton, Subhransu Maji, Daniel Sheldon (UMass Amherst)
  • ConDor: Self-Supervised Canonicalization of 3D Pose for Partial Shapes
    Rahul Sajnani, Adrien Poulenard, Jivitesh Jain, Radhika Dua, Leonidas J. Guibas, Srinath Sridhar (Brown)
  • Towards High-Quality and Efficient Video Super-Resolution via Spatial-Temporal Data Overfitting
    Jie Ji, Gen Li, Xiaolong Ma (Northeastern)
  • Joint rotational invariance and adversarial training of a dual-stream Transformer yields state of the art Brain-Score for Area V4
    William Berrios, Arturo Deza (MIT)
  • Divide and Compose with Score Based Generative Models
    Sandesh Ghimire, Armand Comas, Davin Hill, Aria Masoomi, Octavia Camps*, Jennifer Dy* (Northeastern)
  • UniverSeg: Universal Medical Image Segmentation
    Victor Ion Butoi, Jose J. Ortiz, Tianyu Ma, John Guttag, Mert R. Sabuncu, Adrian V. Dalca (MIT)
  • Image to Sphere: Learning Equivariant Features for Efficient Pose Prediction
    David Klee, Ondrej Biza, Robert Platt, and Robin Walters (Northeastern)
  • Accidental Turntables: Learning 3D Pose by Watching Objects Turn
    Zezhou Cheng, Matheus Gadelha, Subhransu Maji (UMass Amherst)
  • Using 3D Models in Virtual Reality to Address Small Data Challenges in Human/Animal Pose Estimation
    Max Leblang, Le Jiang, Xiaofei Huang, Sarah Ostadabbas (Northeastern)
  • PlanarRecon: Real-time 3D Plane Detection and Reconstruction from Posed Monocular Videos
    Yiming Xie, Matheus Gadelha, Fengting Yang, Xiaowei Zhou, Huaizu Jiang (Northeastern)
  • Persistent Nature: A Generative Model of Unbounded 3D Worlds
    Lucy Chai, Richard Tucker, Zhengqi Li, Phillip Isola, Noah Snavely (MIT)
  • ToRF++: 3D reconstruction and novel view synthesis for fast motion using Time-of-Flight cameras
    Mikhail Okunev, Benjamin Attal, Marc Mapeke, Christian Richardt, Matthew O’Toole, James Tompkin (Brown)
  • Skeleton-based 3D shape generation and editing
    Dmitrii Petrov, Vikas Thamizharasan, Matheus Gadelha, Vova Kim, Siddhartha Chaudhuri, Evangelos Kalogerakis (UMass Amherst)
  • Cross-view Action Recognition via Contrastive View-invariant Representations
    Yuexi Zhang, Dan Luo, Balaji Sundareshan, Octavia Camps, Mario Sznaier (Northeastern)
  • PARTICLE: Part Discovery and Contrastive Learning for Fine-grained Recognition
    Oindrila Saha, Subhransu Maji (UMass Amherst)
  • Generalized Relative Neighborhood Graph (GRNG) for Similarity Search
    Cole Foster, Berk Sevilmis, Benjamin Kimia (Brown)
  • Spatio-Visual Fusion-Based Person Re-Identification for Overhead Fisheye Images
    Mertcan Cokbas, Prakash Ishwar, Janusz Konrad (BU)
  • Natural Language Descriptions of Deep Visual Features
    Evan Hernandez, Sarah Schwettmann, David Bau, Teona Bagashvili, Antonio Torralba, Jacob Andreas (MIT)
  • Unsupervised feature correlation network for localizing breast cancer using prior mammograms
    Jun Bai, Annie Jin, Madison Adams, Shanglin Zhou, Caiwen Ding, Clifford Yang, and Sheida Nabavi (UConn)
  • Leveraging Temporal Context in Low Representational Power Regimes
    Camilo Fosco, Souyoung Jin, Emilie Josephs, Aude Oliva (MIT)
  • Q: How to Specialize Large Vision-Language Models to Data-Scarce VQA Tasks? A: Self-Train on Unlabeled Images!
    Zaid Khan, Vijay Kumar BG, Samuel Schulter, Xiang Yu, Yun Fu, Manmohan Chandraker (Northeastern)
  • Parameter-Efficient Masking Networks
    Yue Bai, Huan Wang, Xu Ma, Yitian Zhang, Zhiqiang Tao, Yun Fu (Northeastern)
  • Analysis of Saliency Frameworks on Fine Grained Image Classification
    Rangel Daroya, Aaron Sun, and Subhransu Maji (UMass Amherst)
  • Look More but Care Less in Video Recognition
    Yitian Zhang, Yue Bai, Huan Wang, Yi Xu, Yun Fu (Northeastern)
14:30-15:45 Oral presentations 2
  1. ShapeCrafter: A Recursive Text-Conditioned 3D Shape Generation Model
    Rao Fu, Xiao Zhan, Yiwen Chen, Daniel Ritchie, Srinath Sridhar (Brown)
  2. Exploring Consistency in Cross-Domain Transformer for Domain Adaptive Semantic Segmentation
    Kaihong Wang, Donghyun Kim, Rogerio Feris, Kate Saenko, Margrit Betke (BU)
  3. Learning Regular Rearrangements of Objects in Rooms
    Qiuhong Anna Wei, Sijie Ding, Jeong Joon Park, Rahul Sajnani, Adrien Poulenard, Srinath Sridhar, Leonidas Guibas (Brown)
  4. Analysis of Explainability Frameworks on Fine Grained Image Classification
    Rangel Daroya, Aaron Sun, Subhransu Maji (UMass Amherst)
  5. Momentum is All You Need for Adaptive Optimization
    Yizhou Wang, Yue Kang, Can Qin, Huan Wang, Yi Xu, Yulun Zhang, Yun Fu (Northeastern)
15:45-16:00 Coffee
16:00-17:15 Oral presentations 3
  1. Robust Frame-to-Frame Camera Rotation Estimation in Crowded Scenes
    Fabien Delattre, David Dirnfeld, Phat Nguyen, Stephen Scarano, Pedro Miraldo, Michael J. Jones, Erik Learned-Miller (UMass Amherst)
  2. Mod-Squad: Designing Mixture of Experts As Modular Multi-Task Learners
    Zitian Chen, Yikang Shen, Mingyu Ding, Zhenfang Chen, Hengshuang Zhao, Erik Learned-Miller, Chuang Gan (UMass Amherst)
  3. Compound Tokens: Channel Fusion for Vision-Language Representation Learning
    Maxwell Aladago, AJ Piergiovanni (Dartmouth)
  4. Diagnosing Error in Human-object Interaction Detectors
    Fangrui Zhu, Weidi Xie, Yiming Xie, Huaizu Jiang (Northeastern)
  5. Towards using Adversarially Robust Features as alternative features for rendering of Full-Field Foveated Metamers
    Raphaela Kang, Arturo Deza (MIT)

Getting here

The workshop will be held at the MIT-IBM Watson AI Lab (314 Main Street, Cambridge, MA, 02142).

You will need a photo ID (such as a driver's license) to get into the building.

The building is directly above the Kendall/MIT T stop. There are also several (expensive) parking garages nearby; one of the closest is at 33 Amherst St, Cambridge, MA 02142, which costs $45 per day. Street parking is hard to find in the Kendall area.


Sponsorship

We are grateful to the MIT-IBM Watson AI Lab for providing the venue and logistical support. We thank Boston Dynamics, Shipin.ai, and Google for providing funding.






Acknowledgements

Thank you to Samson Timoner and Luke Inglis for helping us arrange NECV 2022. Thank you also to the steering committee: James Tompkin, Benjamin Kimia, Todd Zickler, Yun Raymond Fu, Octavia Camps, Kate Saenko, Erik Learned-Miller, and Subhransu Maji.

Past Years