SAMIBC2026 Presentation Announcement: This SAM International Business Conference presentation explores feature hierarchy dynamics in pre-trained CNNs and offers data-driven insights for improving transfer learning efficiency and adaptability.

Deep convolutional neural networks have become foundational to modern computer vision, powering applications from autonomous systems to large-scale image retrieval. Yet while performance benchmarks continue to improve, important questions remain about how learned feature representations evolve during transfer learning and how those internal dynamics influence adaptability, efficiency, and deployment. This research presentation offers a systematic investigation into the behavior of pre-trained CNNs under varying training conditions.

Presented by Mayank Jha of Amazon Robotics, this virtual session within the Information Systems and Operations Management track examines how feature hierarchies change across layers during pre-training and fine-tuning. Through controlled experiments on classification and object detection tasks, the research analyzes how varying the duration of large-scale pre-training influences downstream performance. Findings show that extended pre-training consistently improves transferability without inducing overfitting, suggesting that networks progressively learn more generic and reusable representations over time.
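As a rough illustration of this kind of duration sweep, the sketch below uses PyTorch and torchvision to score how transferable each pre-training checkpoint is on a target task. The checkpoint names, epoch counts, and linear-probe protocol here are hypothetical assumptions for illustration, not the study's actual experimental setup:

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Hypothetical checkpoints saved after different pre-training durations;
# file names and epoch counts are illustrative, not from the study.
CHECKPOINTS = {10: "pretrain_10ep.pth", 50: "pretrain_50ep.pth", 100: "pretrain_100ep.pth"}

def probe_transferability(ckpt_path, train_loader, val_loader, num_classes, device="cpu"):
    """Freeze the pre-trained backbone, fit a fresh linear head on the
    target task, and return validation accuracy as a transfer score."""
    model = models.resnet50()
    model.load_state_dict(torch.load(ckpt_path, map_location=device))
    for p in model.parameters():
        p.requires_grad = False                               # freeze backbone
    model.fc = nn.Linear(model.fc.in_features, num_classes)   # trainable head
    model.to(device)
    opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    model.train()
    for x, y in train_loader:                # one pass is enough for a probe
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(x.to(device)), y.to(device))
        loss.backward()
        opt.step()
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in val_loader:
            correct += (model(x.to(device)).argmax(1) == y.to(device)).sum().item()
            total += y.numel()
    return correct / total

# for epochs, path in CHECKPOINTS.items():    # train_dl / val_dl are hypothetical loaders
#     print(epochs, probe_transferability(path, train_dl, val_dl, num_classes=10))
```

Plotting the returned accuracy against pre-training epochs would surface the kind of monotone improvement in transferability that the study reports.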

The session also explores how fine-tuning strategies affect performance under different data availability regimes. When target datasets are limited, adapting only the higher-level layers provides significant gains, while full-network fine-tuning offers additional improvements in data-rich environments. These results provide practical guidance for balancing computational cost and performance in real-world transfer learning workflows.
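A minimal sketch of that trade-off, assuming a torchvision ResNet-50 backbone (the layer split and regime names are illustrative choices, not the authors' exact recipe):

```python
import torch.nn as nn
import torchvision.models as models

def configure_finetuning(num_classes, data_regime="low"):
    """Illustrative policy: adapt only the top of the network when target
    data is scarce; leave everything trainable when data is plentiful."""
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new task head
    if data_regime == "low":
        for name, p in model.named_parameters():
            # Keep only the last residual stage and the new head trainable.
            p.requires_grad = name.startswith(("layer4", "fc"))
    # "high" regime: full fine-tuning, every parameter stays trainable.
    return model

# model = configure_finetuning(num_classes=10, data_regime="low")
# trainable = [p for p in model.parameters() if p.requires_grad]
```

Freezing the lower stages also shrinks the optimizer state and backward pass, which is where most of the computational savings in the low-data regime come from.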

A central contribution of the research lies in characterizing the nature of internal feature representations. Through layer-wise entropy analysis and filter selectivity studies, the findings demonstrate that CNNs primarily develop distributed encodings rather than highly specialized “grandmother-cell” detectors. Only a small subset of object categories triggers highly selective filter responses. This distributed structure helps explain the robustness and generalization properties that make CNNs effective across diverse tasks.
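One plausible way to run such an analysis is sketched below in NumPy. The entropy and selectivity definitions used here are common choices and an assumption on our part; the study's exact metrics are not specified in this announcement:

```python
import numpy as np

def filter_selectivity(acts, labels):
    """acts: (num_images, num_filters) mean-pooled activations for one layer;
    labels: (num_images,) integer class ids. Returns per-filter class entropy
    (high = distributed code) and a selectivity index (near 1 = a
    'grandmother-cell'-like response to a single class)."""
    classes = np.unique(labels)
    # Mean response of each filter to each class: shape (C, F).
    class_means = np.stack([acts[labels == c].mean(axis=0) for c in classes])
    class_means = np.clip(class_means, 1e-12, None)
    p = class_means / class_means.sum(axis=0, keepdims=True)
    entropy = -(p * np.log(p)).sum(axis=0)            # per-filter, over classes
    top = class_means.max(axis=0)                     # strongest class response
    rest = (class_means.sum(axis=0) - top) / (len(classes) - 1)
    selectivity = (top - rest) / (top + rest)
    return entropy, selectivity

# A distributed code shows high entropy / low selectivity for most filters;
# only a handful of filters should approach selectivity ~ 1.
```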

The study further distinguishes the roles of feature magnitude and spatial information. Controlled ablation experiments show that classification performance depends more on activation patterns than precise feature magnitudes, enabling compact representations with minimal degradation. In contrast, object detection tasks rely heavily on spatial organization, underscoring the importance of preserving spatial structure in detection pipelines and edge deployment systems.
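The two ablations can be approximated with simple feature transforms. The PyTorch sketch below (hypothetical helpers, not the authors' code) separates the on/off activation pattern from its magnitudes, and destroys spatial layout while keeping magnitudes intact:

```python
import torch

def binarize(feat):
    """Keep only the on/off activation pattern; discard magnitudes."""
    return (feat > 0).float()

def spatial_shuffle(feat):
    """Keep magnitudes but destroy spatial organization: one random
    permutation of the H*W locations, applied to every feature map."""
    n, c, h, w = feat.shape
    perm = torch.randperm(h * w)
    return feat.flatten(2)[..., perm].reshape(n, c, h, w)

# Feeding binarize(features) to a classifier head should cost little accuracy
# if patterns matter more than magnitudes, while spatial_shuffle(features)
# should cripple a detection head that depends on spatial structure.
```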

From an information systems and operations management perspective, these insights provide actionable guidance for designing adaptable and resource-efficient vision models. Understanding how feature hierarchies evolve allows practitioners to optimize pre-training duration, fine-tuning depth, and model compression strategies without sacrificing performance.

Author and Affiliation
Mayank Jha, Amazon Robotics

Delivered virtually at the SAM International Business Conference, this session offers both conceptual clarity and applied recommendations for researchers and practitioners working in deep learning, transfer learning, and scalable vision systems. If you are designing adaptable computer vision pipelines or deploying models in resource-constrained environments, this research provides a practical blueprint for improving transferability and efficiency. Learn more about this presentation and register to attend at www.samnational.org/conference.