Data on Demand
From Synthetic Data to Reality Capture for the AI Revolution
Thursday, 7 March 2024
While pre-trained AI models provide a convenient starting point, their performance is limited by the quality and coverage of their training data. This webinar explores how techniques beyond using real data alone can enable effective training and mitigate bias.
Overview
Pre-trained models often fail on edge cases when sufficient real-world data is unavailable, and they do not work for domains that lack adequate sources of publicly available data. Users who rely on pre-trained models alone may not realize that barriers to innovation and shortfalls in model performance can be overcome by combining real data with synthetic data, derived through generative and physics-based techniques, to supplement datasets and tune models. Better data curation and generation can yield diverse, representative data that promises more ethical and private data sources while reducing the amount of real data that must be collected.
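By way of illustration only (this sketch is not part of the webinar material), one common pattern for supplementing real data with synthetic data is to blend the two sets at training time. In the Python sketch below, the folder paths, the ImageFolder layout, and the 2x weighting of synthetic samples are assumptions chosen for the example, not a recommended workflow.

    # Minimal sketch, assuming ImageFolder-style real and synthetic datasets
    # with matching class sub-folders; paths and the 2x weight are placeholders.
    import torch
    from torch.utils.data import ConcatDataset, DataLoader, WeightedRandomSampler
    from torchvision import datasets, transforms

    tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

    real_ds = datasets.ImageFolder("data/real", transform=tf)            # collected imagery
    synthetic_ds = datasets.ImageFolder("data/synthetic", transform=tf)  # generated imagery

    combined = ConcatDataset([real_ds, synthetic_ds])

    # Up-weight synthetic samples so they help fill gaps in real-world coverage.
    weights = torch.cat([torch.ones(len(real_ds)), 2.0 * torch.ones(len(synthetic_ds))])
    sampler = WeightedRandomSampler(weights, num_samples=len(combined), replacement=True)

    loader = DataLoader(combined, batch_size=32, sampler=sampler)
    for images, labels in loader:
        pass  # a standard supervised training step would go here

Varying the sampling weights (or downsampling the synthetic set) is one simple way to measure how the real/synthetic mix affects validation accuracy.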
Key Takeaways:
Attend this webinar to learn:
- How insufficient data leads to edge case failures and the exclusion of underrepresented demographics.
- Options for processing and augmenting real datasets to improve model training, including through the use of synthetic data.
- Considerations when relying on off-the-shelf datasets and models as foundations for production use of AI.
You’ll leave equipped to enhance your AI with custom datasets tailored to your needs. The future of AI relies on data - help it reach its full potential.
Who Should Attend:
- AI and machine learning researchers/engineers looking to improve training data and models
- Data scientists seeking to optimize data collection and model performance
- Computer vision specialists working on object, facial, and gesture recognition
- Robotics engineers needing robust real-world training data
- Natural language processing experts focused on speech and dialog systems
- Autonomous vehicle developers requiring diverse driving data
- Retailers and marketers using AI for demand forecasting and recommendations
- Security firms deploying AI for threat detection, surveillance, and access control
- Manufacturers training AI on custom data for defect detection and predictive maintenance
- Government agencies evaluating ethical uses of emerging AI capabilities
- Nonprofits using AI for social good initiatives in fairness and inclusion
- Academic researchers studying data-driven AI and human-AI interaction
- Investors and venture capitalists tracking the latest AI opportunities and applications
Attending this webinar will give these professionals comprehensive insight into the power of custom data and how it can be put to work in AI.
AGENDA
Thursday, 7 March 2024
10:00am - 11:30am
All times are in US Mountain Time (Denver)
10:10am
Data Quality Assessment for Computer Vision Model Training
Marc Bosch, Computer Vision Science Director and Managing Director, Accenture Federal Services
A major challenge in CV/ML is the quality of the training data. The training data used in current supervised deep learning techniques is almost exclusively responsible for the success or failure of a system designed to solve very specific tasks, and the ability to capture the true distribution of the data is still more alchemy than science. In this talk, Dr. Bosch will discuss several strategies to overcome this challenge during data collection, curation, and refinement.
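As a generic illustration (not a summary of Dr. Bosch's methods), the sketch below shows two quick checks that often surface training-data quality problems before any model is trained: class imbalance and exact-duplicate images. The folder layout, file extension, and dataset path are assumptions for the example.

    # Minimal sketch, assuming a dataset organized as one sub-folder per class
    # containing .jpg files; both the layout and the extension are placeholders.
    import hashlib
    from collections import Counter
    from pathlib import Path

    def class_balance(root: Path) -> Counter:
        """Count images per class to reveal under-represented categories."""
        return Counter(p.parent.name for p in root.rglob("*.jpg"))

    def exact_duplicates(root: Path) -> dict:
        """Group byte-for-byte identical files, which can leak across train/test splits."""
        by_hash = {}
        for p in root.rglob("*.jpg"):
            digest = hashlib.md5(p.read_bytes()).hexdigest()
            by_hash.setdefault(digest, []).append(p)
        return {h: paths for h, paths in by_hash.items() if len(paths) > 1}

    if __name__ == "__main__":
        root = Path("data/real")  # hypothetical dataset location
        print("Samples per class:", class_balance(root))
        print("Duplicate groups found:", len(exact_duplicates(root)))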
10:25am
Collecting Custom Data
Peter Atalla, Founder and CEO, Voxelmaps Inc
- Approaches for targeted data collection:
  - 2D images, properly sampled and labeled
  - 3D scans to provide depth information
  - Soundscapes to improve voice recognition
- Why data diversity and coverage are crucial
- Privacy and ethics considerations for data collection
10:40am
Unlimited Synthetic Data for Training and Tuning AI
Chris Andrews, COO and Head of Product, Rendered.ai
Chris Andrews will discuss how data scientists, data engineers, and developers can use Rendered.ai's software-as-a-service platform to create and deploy unlimited, customized synthetic data generation for computer vision machine learning and artificial intelligence workflows, reducing expense, closing gaps, and overcoming the bias, security, and privacy issues that come with using or acquiring real data. He will cover how the platform makes it easier for users to create synthetic data for enterprise workflows by providing a collaborative environment, samples, and cloud resources to quickly get started defining new data generation applications, creating datasets in high-performance compute environments, and comparing existing and synthetic datasets to optimize AI training and validation.
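For readers unfamiliar with the general idea, the sketch below illustrates configurable synthetic data generation in miniature; it is a generic example, not the Rendered.ai platform or API, and the output directory, image size, and annotation format are illustrative choices.

    # Minimal sketch of procedural synthetic data generation: render a simple
    # scene and emit an exact bounding-box label for each image. Output paths,
    # image size, and the JSON annotation format are illustrative assumptions.
    import json
    import random
    from pathlib import Path
    from PIL import Image, ImageDraw

    out_dir = Path("synthetic_out")
    out_dir.mkdir(exist_ok=True)

    annotations = []
    for i in range(100):
        img = Image.new("RGB", (256, 256), color=(200, 200, 200))
        draw = ImageDraw.Draw(img)

        # Place one rectangular "object" at random; because the generator chose
        # its extent, the bounding-box label is free and pixel-accurate.
        w, h = random.randint(20, 80), random.randint(20, 80)
        x, y = random.randint(0, 256 - w), random.randint(0, 256 - h)
        draw.rectangle([x, y, x + w, y + h], fill=(180, 30, 30))

        name = f"img_{i:04d}.png"
        img.save(out_dir / name)
        annotations.append({"image": name, "bbox": [x, y, w, h], "category": "box"})

    (out_dir / "annotations.json").write_text(json.dumps(annotations, indent=2))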
10:55am
Panel Discussion / Audience Q&A
Panelists:
Marc Bosch, Computer Vision Science Director and Managing Director, Accenture Federal Services
Peter Atalla, Founder and CEO, Voxelmaps Inc
Chris Andrews, COO and Head of Product, Rendered.ai
A panel discussion on data on demand, spanning synthetic data through reality capture for the AI revolution, with live audience Q&A.
Conference Format
Hosted on an exciting and interactive virtual event platform, this event series features a virtual auditorium, plus audience interaction via Q&A and Polls.
This event may qualify for GIS Certification Institute continuing education credits.
To apply for GISP points, visit www.gisci.org and self-submit the event curriculum for approval.
SPEAKERS
Nadine Alameh
Executive Director, Taylor Geospatial Institute
Nadine Alameh is the Executive Director of the Taylor Geospatial Institute, a position she assumed in September 2023. A world-renowned geospatial expert, Nadine was previously the CEO and president of the Open Geospatial Consortium. She is also an appointed member of the U.S. Department of Interior’s National Geospatial Advisory Committee and a board member of the United Nations Geospatial Global Information Management Private Sector Network. Before taking the helm at OGC, Nadine held various roles in industry, from the chief architect for innovation in Northrop Grumman’s Civil Solutions Unit, to CEO of an aviation data exchange startup, to senior technical advisor to NASA’s Applied Science Program. In the early 2000s, she launched and led several successful startups. Nadine has received numerous honors during her career, including the 2019 Geomatics Canada Leadership in Diversity Award, the 2022 Geospatial World Diversity Champion of the Year Award, and the 2023 Women in Technology Leadership Award in the nonprofit and academia category. Nadine earned a doctorate in computer and information systems engineering from the Massachusetts Institute of Technology, where she also earned master’s degrees in civil and environmental engineering and city planning. She earned a bachelor’s degree in engineering from the American University of Beirut.
Marc Bosch
Computer Vision Science Director and Managing Director
Accenture Federal Services
Marc Bosch is a Computer Vision lead and Managing Director at Accenture. He is currently serving as a principal investigator in several IARPA and DARPA programs. Dr. Bosch's research interests include image/video processing, computer vision, machine learning, and computational photography. In 2012 he joined Texas Instruments as a computer vision/computational photography engineer. From 2013 to 2016 he was a senior video engineer at Qualcomm, Inc., and from 2016 to 2019 he was at the Johns Hopkins University Applied Physics Laboratory (APL). He received a degree in telecommunications engineering from the Technical University of Catalonia (UPC), Barcelona, Spain, in 2007, and M.S. and Ph.D. degrees in electrical and computer engineering from Purdue University, West Lafayette, IN, in 2009 and 2012, respectively.
Chris Andrews
COO and Head of Product, Rendered.ai
Chris Andrews is COO and Head of Product at Rendered.ai, helping customers overcome the costs and limitations of using real-world data to train AI and ML systems. Chris previously led a team at Esri responsible for 3D, Defense, Urban Planning, and AEC products. Prior to Esri, Chris was the lead product manager for Autodesk’s InfraWorks.
Peter Atalla
Founder and CEO, Voxelmaps
Peter Atalla is CEO and Founder of Voxelmaps Inc, a leading GIS mapping company building 4D Maps for Machines. He has been in the mapping industry for 17 years and has led large-scale international mapping projects for some of the biggest technology companies in the world. Previously he was CEO and founder of Navmii, a navigation and mapping company with over 30 million users that mapped 180 countries. Peter is a technology entrepreneur with two successful exits.
SPONSORS and SUPPORTERS
Voxelmaps is building the world's most accurate 4D volumetric model of the earth, combining high-resolution scans from the latest LiDAR and HD imaging sensors, fused with temporal data. The result is a new form of mapping that provides superior levels of accuracy and information about the areas mapped. Voxelmaps has developed a technology that splits the planet into a dense matrix of multi-resolution voxels, each with a permanent location and address. Automated feature extraction is performed using AI software tools developed by Voxelmaps. Voxelmaps Inc was originally a spin-off from Navmii, one of the leading navigation and mapping companies in the world. Voxelmaps' mission is to build a true 4D volumetric model of the planet, combining visual, spatial, and temporal data to create the most detailed map of the world. It does this using a unique patent-pending technology based on MRVOGs (Multi-Resolution Voxel Occupancy Grids). The company is headquartered in Austin, Texas, USA and works with some of the world's largest companies across North America and Europe.
Rendered.ai was established after the realization that many industries were about to explode with massive investments in hardware-intensive imagery collection and analysis. Without the ability to access data during the design and development process, organizations are unable to validate analysis pipelines and business models before launching expensive hardware, sometimes literally, into the market. From space-based satellite imaging to manufacturing and security inspection, computer vision hardware and applications are proliferating across every industry. Relying on collected data alone carries risks and costs due to dataset biases, and real data is simply not available for new sensors and platforms. Simulating sensor behavior and data output is a well-established technique used during the design and inception of new equipment, but historically it was not done at a scale sufficient to generate annotated data for training computer vision algorithms. Rendered.ai was founded to connect simulation with data generation for computer vision, and the team quickly demonstrated the potential of using simulated data to train Artificial Intelligence and Machine Learning systems with customers in the geospatial industry. Along the way, the team observed that simulated, or synthetic, data requires an iterative workflow best supported by a platform that encompasses simulation tools, compute management, and domain-specific content. The Rendered.ai platform as a service has been in production since late 2021 and is helping customers across multiple industries and countries reduce bias and overcome cost and availability issues when training algorithms to solve critical problems.
AssetMapping and AI/ML Events are produced by ConnectMii Events.
© 2024 ConnectMii Events Inc.