Activity
-
Shaped (YC W22) has raised $8 million in Series A funding to make AI-powered personalization radically easier for businesses. "Consumers expect…
Liked by Sammy Sidhu
-
We are thrilled to announce that Shaped has secured $8 million USD in Series A funding to make AI-powered personalization easier. The round was led…
Liked by Sammy Sidhu
-
Check out TechCrunch's coverage of our $8 million Series A round: "Whether it’s an online marketplace, store, or social media platform, virtually…
Liked by Sammy Sidhu
Experience & Education
Publications
-
Scalable Primitives for Generalized Sensor Fusion in Autonomous Vehicles
NeurIPS 2021, Machine Learning for Autonomous Driving Workshop
In autonomous driving, there has been an explosion in the use of deep neural networks for perception, prediction and planning tasks. As autonomous vehicles (AVs) move closer to production, multi-modal sensor inputs and heterogeneous vehicle fleets with different sets of sensor platforms are becoming increasingly common in the industry. However, neural network architectures typically target specific sensor platforms and are not robust to changes in input, making the problems of scaling and model deployment particularly difficult. Furthermore, most players still treat the optimization of software and hardware as entirely independent problems. We propose a new end-to-end architecture, Generalized Sensor Fusion (GSF), designed so that both sensor inputs and target tasks are modular and modifiable. This enables AV system designers to easily experiment with different sensor configurations and methods, and opens up the ability to deploy on heterogeneous fleets using the same models shared across a large engineering organization. Using this system, we report experimental results in which we demonstrate near-parity of an expensive high-density (HD) LiDAR sensor with a cheap low-density (LD) LiDAR plus camera setup on the 3D object detection task. This paves the way for the industry to jointly design hardware and software architectures as well as large fleets with heterogeneous configurations.
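The core idea of making sensor inputs modular can be illustrated with a minimal sketch (hypothetical code, not the published GSF implementation): per-sensor encoders registered by name feed a shared fusion step, so the same downstream code serves vehicles with different sensor configurations.

```python
# Illustrative sketch of modular sensor fusion: each sensor type has its own
# encoder, and a vehicle fuses whichever sensors it actually carries.
def camera_encoder(frame):
    # Toy "embedding": mean intensity per channel of a nested-list frame.
    return [sum(ch) / len(ch) for ch in frame]

def lidar_encoder(points):
    # Toy "embedding": centroid of an (x, y, z) point cloud.
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(3)]

ENCODERS = {"camera": camera_encoder, "lidar": lidar_encoder}

def fuse(sensor_inputs):
    """Concatenate embeddings from whichever sensors are present."""
    features = []
    for name, data in sensor_inputs.items():
        features.extend(ENCODERS[name](data))
    return features

# A camera-only vehicle and a camera+lidar vehicle share the same fusion code.
cam_only = fuse({"camera": [[0.1, 0.5], [0.2, 0.4], [0.3, 0.3]]})
cam_lidar = fuse({"camera": [[0.1, 0.5], [0.2, 0.4], [0.3, 0.3]],
                  "lidar": [(0.0, 0.0, 0.0), (2.0, 2.0, 2.0)]})
print(len(cam_only), len(cam_lidar))
```

Swapping a sensor in or out only changes the input dictionary, which is the modularity property the abstract describes.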
-
DSCnet: Replicating Lidar Point Clouds With Deep Sensor Cloning
Computer Vision and Pattern Recognition (CVPR)
Convolutional neural networks (CNNs) have become increasingly popular for solving a variety of computer vision tasks, ranging from image classification to image segmentation. Recently, autonomous vehicles have created a demand for depth information, which is often obtained using hardware sensors such as Light Detection and Ranging (LIDAR). Although it can provide precise distance measurements, most LIDARs are still far too expensive to sell in mass-produced consumer vehicles, which has motivated methods to generate depth information from commodity automotive sensors like cameras. In this paper, we propose an approach called Deep Sensor Cloning (DSC). The idea is to use convolutional neural networks in conjunction with inexpensive sensors to replicate the 3D point clouds that are created by expensive LIDARs. To accomplish this, we develop a new dataset (DSCdata) and a new family of CNN architectures (DSCnets). While previous tasks such as KITTI depth prediction use interpolated RGB-D images as ground truth for training, we instead use DSCnets to directly predict LIDAR point clouds. When we compare the output of our models to a $75,000 LIDAR, we find that our most accurate DSCnet achieves a relative error of 5.77% using a single camera and 4.69% using stereo cameras.
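A plausible reading of the relative-error numbers reported above is a mean of per-point |predicted − measured| / measured over the LIDAR distances; this is a hedged sketch of that metric, not the paper's evaluation code, and the depth values are made up.

```python
# Mean relative error between predicted depths and LIDAR ground truth.
def relative_error(pred_depths, lidar_depths):
    errs = [abs(p - g) / g for p, g in zip(pred_depths, lidar_depths)]
    return sum(errs) / len(errs)

lidar = [10.0, 20.0, 40.0]   # ground-truth distances in meters (illustrative)
pred = [10.5, 19.0, 42.0]    # depths predicted from camera images
print(round(relative_error(pred, lidar) * 100, 2), "% relative error")
```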
-
Squeezenas: Fast neural architecture search for faster semantic segmentation
International Conference on Computer Vision
For real-time applications utilizing Deep Neural Networks (DNNs), it is critical that the models achieve high accuracy on the target task and low-latency inference on the target computing platform. While Neural Architecture Search (NAS) has been effectively used to develop low-latency networks for image classification, there has been relatively little effort to use NAS to optimize DNN architectures for other vision tasks. In this work, we present what we believe to be the first proxyless hardware-aware search targeted for dense semantic segmentation. With this approach, we advance the state-of-the-art accuracy for latency-optimized networks on the Cityscapes semantic segmentation dataset. Our latency-optimized small SqueezeNAS network achieves 68.02% validation class mIOU with less than 35 ms inference times on the NVIDIA Xavier. Our latency-optimized large SqueezeNAS network achieves 73.62% class mIOU with less than 100 ms inference times. We demonstrate that significant performance gains are possible by utilizing NAS to find networks optimized for both the specific task and the inference hardware. We also present detailed analysis comparing our networks to recent state-of-the-art architectures. The SqueezeNAS models are available for download here: https://github.com/ashaw596/squeezenas
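The selection principle behind hardware-aware NAS can be sketched as "maximize accuracy subject to a latency budget on the target device." This toy ranking uses the mIOU/latency figures from the abstract (the "huge" entry and the selection code are illustrative, not from the paper, which searches architectures rather than picking from a fixed list):

```python
# Pick the most accurate candidate that fits a latency budget on the target
# hardware. Candidate list is illustrative; mIOU values for "small" and
# "large" come from the abstract above.
candidates = [
    {"name": "small", "miou": 68.02, "latency_ms": 34.0},
    {"name": "large", "miou": 73.62, "latency_ms": 99.0},
    {"name": "huge",  "miou": 74.10, "latency_ms": 180.0},  # hypothetical
]

def best_under_budget(cands, budget_ms):
    feasible = [c for c in cands if c["latency_ms"] <= budget_ms]
    if not feasible:
        return None
    return max(feasible, key=lambda c: c["miou"])["name"]

print(best_under_budget(candidates, 35))   # small
print(best_under_budget(candidates, 100))  # large
```

A real NAS run scores thousands of sampled architectures this way, with latency measured (or predicted) on the deployment device rather than a proxy.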
-
ICU Artifact Detection via Bayesian Inference
MUCMD
We explore the use of probabilistic modeling to analyze data, specifically arterial blood pressure (ABP) and intracranial pressure (ICP) of head trauma patients, from hospital intensive care units (ICUs). Conventional bedside instruments using data-driven algorithms to track the status of a patient can yield a false alarm rate as high as 90% [Aleks et al., 2009]. Expanding upon the existing work of Norm Aleks, Stuart Russell, Michael G. Madden, and others, we use Dynamic Bayesian Networks (DBN) and Sequential Monte Carlo (SMC) methods to reduce false alarm rates by inferring various artifacts in ABP and ICP data.
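As a rough illustration of the SMC idea (a minimal bootstrap particle filter, not the paper's DBN model; all numbers are made up), one can track a latent "true pressure" with particles and flag a reading as a likely artifact when no particle can explain it:

```python
# Minimal bootstrap particle filter sketch for artifact flagging.
import math
import random

random.seed(0)

def likelihood(obs, state, sigma=2.0):
    # Gaussian observation model: how well a particle explains the reading.
    return math.exp(-((obs - state) ** 2) / (2 * sigma ** 2))

def step(particles, obs, process_noise=0.5):
    # Propagate particles, weight by likelihood, then resample.
    moved = [p + random.gauss(0, process_noise) for p in particles]
    weights = [likelihood(obs, p) for p in moved]
    total = sum(weights)
    if total < 1e-12:
        # No particle explains the reading: flag as artifact, keep the belief.
        return particles, True
    probs = [w / total for w in weights]
    return random.choices(moved, weights=probs, k=len(moved)), False

particles = [80.0 + random.gauss(0, 1) for _ in range(200)]  # ~80 mmHg belief
for obs in [80.5, 81.0, 250.0, 81.2]:  # 250 mmHg: e.g. a transducer flush
    particles, artifact = step(particles, obs)
    print(obs, "artifact" if artifact else "ok")
```

Because the filter keeps its belief through the artifact, the alarm logic can be driven by the inferred state rather than the raw reading, which is how this family of methods suppresses false alarms.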
Patents
-
Systems and methods for training machine models with augmented data
Issued US 16598956
Systems and methods for training machine models with augmented data. An example method includes identifying a set of images captured by a set of cameras while affixed to one or more image collection systems. For each image in the set of images, a training output for the image is identified. For one or more images in the set of images, an augmented image for a set of augmented images is generated. Generating an augmented image includes modifying the image with an image manipulation function that maintains camera properties of the image. The augmented training image is associated with the training output of the image. A set of parameters of the predictive computer model are trained to predict the training output based on an image training set including the images and the set of augmented images.
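The claim structure can be sketched in a few lines (hypothetical illustration, not the patented implementation): apply a manipulation that preserves camera properties, here a horizontal flip, which leaves focal length and exposure untouched, and carry each image's training output over to its augmented copy.

```python
# Augment a labeled image set with flipped copies that share the original label.
def hflip(image):
    """Flip each row of a nested-list 'image' left-to-right."""
    return [row[::-1] for row in image]

def augment_dataset(samples):
    """samples: list of (image, label). Returns originals plus flipped copies."""
    augmented = []
    for image, label in samples:
        augmented.append((image, label))
        augmented.append((hflip(image), label))  # label carried across
    return augmented

data = [([[1, 2], [3, 4]], "car")]
out = augment_dataset(data)
print(len(out), out[1][0])  # 2 [[2, 1], [4, 3]]
```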
-
Multi-channel sensor simulation for autonomous control systems
Issued US 15855749
An autonomous control system combines sensor data from multiple sensors to simulate sensor data from high-capacity sensors. The sensor data contains information related to physical environments surrounding vehicles for autonomous guidance. For example, the sensor data may be in the form of images that visually capture scenes of the surrounding environment, geo-location of the vehicles, and the like. The autonomous control system simulates high-capacity sensor data of the physical environment from replacement sensors that may each have lower capacity than high-capacity sensors. The high-capacity sensor data may be simulated via one or more neural network models. The autonomous control system performs various detection and control algorithms on the simulated sensor data to guide the vehicle autonomously.
-
Data synthesis for autonomous control systems
Issued US 10678244
An autonomous control system generates synthetic data that reflect simulated environments. Specifically, the synthetic data is a representation of sensor data of the simulated environment from the perspective of one or more sensors. The system generates synthetic data by introducing one or more simulated modifications to sensor data captured by the sensors or by simulating the sensor data for a virtual environment. The autonomous control system uses the synthetic data to train computer models for various detection and control algorithms. In general, this allows autonomous control systems to augment training data to improve performance of computer models, simulate scenarios that are not included in existing training data, and/or train computer models that remove unwanted effects or occlusions from sensor data of the environment.
-
Neural networks for embedded devices
Filed US 16559483
A neural network architecture is used that reduces the processing load of implementing the neural network. This network architecture may thus be used for reduced-bit processing devices. The architecture may limit the number of bits used for processing and reduce processing to prevent data overflow at individual calculations of the neural network. To implement this architecture, the number of bits used to represent inputs at levels of the network and the related filter masks may also be modified to ensure the number of bits of the output does not overflow the resulting capacity of the reduced-bit processor. To additionally reduce the load for such a network, the network may implement a “starconv” structure that permits the incorporation of nearby nodes in a layer to balance processing requirements and permit the network to learn from context of other nodes.
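The overflow constraint described above has a standard back-of-the-envelope form: accumulating n products of an a-bit input and a w-bit weight needs roughly a + w + ⌈log₂ n⌉ bits, so input and weight widths must be chosen to fit the processor's accumulator. A small sketch (unsigned, illustrative; not the patented method):

```python
# Bits needed to accumulate n_terms products without overflow, and a check
# against a given accumulator width.
import math

def accumulator_bits(input_bits, weight_bits, n_terms):
    return input_bits + weight_bits + math.ceil(math.log2(n_terms))

def fits(input_bits, weight_bits, n_terms, acc_bits=32):
    return accumulator_bits(input_bits, weight_bits, n_terms) <= acc_bits

# A 3x3x64 convolution tap (576 terms) with 8-bit inputs and weights:
print(accumulator_bits(8, 8, 576), fits(8, 8, 576))  # 26 True
```

If the result exceeds the accumulator width, one reduces input or weight bits (or splits the accumulation), which is the trade-off the abstract describes.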
-
Optimizing neural network structures for embedded systems
Filed US 16522411
A model training and implementation pipeline trains models for individual embedded systems. The pipeline iterates through multiple models and estimates the performance of the models. During a model generation stage, the pipeline translates the description of the model together with the model parameters into an intermediate representation in a language that is compatible with a virtual machine. The intermediate representation is agnostic to, and independent of, the configuration of the target platform. During a model performance estimation stage, the pipeline evaluates the performance of the models without training the models. Based on the analysis of the performance of the untrained models, a subset of models is selected. The selected models are then trained and the performance of the trained models is analyzed. Based on that analysis, a single model is selected for deployment to the target platform.
Courses
-
Advanced Algorithms and Intractable Problems (CS Theory)
CS170
-
Advanced Topics in Computer Systems
CS262
-
Artificial Intelligence
CS188
-
Computer Architecture
CS61C
-
Data Structures
CS61B
-
Databases
CS186
-
Discrete Math and Probability Theory
CS70
-
Economics
ECON1
-
Financial Economics
ECON136
-
Machine Learning
CS189
-
Microelectronics
EE40
-
Operating Systems
CS162
-
Real Time Learning Systems
CS294
-
Signal Processing and Systems
EE20
-
Structure and Interpretation of Computer Programs
CS61A
Languages
-
English
-
Organizations
-
Computer Science Mentors
Head Data Structures Advisor (CS61B)
- Present -
Berkeley Finance Club
Leadership
- Present
More activity by Sammy
-
We're looking for a Developer Relations Manager at Eventual to help manage and grow the Daft community! We're looking for folks that are: 🔥…
Shared by Sammy Sidhu
-
🌟 Week 7 Recap: Migrating to Unity Catalog 🌟 This week, we delved into migrating to Databricks Unity Catalog from Hive Metastore. Highlights…
Liked by Sammy Sidhu
-
Fireworks is raising $52M, led by Sequoia Capital! "We’re using the funding to make a shift towards compound AI systems that can orchestrate across…
Liked by Sammy Sidhu
-
If you find yourself constantly tweaking instructions to maintain your AI model's output quality, you're likely grappling with a common yet maddening…
Liked by Sammy Sidhu
-
A cool engine I came across a while ago. Clearing my LinkedIn drafts.
Liked by Sammy Sidhu
-
There are many ways to use Delta Lake without #Spark. Dedicated Delta Connectors let you use Delta Lake from engines like #Flink, #Hive, #Trino…
Liked by Sammy Sidhu
-
RunLLM is seriously impressive! Our entire team was huddled around a laptop trying to make it hallucinate but we were unsuccessful! Really cool…
Shared by Sammy Sidhu
-
RunLLM is now live on the Daft (Eventual) docs site. Thanks to Sammy Sidhu for the help. Check it out! 🙂
Liked by Sammy Sidhu
-
`unitycatalog-python` is a great Python package to start building on top of the Unity Catalog REST API. With this library you can easily: - create…
Liked by Sammy Sidhu