Recently I interviewed Nathaniel Simard, founder & CEO of Tracel Technologies and inventor of the Burn ML framework:

Jan: Nathaniel, we're excited to have you as our guest today! To kick off our conversation, could you please introduce yourself and share a bit about your background and what got you into ML?

Nathaniel: Sure, I'm the creator of Burn, a deep learning framework written in Rust, and the founder of Tracel AI. I started coding in my first year of university, where I was studying mechanical engineering, but I quickly switched to software engineering, since I instantly fell in love with programming. I then explored different facets of the field, from backend to frontend development, and decided to start my career as a consultant focused on software quality. After some time, I wanted to go deeper into AI, since I had always been interested in the process of learning, so I enrolled in a master's degree at MILA.

J: According to GitHub, you started working on Burn in the summer of 2022. Since then it has already earned over 7,000 stars on GitHub. What was your initial motivation to develop a new ML framework?

N: I always had a side project going on, mostly for fun and to learn new things. I wanted to explore asynchronous and sparse neural network architectures, where each sub-network can learn and interact with other sub-networks asynchronously and independently. I wasn't able to actually create something useful because I needed fine control over the gradients and the concurrency primitives, which is not easily done with Python and PyTorch. At the same time, I was working on machine translation models at my job, and it was quite painful to put models into production. So I decided to turn my side project into a new deep learning framework with more flexibility around gradients and concurrency primitives, one that would also be more reliable and easier to deploy on any system.

J: Amazing to see that this was born out of a side project! I feel the struggle with concurrency in Python, and this is something where Rust really shines. What are some of the other key features that make Burn special?

N: I think there are two things that really set Burn apart. First, almost all neural network structures are generic over the backend. The goal is that you can ship your model with almost no dependencies, and anybody can run it on their hardware with the most appropriate backend, even embedded devices without an operating system. Second, Burn really tries to push the boundaries of what is possible in terms of performance and flexibility. It offers a fully eager API, but also operation fusion and other optimizations that are normally only found in static-graph frameworks. The objective is that you don't have to choose between portability, flexibility, and performance; you can have it all!

Read the rest of the interview, with Nathaniel's plans for Burn and trends in AI, here: https://lnkd.in/dKv9ztMm

#ml #ai
-
Full Stack Developer: Crafting Responsive Websites and APIs | Offering free consultations till 15th May
Hey there, fellow software developers! Generative AI is shaping the future of software development, and when it comes to this exciting realm, Python is our trusted companion. But with a sea of libraries out there, it can be a tad overwhelming to navigate the choices. That's why I'm excited to share my top 10 Python libraries and tools for Generative AI.

1. TensorFlow - This open-source gem is a powerhouse for machine learning and Generative AI. It's known for its ease of use and boasts a thriving developer community.
2. PyTorch - Another big player in the machine learning arena, PyTorch stands out for its simplicity and user-friendly approach.
3. Keras - If you love high-level neural networks and want something that plays well with TensorFlow or PyTorch, Keras is your go-to.
4. GANbreeder - Dive into GANs (Generative Adversarial Networks) with this user-friendly tool. Its feature set is impressive.
5. Deep Dream Generator - Explore surreal imagery using Deep Dream technology with this captivating tool. It's a creative playground!
6. Artbreeder - Let your artistic side shine with this tool, which allows you to evolve original artwork using GANs.
7. Music generation - Have you ever dreamt of composing your own music? GAN-based tools let you generate melodies and beats.
8. Text generation - Need unique content for your projects? Text-generation models produce text with a creative twist.
9. Image generation - Craft visuals like a digital Picasso! GAN-based tools let you generate striking images.
10. Video generation - Take your creations to the next level by generating videos. It's a fantastic way to tell stories in a fresh way!

These libraries and tools are your toolkit for crafting amazing Generative AI models. So, why wait? Dive in and unleash your creativity. Give them a spin, and watch your AI dreams come to life!

Don't forget to follow me for more captivating content on software development and technology. Let's keep the tech conversations going!

#GenerativeAI #Python #TensorFlow #PyTorch #Keras #AIInnovation #TechArtistry #SoftwareDevelopment #AI #TechWorld 🚀💬🌐
-
🚀 Generative AI - Day 19 - How to run your own Generative AI Model 👨‍🚀

Hola, my LinkedIn fam! 👋 We have traversed various applications of Generative AI, from its impact on healthcare to its transformative role in agriculture. Now it's time to empower ourselves by directly engaging with the technology. If you possess a computer, an internet connection, and a willingness to explore, then you're already well equipped to start. Let's begin!

There are two ways to do this. One is the technical way, where you actually use Python, its different libraries, and other tools to make this work. And if you are just inquisitive about using a model, I'll share two of my favourite websites where you can experiment according to your requirements.

1. The Technical Way 💻

Step 1: Preliminary requirements
- A computer with a stable internet connection
- A Python programming environment installed
- An inquisitive mindset

Step 2: Initialise your development environment
We will be using Jupyter Notebook as our code editor. It's a user-friendly interface, ideal for code-based projects. You can download it via a simple search and installation process.

Step 3: Install the necessary software packages
In your newly installed Jupyter Notebook, run the command pip install tensorflow keras. Consider this akin to setting up your workspace with essential tools before commencing any project.

Step 4: Import the pre-trained model

    from tensorflow.keras.models import load_model
    model = load_model('path/to/your/model')

Think of this as utilizing a template. This pre-trained model serves as the foundation upon which we'll build.

Step 5: Generate text
Initialisation: We will input a 'starter sentence' to begin text generation. The following simplified snippet sketches the idea; note that in practice a model expects the starter to be encoded into numbers first, not passed as a raw string (a runnable version is sketched after this post):

    starter = "Once upon a time"
    generated_text = model.predict(starter)  # simplified: encode the starter first
    print(f"Your AI-generated narrative: {starter} {generated_text}")

Step 6: Iterative enhancement
Refinement process: If the generated text does not meet your expectations, feel free to revise your starter sentence and re-execute the code. Experimentation is key to optimisation.

2. The Not-So-Technical Way

Below are two of my personal favourite websites for generating text using the power of Generative AI:

a. AI Dungeon: AI Dungeon is a website that allows you to generate text in the form of a story. You can start by typing in a prompt, and AI Dungeon will generate the rest of the story for you.

b. GPT-3 Playground: GPT-3 Playground is a website that allows you to generate text in a variety of formats, including poems, code, scripts, musical pieces, email, letters, and more.

If you are satisfied with the generated text, I encourage you to share your results. It offers an opportunity for collective learning and celebration of individual achievements in the realm of Generative AI.

P.S. The below image is a story I built with the help of AI Dungeon.

#generativeai #artificialintelligence
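Here is a minimal sketch of Steps 4-5 made end-to-end runnable. It assumes a character-level Keras model that takes a sequence of integer character ids and returns, for each position, a probability distribution over the next character; the model path, vocabulary, and output shape are assumptions for illustration, not part of the original post:

    import numpy as np
    from tensorflow.keras.models import load_model

    # Hypothetical saved model (placeholder path, as in the post).
    model = load_model('path/to/your/model')

    # Assumed vocabulary: the same character set the model was trained on.
    vocab = sorted(set("abcdefghijklmnopqrstuvwxyz ,.'"))
    char_to_id = {c: i for i, c in enumerate(vocab)}
    id_to_char = {i: c for i, c in enumerate(vocab)}

    starter = "once upon a time"
    ids = [char_to_id[c] for c in starter]

    # Generate 100 characters, feeding each prediction back into the model.
    for _ in range(100):
        x = np.array([ids])                              # batch of one sequence
        next_probs = model.predict(x, verbose=0)[0, -1]  # assumed shape: (batch, time, vocab)
        ids.append(int(np.argmax(next_probs)))

    print(f"Your AI-generated narrative: {''.join(id_to_char[i] for i in ids)}")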
-
Today we’re excited to announce the Daily + Cerebrium partnership:

🛠️ Build interactive voice, video, and AI with Daily’s AI toolkit, daily-python
🛠️ Easily deploy on Cerebrium’s serverless AI/ML infrastructure
🛠️ Leverage a wide range of models, perform low-latency inference, and scale efficiently

Daily CPTO Varun S. takes a look. Learn more: https://lnkd.in/dNRHkhJi

#AI #ML #LLMs #WebRTC #llama2 #LLM #OpenAI #GPT4 #python
Cerebrium + Daily: Simplifying deployments for your AI-powered voice and video apps
daily.co
-
Lead Design Engineer | BIW structure | Machine Learning | Deep Learning | Computer Vision | NLP | Statistics | Tableau | PowerBI | Science engineer | Researcher
The Impact of Exception Logging in Debugging: Insights from a Machine Learning Project

- Introduction: I want to share a recent experience from my end-to-end machine learning project. It involves two distinct approaches to handling exceptions in a data ingestion pipeline, offering lessons about the necessity of transparency and logging.

- The First Approach (Code 01): In the initial stage, I set up a data ingestion process in my ML pipeline. The code was structured to manage exceptions by logging them using `logger.exception(e)` before re-raising them with `raise e`. This approach ensured comprehensive recording of any exceptions, offering valuable insights for understanding and troubleshooting the process.

- The Revised Approach (Code 02): Later, I noticed an oversight. In the revised code, exceptions were re-raised immediately using `raise e`, without prior logging. This change, although small, had a significant consequence: the absence of detailed exception logs, which meant potential difficulties in efficiently diagnosing and resolving issues. (Both approaches are sketched below.)

- The Lesson Learned: This scenario highlighted the critical role of detailed logging in machine learning pipelines. Although both code versions achieve the same functional goal, the second approach's lack of detailed exception logging revealed a blind spot when diagnosing and understanding problems. In machine learning, where complexity is a given, comprehensive logging is essential.

- Key Takeaway: It's crucial to maintain thorough logging, particularly when dealing with exceptions in ML pipelines. Effective logging is about more than just recording errors; it provides a clear, traceable record that aids in understanding and refining your models.

- Reflection: This situation clearly showed the significant impact that detailed logging can have in ML #pipelines, proving invaluable for troubleshooting and enhancing both transparency and efficiency in our models.

#machinelearning #datascience #ai #artificialintelligence #python
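A minimal sketch contrasting the two approaches described above; the ingestion function and file handling are illustrative stand-ins, not the original project's code:

    import logging

    logger = logging.getLogger(__name__)

    # Code 01: log the full traceback, then re-raise.
    def ingest_data_v1(path):
        try:
            with open(path) as f:
                return f.read()
        except Exception as e:
            logger.exception(e)  # writes the message and traceback to the log
            raise e

    # Code 02: re-raise immediately. Functionally the same to the caller,
    # but nothing is written to the log, so the failure leaves no trace here.
    def ingest_data_v2(path):
        try:
            with open(path) as f:
                return f.read()
        except Exception as e:
            raise e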
-
Generative AI & Chatbots | Full Stack Web App Developer | Python | Typescript | Next.js | FASTAPI | Automation & Web Scraping | WordPress
Let’s break down the code line by line.

1. Importing the google.generativeai module: The first line, import google.generativeai as genai, imports the google.generativeai module and assigns it the alias genai. This module provides access to Google’s state-of-the-art generative AI models.

2. Configuring the API key: The second line, genai.configure(api_key="Your Api Key"), sets up the API key for accessing the generative AI models. Replace "Your Api Key" with your actual API key obtained from AI Studio.

3. Creating a generative model instance: The third line, model = genai.GenerativeModel(model_name="gemini-pro"), creates an instance of the generative model named “gemini-pro”. This model is part of the Google AI Python SDK and is designed for generating text.

4. Defining a prompt: The fourth line, prompt_parts = ["who is quaid-e-azam?"], defines a list containing a single prompt: “who is quaid-e-azam?” This prompt will be used to generate a response.

5. Generating content: The fifth line, response = model.generate_content(prompt_parts), sends the prompt to the generative model and retrieves the generated content. In this case, it will provide information about Quaid-e-Azam (Muhammad Ali Jinnah).

6. Printing the response: The sixth line, print(response.text), displays the generated content in the console.

The full script, assembled, is shown below.

Check out the full repo: https://lnkd.in/ePakGK3Y
Follow me on LinkedIn: https://lnkd.in/d8mfnDcd

#generativeai #gemini
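For convenience, here are the six lines from the walkthrough assembled into one script; the API key is a placeholder, exactly as in the original:

    import google.generativeai as genai

    # Configure access with your AI Studio API key.
    genai.configure(api_key="Your Api Key")

    # Instantiate the gemini-pro text-generation model.
    model = genai.GenerativeModel(model_name="gemini-pro")

    # Define the prompt and generate a response.
    prompt_parts = ["who is quaid-e-azam?"]
    response = model.generate_content(prompt_parts)

    print(response.text)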
-
Probabilistic Machine Learning for Finance and Investing: A Primer to Generative AI with Python

There are several reasons why probabilistic machine learning represents the next-generation ML framework and technology for finance and investing. This generative ensemble learns continually from small and noisy financial datasets while seamlessly enabling probabilistic inference, retrodiction, prediction, and counterfactual reasoning. Probabilistic ML also lets you systematically encode personal, empirical, and institutional knowledge into ML models.

Whether they're based on academic theories or ML strategies, all financial models are subject to modeling errors that can be mitigated but not eliminated. Probabilistic ML systems treat the uncertainties and errors of financial and investing systems as features, not bugs. And they quantify the uncertainty generated from inexact inputs and outputs as probability distributions, not point estimates. This makes for realistic financial inferences and predictions that are useful for decision-making and risk management. Unlike conventional AI, these systems are capable of warning us when their inferences and predictions are no longer useful in the current market environment.

By moving away from flawed statistical methodologies and a restrictive conventional view of probability as a limiting frequency, you'll move toward an intuitive view of probability as logic within an axiomatic statistical framework that comprehensively and successfully quantifies uncertainty. This book shows you how. https://lnkd.in/dznnap9c
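To make the "distributions, not point estimates" idea concrete, here is a small illustrative sketch (not from the book), assuming daily returns drawn from a normal model with known observation noise; all numbers are made up for illustration:

    import numpy as np
    from math import erf, sqrt

    rng = np.random.default_rng(0)
    returns = rng.normal(0.001, 0.02, size=30)  # small, noisy sample of daily returns

    # Conventional point estimate of the mean return:
    point_estimate = returns.mean()

    # Bayesian posterior for the mean return (conjugate normal-normal update,
    # prior N(0, 0.01^2), assumed known observation noise sigma = 0.02):
    prior_mu, prior_sd, sigma = 0.0, 0.01, 0.02
    n = len(returns)
    post_var = 1.0 / (1.0 / prior_sd**2 + n / sigma**2)
    post_mu = post_var * (prior_mu / prior_sd**2 + returns.sum() / sigma**2)

    # The posterior is a full distribution, so it directly answers
    # risk-management questions a point estimate cannot:
    p_positive = 0.5 * (1.0 + erf(post_mu / sqrt(2.0 * post_var)))

    print(f"point estimate of mean return: {point_estimate:.5f}")
    print(f"posterior: N({post_mu:.5f}, sd={sqrt(post_var):.5f})")
    print(f"P(mean return > 0) = {p_positive:.3f}")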
-
This makes it even easier to connect AI/ML with interactive video and voice 🚀
Cerebrium + Daily: Simplifying deployments for your AI-powered voice and video apps
daily.co
-
New Post: Guide to using TensorFlow in Rust - https://lnkd.in/gCjKSxNy

TensorFlow, a powerful open source machine learning framework developed by the Google Brain team, has become a cornerstone in artificial intelligence. While traditionally associated with languages like Python, the advent of Rust, a systems programming language valued for its performance and safety, has opened new avenues for TensorFlow enthusiasts. In this guide, we will explore the fusion of TensorFlow and Rust, delving into how we can integrate these two technologies to harness the strengths of both.

Setting up our TensorFlow boilerplate

All the code discussed in this article is available, and ready to run, in this GitHub repository. The boilerplate for TensorFlow is simple; add the following dependency in the Cargo.toml file:

    tensorflow = "0.21.0"

In case you want to use the GPU, just enable the tensorflow_gpu feature in your Cargo.toml:

    tensorflow = { version = "0.21.0", features = ["tensorflow_gpu"] }

This is the only dependency we will need for the examples in the following sections. Just to verify that everything works, check the following program (you will find it in the directory tf-example1 in the repository):

    extern crate tensorflow;

    use tensorflow::Tensor;

    fn main() {
        let mut x = Tensor::new(&[2]);
        x[0] = 3.0f32;
        x[1] = 2.0f32;
        println!("{:?}", x);
    }

The program is simple but useful to check that everything is in place. Let's take a deeper look at it. First we declare an external crate named tensorflow, indicating that the program will use the TensorFlow crate. We then import the Tensor type from the TensorFlow crate. A tensor in TensorFlow represents a multidimensional array and is a fundamental data structure for computations. For a general introduction to TensorFlow concepts, you can refer to the official documentation.

The main function creates a new mutable tensor x as a one-dimensional vector with two elements. In TensorFlow, the shape of a tensor specifies the number of elements in each dimension: if you specify, for example, &[2, 3], the tensor will (unsurprisingly) have the shape of a 2×3 matrix. Lastly, we assign values to the elements of the tensor x; in this case, we set the first element to 3.0 and the second element to 2.0. Finally, the program prints the tensor x using the println! macro; {:?} is a formatting specifier to print the tensor in a debug format.

Project overview and understanding the XOR function

Training a neural network to learn the XOR (exclusive OR) function is a classic example that highlights the capability of neural networks to learn complex relationships in data. XOR is a binary operation that outputs true (1) only when the number of true inputs is odd. No matter how simple it is conceptually, the XOR example will help show us all the necessary steps to design, train, and use a model. Learning the XOR truth table justifies the use of a hidden layer in the neural network; indeed, XOR is not linearly separable, so a network without a hidden layer cannot learn it.
Guide to using TensorFlow in Rust
shipwr3ck.com
-
Generative AI applications might seem easy to use, but under the hood lies a complex ecosystem. If you have a basic understanding of Python, this FREE workshop is your chance to learn directly from industry experts, not just a trainer.

✅ 𝗥𝗦𝗩𝗣 Here - https://brij.guru/learn

Why You Can't Afford to Miss This: 𝟭𝟬𝟬% 𝗛𝗮𝗻𝗱𝘀-𝗢𝗻 & 𝗘𝗻𝗴𝗮𝗴𝗶𝗻𝗴. No dry lectures, just practical learning through a 𝗝𝘂𝗽𝘆𝘁𝗲𝗿 𝗡𝗼𝘁𝗲𝗯𝗼𝗼𝗸 environment (make sure to install it beforehand!).

Here's a breakdown of the key players:

𝟭. 𝗨𝘀𝗲𝗿 𝗜𝗻𝘁𝗲𝗿𝗳𝗮𝗰𝗲 (𝗙𝗿𝗼𝗻𝘁𝗲𝗻𝗱): This is where you interact with the AI, like through a chatbot or a web/app platform. It provides a seamless experience, but the magic happens behind the scenes.

𝟮. 𝗢𝗿𝗰𝗵𝗲𝘀𝘁𝗿𝗮𝘁𝗶𝗼𝗻 𝗟𝗮𝘆𝗲𝗿: This critical component acts like a conductor, ensuring user requests are routed correctly and efficiently to the right AI services.

𝟯. 𝗕𝗮𝗰𝗸𝗲𝗻𝗱: Here's where the data lives! Databases and caching systems like Redis and SQLite store and retrieve information quickly, keeping the AI responsive to your needs.

𝟰. 𝗔𝗣𝗜𝘀 & 𝗛𝗼𝘀𝘁𝗶𝗻𝗴: This layer connects developers to the AI models they need. Open-source and proprietary APIs allow developers to integrate AI capabilities into their applications.

𝟱. 𝗗𝗲𝘃𝗲𝗹𝗼𝗽𝗲𝗿 𝗧𝗼𝗼𝗹𝘀: A toolbox is essential! These tools help create prompts, embed AI models, perform validations, and even enhance functionality and reliability through developer plugins.

𝟲. 𝗜𝗻𝗳𝗿𝗮𝘀𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲: The invisible backbone! This includes the hardware and cloud services (like powerful GPUs) that provide the computing muscle needed to run complex AI models.

𝗧𝗵𝗲 𝘁𝗮𝗸𝗲𝗮𝘄𝗮𝘆? While the user experience is designed for ease, the backend is a symphony of services and infrastructure working together. Each component plays a vital role in delivering the seamless AI experience we've come to expect (a toy sketch of how these layers fit together follows below).

Have I overlooked anything? Please share your thoughts; your insights are priceless to me.

📸 Pic by @tingyi-le!
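Here is a toy sketch (not from the workshop) of how layers 1-4 above might fit together, with a plain dict standing in for the Redis cache and a stub standing in for a hosted model API; every name here is hypothetical:

    # 3. Backend: a dict standing in for a Redis/SQLite cache.
    cache: dict[str, str] = {}

    # 4. APIs & Hosting: stub for a call to a hosted model API.
    def call_model_api(prompt: str) -> str:
        return f"(model answer for: {prompt!r})"

    # 2. Orchestration Layer: route the request, consulting the cache first.
    def orchestrate(user_prompt: str) -> str:
        if user_prompt in cache:
            return cache[user_prompt]          # cache hit: skip the model call
        answer = call_model_api(user_prompt)   # cache miss: call the model
        cache[user_prompt] = answer
        return answer

    # 1. User Interface: a chat frontend would call this entry point.
    if __name__ == "__main__":
        print(orchestrate("Explain GPUs in one line"))
        print(orchestrate("Explain GPUs in one line"))  # served from the cache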
-
Regression: Enhanced Explanations with Calibration (arXiv:2308.16245v1, cs.LG)

Artificial Intelligence (AI) plays a crucial role in modern decision support systems (DSSs). However, the lack of transparency in the best-performing AI models used in DSSs is a significant challenge. Explainable Artificial Intelligence (XAI) addresses this challenge by aiming to develop AI systems that can explain their reasoning to human users. In XAI, local explanations are used to provide information about the factors that contribute to individual predictions. One limitation of existing local explanation methods is their inability to quantify the uncertainty associated with the importance of each factor. This paper introduces an extension of a feature importance explanation method called Calibrated Explanations (CE). Initially designed for classification, CE now also supports standard regression and probabilistic regression, which involves determining the probability that the target value exceeds a certain threshold. The extension for regression retains the benefits of CE, such as confidence intervals for prediction calibration, uncertainty quantification of feature importance, and the ability to provide both factual and counterfactual explanations. CE for standard regression offers fast, reliable, stable, and robust explanations. CE for probabilistic regression introduces a novel approach to generating probabilistic explanations from any ordinary regression model, with the flexibility to dynamically select thresholds. The performance of CE for probabilistic regression in terms of stability and speed is comparable to that of LIME. The method is model-agnostic and employs easily understandable conditional rules. A Python implementation of CE is freely available on GitHub and can be easily installed using pip, ensuring replicability of the results presented in this paper.

https://lnkd.in/dFspyWUM
Regression: Enhanced Explanations with Calibration. (arXiv:2308.16245v1 [cs.LG])
https://instadatahelpainews.com