Calda AI

Research

AI for heavy industry.

About us

We are on a mission to reduce the environmental impact of heavy industry through intelligent use of data. At our core, we do R&D at the interface of physical modelling, artificial intelligence, and statistics: our core science team of world-leading researchers has decades of experience solving the hardest problems in the physical sciences. Calda AI draws on this unique talent to deploy breakthrough machine learning technology for process optimization, control, and decision making. We have world-leading expertise and technology in a number of key areas:

✷ Using AI to learn physical models from sparse, noisy data. In many situations you need a physics model to make predictions about your process, but building one from scratch is infeasible: the system may simply be too complex, or you may lack the necessary data streams. At Calda we leverage AI to learn the physics of your processes from the data you have available, allowing you to make sound predictions even for the most complex systems.

✷ Massive speed-ups of physical simulations with deep-learning emulators. Often you have simulations for making predictions about your process (e.g., finite-element or hydrodynamic simulations), but they are too slow to support decisions in real time. We leverage AI to make your simulations fast enough for real-time decision making and process optimization.

✷ Extracting tiny signals from hard-to-reach places. Detecting problems early, when signals are tiny, lets you be proactive rather than reactive, and is key to extracting value. We are at the forefront of developing sophisticated data-representation techniques for extracting the tiniest signals from complex, noisy datasets.

✷ Optimal decision making in the face of complex uncertainties. We have leading techniques for making optimal decisions when faced with complex uncertainties.
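The emulator idea can be sketched in miniature. Everything below is a stand-in: a toy `slow_simulation` function plays the role of an expensive physics code, and a simple polynomial regression stands in for a deep-learning emulator (a real deployment would train a neural network on many simulator runs):

```python
import numpy as np

# Stand-in for a slow physics simulation (in practice: a finite-element or
# hydrodynamic code taking minutes to hours per run).
def slow_simulation(params):
    x, y = params
    return np.sin(3 * x) * np.exp(-y) + 0.5 * x * y

rng = np.random.default_rng(0)

# Run the expensive simulator offline over a design of experiments...
train_params = rng.uniform(0, 1, size=(500, 2))
train_out = np.array([slow_simulation(p) for p in train_params])

# ...then fit a fast surrogate to its outputs. A tensor-product polynomial
# regression keeps this sketch dependency-free.
def features(p):
    x, y = p[..., 0], p[..., 1]
    return np.stack([x**i * y**j for i in range(5) for j in range(5)], axis=-1)

coef, *_ = np.linalg.lstsq(features(train_params), train_out, rcond=None)

def emulator(p):
    # Evaluating the surrogate costs microseconds, not minutes.
    return features(p) @ coef

# Check the emulator against held-out simulator runs.
test_params = rng.uniform(0, 1, size=(200, 2))
truth = np.array([slow_simulation(p) for p in test_params])
rmse = np.sqrt(np.mean((emulator(test_params) - truth) ** 2))
```

The same offline-train / online-evaluate pattern carries over when the surrogate is a deep network and each simulator run takes hours.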

Industry
Research
Company size
2–10 employees
Headquarters
Stockholm
Type
Privately held company

Locations

Employees at Calda AI

Updates

  • Calda AI

    192 followers

    Large time-series models (LTMs) are to time-series data what LLMs are to language. LTMs are on the cusp of delivering state-of-the-art forecasting capability across a wide range of domains, changing the way we analyse time-series data forever. Calda AI researchers Justin Alsing and Benjamin Wandelt (with collaborators from Johns Hopkins, University of Amsterdam and Capital One) made an important step forward in the pursuit of LTMs last week, demonstrating favourable performance scaling with model size, data, and compute. Watch this space for breakthrough foundation models for time-series forecasting!

    Justin Alsing

    Founder at Calda AI | Physicist | Machine Learning Researcher

    Large time-series models (LTMs) enjoy power-law scaling behaviour similar to LLMs. We just put out a paper (https://lnkd.in/givY528D) establishing power-law-like scaling laws for large time-series models as a function of data, compute, and model size. Similar scaling laws for LLMs (from the landmark Kaplan et al. paper, https://lnkd.in/g9KHYN9u) have provided key guidance in allocating enormous resources for predictable, and eventually breakthrough, performance gains. The demonstration of similarly favourable scaling behaviour for large time-series models provides both motivation and a guide in the pursuit of foundation models for time-series forecasting. Foundation models for time series are coming (with enough data and compute). Thanks to Thomas Edwards, James Alvey, Benjamin Wandelt and Nam Nguyen for the hard work!

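The kind of scaling-law fit described above can be illustrated with synthetic data; the exponent and constants below are made-up placeholders, not results from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic losses following L(N) = a * N^(-alpha). The constants are
# illustrative placeholders only.
model_sizes = np.logspace(5, 9, num=20)      # parameter counts
true_alpha, a = 0.076, 5.0
loss = a * model_sizes ** -true_alpha * np.exp(rng.normal(0.0, 0.01, size=20))

# A power law is a straight line in log-log space:
#   log L = log a - alpha * log N,
# so an ordinary least-squares fit of log-loss against log-size
# recovers the scaling exponent.
slope, intercept = np.polyfit(np.log(model_sizes), np.log(loss), deg=1)
alpha_hat = -slope
```

Fits like this are what let practitioners extrapolate how much extra data and compute a target loss will cost before committing resources.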
  • Calda AI

    Calda AI scientists are pushing the limits of what’s possible with Graph Neural Networks (GNNs). Graph Neural Networks are one of the pillars of modern physics x AI modelling, which is steadily revolutionising industrial process optimisation.

    At their core, GNNs rely on aggregating lots of little bits of information from across your system or process. But how do you aggregate all of your information in an optimal way, to get the most out of your data?

    In a recent paper, Calda scientists Justin Alsing and Benjamin Wandelt (together with collaborator Lucas Makinen from Imperial College London) developed “Fishnets” — information-theoretically optimal aggregation for GNNs. By optimizing GNN aggregation we are able to build more accurate GNNs from less data and smaller architectures, while being significantly more robust when your data in deployment do not follow the same distribution as your training data.

    Better aggregation -> more accurate predictions from less data and smaller architectures -> better process optimisation.

    Check out the paper here: https://lnkd.in/gPkrkjx2

    Fishnets: Information-Optimal, Scalable Aggregation for Sets and Graphs

    arxiv.org
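The intuition behind information-optimal aggregation can be shown with a deliberately simple stand-in for the full Fishnets scheme: set members with very different noise levels, where a plain mean wastes information but a Fisher-weighted combination does not:

```python
import numpy as np

rng = np.random.default_rng(2)
mu = 3.0  # common signal that every set member measures

def aggregate(rng):
    # Each set member observes mu with its own (known) noise level.
    sigmas = rng.uniform(0.1, 5.0, size=200)
    data = mu + sigmas * rng.normal(size=200)
    # Naive aggregation: a plain mean over members, as in a vanilla
    # sum/mean GNN readout.
    naive = data.mean()
    # Information-weighted aggregation in the spirit of Fishnets: each
    # member contributes its Fisher information (1/sigma^2 for Gaussian
    # noise); contributions are summed and the estimate is their ratio.
    fisher = 1.0 / sigmas**2
    weighted = np.sum(fisher * data) / np.sum(fisher)
    return naive - mu, weighted - mu

# Compare the two aggregation rules over many random sets.
errors = np.array([aggregate(rng) for _ in range(500)])
rmse_naive, rmse_weighted = np.sqrt(np.mean(errors**2, axis=0))
```

In the actual paper the per-member scores and Fisher informations are learned by the network rather than known in closed form; this toy case just shows why weighting by information beats equal weighting.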

  • Calda AI

    Making complex decisions in the face of limited data and uncertainty is the cornerstone of industrial optimisation, from process optimisation through strategic planning and decision-making. Calda AI scientists recently published a key breakthrough in leveraging machine learning to solve complex decision-making tasks efficiently and optimally.

    With our new framework, you can get optimal decision recommendations in the most complex situations, covering many cases that were previously impossible with existing methods. In cases where existing approaches were possible (in theory), our approach is up to 1000x faster than the previous state of the art.

    Get in touch if you are interested in partnering with us at Calda AI to deploy these new capabilities in your business, to crack your hardest decision-making and planning problems. Quick summary and link to the paper below:

    --------------------------------------

    Making optimal decisions in the face of uncertainty is the end game of reasoning from data, and arguably the most important basic task we undertake as humans. Bayesian decision theory provides the unique mathematical framework for computing optimal decisions from limited data, assuming some reasonable axioms about what characterises a "rational decision maker". In this framework, you properly account for all of your sources of uncertainty and find the action that optimises your expected gains while mitigating risks.

    However, the calculations involved in computing optimal decisions are typically prohibitively expensive; approximations lead to sub-optimal decisions, and poorer business outcomes. We introduce a new, machine-learning-based system for calculating optimal Bayesian decisions, which bypasses the expensive inference, integration, and optimisation steps of the traditional approach. Our approach relies only on the ability to simulate your problem, making it very generally applicable (for both tractable and intractable likelihoods).

    It is also incredibly efficient: we need a factor of 100–1000 fewer simulations than standard methods typically require, and it works on problems that were previously impossible due to their complexity (i.e., where likelihoods are intractable). This opens up exciting new capabilities in Bayesian decision-making, particularly in the previously challenging regime where likelihoods are intractable and simulation expensive. https://lnkd.in/d5raFatu

    Optimal simulation-based Bayesian decisions

    arxiv.org
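A minimal sketch of the simulation-based idea, on a toy problem of our own choosing (not one from the paper): simulate (state, data, action, utility) tuples, regress utility directly on data and action, and pick the action that maximises the predicted utility:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy decision problem (illustrative only): unknown state theta ~ N(0, 1),
# observation x = theta + N(0, 0.5^2), and taking action a yields
# utility u = -(a - theta)^2.
n = 50_000
theta = rng.normal(size=n)
x = theta + 0.5 * rng.normal(size=n)
a = rng.uniform(-3.0, 3.0, size=n)        # randomly proposed actions
u = -(a - theta) ** 2

# Amortised decision-making: regress utility on (data, action) pairs from
# simulations, bypassing explicit inference and integration. A quadratic
# basis happens to be sufficient for this toy problem.
def phi(x, a):
    return np.stack([np.ones_like(x), x, a, x * a, x**2, a**2], axis=-1)

w, *_ = np.linalg.lstsq(phi(x, a), u, rcond=None)

def decide(x_obs, grid=np.linspace(-3.0, 3.0, 601)):
    # At decision time: evaluate predicted expected utility over a grid of
    # candidate actions and pick the best one.
    return grid[np.argmax(phi(np.full_like(grid, x_obs), grid) @ w)]

# For this Gaussian problem the exact Bayes-optimal action is the posterior
# mean, 0.8 * x_obs, which the learned decision rule should reproduce.
```

Because only the simulator is required, the same recipe applies when the likelihood is intractable, which is precisely the regime the post describes.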

  • Calda AI

    Leveraging AI to learn the physics governing a process, directly from the data, is one of the key ways we at Calda AI are building groundbreaking new capabilities in process optimisation and control. Physics-based AI systems allow us to model and optimise processes that were previously too complex, or whose measurements were too indirect or messy. By combining the principles of physical modelling and AI, we can build process models that are both more accurate and more robust, from less (and lower-quality) data than before.

    In a recent landmark paper, Calda AI scientist Niall Jeffrey and collaborators used Graph Neural Networks and symbolic regression to re-learn the exact formalism of Newtonian celestial mechanics from solar-system orbital data. Breakthroughs like these are changing the face of what's possible in industrial process optimisation!

    Get in touch to find out how Calda AI can help transform your process optimisation capabilities. https://lnkd.in/dCiwYzhF

    Rediscovering orbital mechanics with machine learning

    iopscience.iop.org
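The flavour of the result can be shown with a drastically simplified stand-in for the paper's GNN + symbolic-regression pipeline: recover the exponent of an inverse-square force law from simulated circular-orbit data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Generate circular-orbit data under a central force F(r) ∝ r^(-n), then
# recover the force-law exponent from the data alone. (Illustrative
# miniature, not the paper's actual pipeline.)
GM, true_n = 1.0, 2.0                      # n = 2 is Newtonian gravity
r = rng.uniform(0.5, 5.0, size=100)        # orbital radii
# Circular-orbit balance: v^2 / r = GM * r^(-n)  =>  v^2 = GM * r^(1 - n)
v2 = GM * r ** (1.0 - true_n) * np.exp(rng.normal(0.0, 0.02, size=100))

# Symbolic regression reduced to its simplest special case: fit a power
# law in log-log space and read the force-law exponent off the slope.
slope, _ = np.polyfit(np.log(r), np.log(v2), deg=1)
n_hat = 1.0 - slope
```

The paper learns pairwise interactions with a GNN and then searches over symbolic expressions; this sketch keeps only the final step of reading a physical law out of fitted data.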
