Zindi's top takeaways from the AI Hardware and Edge AI Summit
Image: DALL·E


In June, Zindi co-founder Megan Yates attended the AI Hardware and Edge AI Summit Europe. Below, we share Megan’s top takeaways and insights from the conference to help you on your own AI journey:

  • Efficiency and Scalability: Across many talks, there was an emphasis on achieving efficiency in AI through optimised hardware and data centre co-design, and a push to move away from brute-force scaling toward more effective use of GPU clusters and specialised hardware.
  • Edge AI and On-device Computing: We are seeing the rise of offline inference and edge AI, with practical applications such as AI on laptops. There is no doubt LLMs will be integrated everywhere in our world, from browsers, phones, TVs, and appliances, to ever-simpler devices as the tech is perfected.

  • Cost and Sustainability Concerns: Three hard-hitting quotes from the conference give some perspective on the scale of the problem:

    “80-90% of the AI inference cost lies in ongoing tuning, validation, and inference processing after a model has been developed” (Nvidia analysis)
    “By 2030 AI data centres could consume as much as 25% of all American electricity, up from 4% or less today” (CEO, Arm)
    “Incorrect AI model selection, improper RAG tuning, and inadequate AI pipeline optimisation can inflate AI inference costs by as much as 1500%” (a leading UK bank)

  • Open Source, Collaboration, and Standardisation: There was a strong focus on open AI systems, including open models like Llama and frameworks like PyTorch, alongside initiatives for standardising hardware and system management. Companies like Meta and Google are pushing for open and collaborative AI development.
  • Practical Challenges and Industry Needs: We need to address the gap between proof of concept and production. The need for cost reduction, sustainability, and IP ownership is critical for corporate adoption. Banks and large corporations are cautious but see potential in tailored AI solutions.
  • Better AI Solutions Through Collaboration: Bring together data scientists and domain experts. While data scientists can build and deploy models, domain experts hold undocumented expertise, think in real-world cases, and can work on immediate real-time fixes.
  • Guardrails as a Gateway: Large companies using LLMs in client-facing use cases are employing guardrails as a gateway for integration with the various top GPT services available. This serves as a middleware layer that manages, routes, and secures interactions between client applications and multiple GPT models.
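To make the guardrail-gateway idea concrete, here is a minimal Python sketch of such a middleware layer. Everything in it is illustrative: the `fast_model` and `careful_model` backends are stand-ins for real GPT services, and the blocklist and PII pattern are deliberately naive placeholders for production policy checks.

```python
import re
from typing import Callable, Dict

# Hypothetical model backends; in practice each would call a different LLM API.
def fast_model(prompt: str) -> str:
    return f"[fast] echo: {prompt}"

def careful_model(prompt: str) -> str:
    return f"[careful] echo: {prompt}"

BLOCKLIST = {"password", "ssn"}           # toy input policy
PII_PATTERN = re.compile(r"\b\d{16}\b")   # naive card-number check

class GuardrailGateway:
    """Middleware that validates, routes, and sanitises LLM interactions."""

    def __init__(self, routes: Dict[str, Callable[[str], str]]):
        self.routes = routes

    def handle(self, prompt: str, risk: str = "low") -> str:
        # 1. Input guardrail: reject prompts containing blocked terms.
        lowered = prompt.lower()
        if any(term in lowered for term in BLOCKLIST):
            return "Request blocked by guardrail policy."
        # 2. Routing: send high-risk queries to the more conservative model.
        model = self.routes["careful" if risk == "high" else "fast"]
        # 3. Output guardrail: redact anything that looks like a card number.
        return PII_PATTERN.sub("[REDACTED]", model(prompt))

gateway = GuardrailGateway({"fast": fast_model, "careful": careful_model})
print(gateway.handle("What is edge AI?"))        # routed to the fast model
print(gateway.handle("My password is hunter2"))  # blocked at the input stage
```

The design point is that the client application never talks to a model directly: policy checks, model selection, and output sanitisation all live in one place, so swapping GPT providers or tightening rules requires no client changes.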

These points reflect the key discussions and insights gathered from the conference. Chat to Zindi to learn more about our products and how our community of skilled AI professionals can help your business.
