Magnifico

Software Development

Cambridge, Massachusetts 205 followers

Implementing AI for enterprise companies.

About us

Magnifico is a software-as-a-service platform that uses artificial intelligence to improve products. We use large language models (LLMs) to capture product requirements.

Website
https://magnifico.ai/
Industry
Software Development
Company size
1 employee
Headquarters
Cambridge, Massachusetts
Type
Privately Held
Founded
2023
Specialties
AI, SaaS, Software, Product Design, and Product Development

Updates

  • Magnifico

    Excited to learn more about NeMo and possibly implement it for customers!

  • Magnifico

    This is amazing!

    View profile for Anthony Robbins

    Watch the keynote from our most recent GTC to see NVIDIA CEO Jensen Huang share the AI technologies affecting every industry—and our everyday lives.

    BREAKING NEWS from #awsreinvent2023

    Amazon Web Services (AWS) and NVIDIA Announce Strategic Collaboration to Offer New Supercomputing Infrastructure, Software and Services for Generative AI (November 28, 2023)

    -> AWS to offer first cloud AI supercomputer with NVIDIA Grace Hopper Superchip and AWS UltraCluster scalability
    -> NVIDIA DGX Cloud—first to feature NVIDIA GH200 NVL32—coming to AWS
    -> Companies partner on Project Ceiba—the world’s fastest GPU-powered AI supercomputer and newest NVIDIA DGX Cloud supercomputer for NVIDIA AI R&D and custom model development
    -> New Amazon EC2 instances powered by NVIDIA GH200, H200, L40S and L4 GPUs supercharge generative AI, HPC, design and simulation workloads
    -> NVIDIA software on AWS—NeMo LLM framework, NeMo Retriever and BioNeMo—to boost generative AI development for custom models, semantic retrieval and drug discovery

    “Amazon Web Services (AWS) and NVIDIA have collaborated for more than 13 years, beginning with the world’s first #GPU cloud instance. Today, we offer the widest range of NVIDIA #GPU solutions for workloads including #graphics, #gaming, #highperformancecomputing, #machinelearning, and now, generative AI,” said Adam Selipsky, #CEO at Amazon Web Services (AWS). “We continue to innovate with NVIDIA to make AWS the best place to run GPUs, combining next-gen NVIDIA Grace Hopper Superchips with AWS’s EFA powerful networking, EC2 UltraClusters’ hyper-scale clustering, and Nitro’s advanced virtualization capabilities.”

    “#generativeai is transforming #cloud workloads and putting #acceleratedcomputing at the foundation of diverse content generation,” said Jensen Huang, #founder and #CEO of NVIDIA. “Driven by a common mission to deliver cost-effective state-of-the-art #generativeai to every customer, NVIDIA and AWS are collaborating across the entire computing stack, spanning AI infrastructure, acceleration libraries, foundation models, to generative AI services.”

    Bill Vass | Rich Geraffo | David Appel | Kim Majerus | Ray Falcione | Jim Young | Rebecca Wetherly | Rima Olinger | Heidi Buck | Mary Alexander | Ash Thankey | Amy Belcher | Kyle Johnson | Phil Goldstein | Iram A. Ali | Matthew Briggs | David Rubal, CISSP, NREMT | Renzo Rodriguez | Christian Hoff | Debra Goldfarb | Robin Goad | Dominic Delmolino | Brian Pickering

    United States Department of Defense | Defense Information Systems Agency | Defense Advanced Research Projects Agency (DARPA) | Lockheed Martin | Raytheon | Northrop Grumman | Huntington Ingalls Industries, Inc. | MITRE

    AWS and NVIDIA Announce Strategic Collaboration to Offer New Supercomputing Infrastructure, Software and Services for Generative AI

    nvidianews.nvidia.com

  • Magnifico reposted this

    View profile for Jim Fan

    NVIDIA Senior Research Manager & Lead of Embodied AI (GEAR Group). Stanford Ph.D. Building Humanoid robot and gaming foundation models. OpenAI's first intern. Sharing insights on the bleeding edge of AI.

    The AI pin from hu.ma.ne is out, the first LLM-native consumer hardware device. I think it's a great stride towards "ambient intelligence", where AI fades into the background and emerges naturally when you need it. I can imagine having the GPT app store streamed to the device, switching agents depending on the multimodal context around you at the moment. The note-taking feature is awesome: I want to remember important conversations and contacts at conferences without explicitly typing notes on my phone. The privacy concerns are huge though, despite the safeguard mechanism. Google Glass was dead partially because of the social stigma. How will Humane AI pin perform in the mass market? I'm curious what you all think!

  • Magnifico

    👏

    View profile for Gabriele Venturi

    Building PandasAI, the library to extract value from your data

    OpenAI is not the death of startups, it's a wakeup call!

    In the past days, many fellow entrepreneurs have reached out to me asking if I'm concerned about OpenAI's new offerings. There's a narrative spreading that thousands of startups will be killed by capabilities like custom chatbots and text generation revealed at DevDay.

    Like them, I've been following the OpenAI announcements closely. The launches of tools like custom GPTs have no doubt led some to predict the impending downfall of companies built on conversational AI. However, I believe this view misses the mark. The startups destined for disruption by OpenAI's offerings were likely already on borrowed time. Building a thin wrapper on top of someone else's technology like GPT-4 was never going to be a sustainable business in the long run. The value lies in building differentiated products with unique data and capabilities.

    There are a few key points we should keep in mind:

    ✅ Generative AI is more than just conversational interfaces. For the first time, we have technology mimicking human reasoning and creativity. The possibilities extend far beyond chatbots.
    ✅ Conversational interfaces are not necessarily the future. OpenAI's own usage data for ChatGPT shows declines after initial hype. A clickable UI can often be more efficient than typing sentences.
    ✅ There's a massive difference between a basic integration of LLMs and building an actual product. True startups solve real problems and meet needs. An intelligent interface is just one piece.

    For many companies, this moment is an opportunity, not a death sentence. The time has come to stop relying on third-party technology and double down on unique data, industry expertise, and product-market fit. The fundamentals haven't changed. Building a startup today still means making something people want. OpenAI expands what's possible, but ultimately we still need to identify real problems and develop complete solutions.

    Rather than the end, I see this as a wakeup call: a nudge to build differentiated products on owned technology, not thin layers on leased foundations. The startups that survive will be those that embrace OpenAI as an enabler, not a crutch. An amazing new technology to create value, not a shortcut.

    This is an exciting time full of possibility. OpenAI has raised the bar, but also opened up many new avenues. For startups willing to learn and adapt, the opportunities are endless. The only true death will come to those who fail to evolve.

    #OpenAI #startup #GenAI

  • Magnifico reposted this

    View profile for Philipp Schmid

    Technical Lead & LLMs at Hugging Face 🤗 | AWS ML HERO 🦸🏻♂️

    Can we pre-train LLMs with retrieval augmentation? 🤔 RETRO was research by Google DeepMind that included retrieval in the pre-training process. Now NVIDIA continues this line of work by scaling RETRO to 48B: they continued pretraining a 43B GPT model on an additional 100 billion tokens with retrieval augmentation, retrieving from 1.2 trillion tokens. 🤯

    𝗜𝗺𝗽𝗹𝗲𝗺𝗲𝗻𝘁𝗮𝘁𝗶𝗼𝗻:
    1️⃣ Pretrain an LLM using next-token prediction on 1.1T tokens.
    2️⃣ Continue pretraining the LLM with retrieval augmentation (Retro-fitting) on an additional 100 billion tokens, retrieving from the whole pretraining dataset.
    3️⃣ Instruction-tune the model with retrieval augmentation, updating only the weights of the decoder.

    𝗜𝗻𝘀𝗶𝗴𝗵𝘁𝘀:
    📈 Continued pretraining of LLMs with retrieval yields better decoders for QA.
    🚀 Up to 10% improvement from retrieval-augmented pretraining.
    📚 Pretraining with retrieval improves the incorporation of context.
    🧮 The retrieval database had 19 billion chunks, with each chunk containing 64 tokens.
    🐌 Retrieval pretraining is slower and more complex due to the index and retrieval step.

    Check out the full paper: https://lnkd.in/d2R4zsQ5

    Remember that these are just my personal findings. Make sure always to conduct your own research and analysis. 🤗

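    As a rough illustration of step 2️⃣ above (retrieval-augmented continued pretraining), here is a minimal, heavily simplified Python sketch. It is not the RETRO architecture: neighbors are simply prepended to the context instead of being integrated through chunked cross-attention, and the models, corpus, and training chunks are placeholder assumptions rather than details from the paper.

    ```python
    # Toy sketch of retrieval-augmented continued pretraining (NOT the actual
    # RETRO architecture: neighbors are prepended to the context rather than fed
    # through chunked cross-attention). Models and data are illustrative only.
    import numpy as np
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from sentence_transformers import SentenceTransformer

    retrieval_corpus = [
        "Retrieval-augmented models look up related text before predicting tokens.",
        "HNSW indexes support fast approximate nearest-neighbor search.",
        "GPUs accelerate both training and inference of large language models.",
    ]
    training_chunks = [
        "Retrieval during pretraining lets a smaller decoder use external knowledge.",
        "Approximate nearest-neighbor search scales retrieval to trillions of tokens.",
    ]

    embedder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed retriever embedder
    corpus_emb = embedder.encode(retrieval_corpus, normalize_embeddings=True)

    tok = AutoTokenizer.from_pretrained("gpt2")           # tiny stand-in for a 43B GPT
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

    def retrieve(text: str, k: int = 1) -> list[str]:
        """Return the k nearest corpus chunks by cosine similarity."""
        q = embedder.encode([text], normalize_embeddings=True)
        scores = (corpus_emb @ q.T).ravel()
        return [retrieval_corpus[i] for i in np.argsort(-scores)[:k]]

    model.train()
    for chunk in training_chunks:
        prefix = " ".join(retrieve(chunk)) + "\n"
        prefix_ids = tok(prefix, add_special_tokens=False)["input_ids"]
        chunk_ids = tok(chunk, add_special_tokens=False)["input_ids"]
        input_ids = torch.tensor([prefix_ids + chunk_ids])
        # Compute the LM loss only on the original chunk, not the retrieved prefix.
        labels = torch.tensor([[-100] * len(prefix_ids) + chunk_ids])
        loss = model(input_ids=input_ids, labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    ```

    In the work described above, retrieval instead runs over a database of 19 billion 64-token chunks and the neighbors are integrated through RETRO's chunked cross-attention, but the training signal is the same next-token loss conditioned on retrieved context.
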
  • Magnifico reposted this

    View profile for Pau Labarta Bajo

    I build real-world ML products. And then help you do the same 🚀

    Wanna 𝗱𝗲𝗽𝗹𝗼𝘆 an 𝗼𝗽𝗲𝗻-𝘀𝗼𝘂𝗿𝗰𝗲 𝗟𝗟𝗠? 🚀 ↓
    ----
    Hi there! It's Pau 👋
    Every week I share free, hands-on content on production-grade ML to help you build real-world ML products.
    𝗙𝗼𝗹𝗹𝗼𝘄 𝗺𝗲 and 𝗰𝗹𝗶𝗰𝗸 𝗼𝗻 𝘁𝗵𝗲 🔔 so you don't miss what's coming next
    #machinelearning #mlops #realworldml #llm #llmops

  • Magnifico reposted this

    View profile for Joel Caruso

    Account Manager for Strategic Start Ups

    NVIDIA AI TensorRT-LLM just went live on GitHub! 🔥

    Let us know via the repo if there are any questions about the new toolchain: models you would like supported, features you're looking for, or bugs you run into. We'd also love to hear about your experience with the toolchain once you've been able to run your experiments.

    TL;DR: TensorRT-LLM is an open-source acceleration engine for LLMs, with support for multi-GPU and multi-node execution, in-flight batching, quantization, and mixed precision on NVIDIA #Ampere and #Hopper H100 #GPUs.

    Looking forward to hearing about your experience!

    TensorRT-LLM: https://lnkd.in/gYiP4N_S
    Triton Inference Server backend for TensorRT-LLM: https://lnkd.in/garWpi3g
    TensorRT-LLM documentation: https://lnkd.in/gTUr3cB7
    AMMO, the quantization toolkit for TensorRT-LLM: https://lnkd.in/gWbFYmTt

    GitHub - NVIDIA/TensorRT-LLM: TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.

    github.com
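
    The repo's Python API, per the description above, lets you define a model, build a TensorRT engine, and run inference. As a hedged sketch only: the snippet below uses the current high-level LLM API, which has evolved since this announcement, and the Hugging Face model ID and sampling settings are placeholders; check the linked documentation for the release you install.

    ```python
    # Hedged sketch of TensorRT-LLM's high-level Python API (the API has changed
    # across releases; see the quick-start docs for the version you install).
    # The model ID and sampling settings below are placeholders.
    from tensorrt_llm import LLM, SamplingParams

    def main() -> None:
        # Builds (or loads a cached) TensorRT engine for the model, then runs inference.
        llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")
        sampling = SamplingParams(temperature=0.8, top_p=0.95)

        prompts = ["Summarize what TensorRT-LLM does in one sentence."]
        for output in llm.generate(prompts, sampling):
            print(output.outputs[0].text)

    if __name__ == "__main__":
        main()
    ```

    Features called out in the post, such as in-flight batching and quantization, are configured when the engine is built or served rather than in this minimal snippet.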

  • Magnifico reposted this

    View profile for Tony Seale

    The Knowledge Graph Guy

    🔵 How Graphs Could Shape the Future of Vector Search 🔵

    With ongoing advancements in Large Language Models (LLMs) such as ChatGPT, vector-based search mechanisms are rapidly transitioning from auxiliary features to core functionality in many platforms. Vector search is now found not only in specialised stores like Pinecone and Weaviate but also in search platforms such as Elasticsearch and databases like MongoDB. Notably, both of these platforms use an algorithm called Hierarchical Navigable Small Worlds (HNSW) to deliver efficient vector search. HNSW is a graph-based algorithm; its power lies in its ability to transform continuous embedding vectors into a discrete, layered graph.

    🔵 Discrete and Continuous Semantics
    Traditionally, fuzzy matching strategies are often implemented in conjunction with discrete filters. In search, this is referred to as 'faceting' (think of searching for 'shiny black shoes' on eBay and then selecting a specific brand from a dropdown menu). This hybrid approach has proven effective and is being widely adopted for vector search as well. For example, one might restrict documents by geographical origin or timeframe and then use vector-based search to gauge sentiment only within that subset.

    🔵 A Graph-Based Revolution
    Traditional filtering is typically based on tabular (rows in a database) or tree-like (JSON documents) data formats. The landscape changes significantly when the data itself is structured as a graph. When employing HNSW in a graph-based setup, both continuous vectors and discrete facets become vertices in the same graph. This allows for more nuanced relationships and more efficient alignment. Furthermore, the upper layers within HNSW represent a form of compression. With your data in a graph, you can move beyond the classic HNSW node-degree compression algorithms to consider more semantic forms of compression that take domain-specific ontologies into account. This could prove to be very powerful.

    🔵 Key Takeaways for Organisations
    I posit that transitioning to graph-based data structures is the next logical step in the evolution of search and knowledge representation. Therefore, my advice to organisations looking to stay ahead in the data management and analytics game is to transition as much of their core data into a graph structure as quickly as possible.

    ⭕ HNSW: https://lnkd.in/eH7JqEyZ
    ⭕ Continuous and Discrete: https://lnkd.in/ex8HA_Nj
    ⭕ Embrace Complexity: https://lnkd.in/ejZikEGp
    ⭕ Semantic Router: https://lnkd.in/eucZUjrV

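    As a small, concrete illustration of the HNSW-plus-facets pattern described above, here is a hedged Python sketch using the hnswlib library (one common HNSW implementation, not necessarily what Elasticsearch or MongoDB use internally); the dimensions, metadata, and year-based filter are made-up examples.

    ```python
    # Hedged sketch: HNSW vector search combined with a discrete "facet" filter,
    # using hnswlib. Data, dimensions, and the year facet are illustrative only.
    import hnswlib
    import numpy as np

    dim, num_docs = 64, 1_000
    rng = np.random.default_rng(0)
    vectors = rng.random((num_docs, dim), dtype=np.float32)  # stand-in embeddings
    years = rng.integers(2015, 2024, size=num_docs)          # one discrete facet per doc

    # Build the layered HNSW graph over the continuous embedding vectors.
    index = hnswlib.Index(space="cosine", dim=dim)
    index.init_index(max_elements=num_docs, ef_construction=200, M=16)
    index.add_items(vectors, ids=np.arange(num_docs))
    index.set_ef(64)  # search-time breadth vs. speed trade-off

    query = rng.random((1, dim), dtype=np.float32)

    # Plain vector search: nearest neighbours in embedding space.
    labels, distances = index.knn_query(query, k=5)

    def recent_only(doc_id: int) -> bool:
        """Discrete facet: keep only documents from 2022 onwards."""
        return bool(years[doc_id] >= 2022)

    # Hybrid search: recent hnswlib releases accept a per-id filter callable,
    # restricting traversal to vertices that satisfy the discrete facet.
    filtered_labels, _ = index.knn_query(query, k=5, filter=recent_only)

    print("unfiltered:", labels[0])
    print("recent only:", filtered_labels[0])
    ```

    Here M bounds each vertex's degree and ef_construction controls build-time search breadth, which together trade recall against graph size, roughly the node-degree trade-off the post alludes to for HNSW's upper layers.
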
  • Magnifico

    There's a lot to be optimistic about with AI.

    View profile for Shahid Azim

    Co-Founder and CEO, C10 Labs | Serial Health Tech Entrepreneur

    Back from TED AI; here are some interesting snippets from an exciting couple of days. Broadly, though we are seeing hype cycles in some segments, there is a massive societal-scale shift underway that touches every profession and every sector. #c10labs C10 Labs also hosted its first west coast AI Salon, which was attended by some amazing minds! #ai4impact

    - With god-like powers comes a need for god-like wisdom.
    - Don't hate the tech players; change the rules of the game!
    - AI is not just a tool but a ladder for us.
    - English is the most common programming language now!
    - We are going to have an agent for everything, and personal agents for everyone.
    - "Having your bit flipped!!! (seeing AI perform!)": Reid Hoffman's quote on seeing ChatGPT perform for the first time in a private setting with Gates.
    - Do not panic.
    - Line-of-sight medical assistants for everyone, tutors for everyone.
    - Potential emergence of a world of possibilities and abundance.
    - Navigate better outcomes for humanity with AI.
    - Human ingenuity like never before with AI.
    - AI allows the gift of time for the patient-doctor relationship.
    - AI is a technology of abundance, and we should not approach it with a scarcity mindset.

    Ramesh Raskar Patricia Geli Muntazir Mehdi Ahmer Inam George K. Beth Porter #TEDAI

