Is Django the Best Backend Framework for AI Products? Hi, this is Ben 👋 The team asked me to explain, in under 30 seconds, why we used Django for the backend of our AI Agent challenge! I genuinely think it's the best choice; I've left the full article, with a comparison against Flask and FastAPI, in the comments. It's one of our most-read articles 😎 We work in bets, with strong beliefs weakly held, so if you have evidence that another framework is better, I'd love to hear from you in the comments or DMs. #ai #aiagent #codechallenge #backend #django #aichallenge
Cubode’s Post
More Relevant Posts
-
🌟 Amidst the buzz of shiny GenAI tools 🛠️ emerging every day, it's easy to overlook the elegance of the algorithms that power language models. This post and talk by Simon Willison offer a fascinating dive into the world of embeddings! From similarity distances to multi-modal semantic search with CLIP to answering questions with RAG, Simon's insights into the wider applications of embeddings are both illuminating 💡 and fundamental. I love that he emphasizes, again and again, the importance of simplicity and portability 🚀 when working with LLMs and data. SQLite all the way! #AI #DataScience #Learning #embeddings #word2vec #semantic_search #RAG #llms
-
Generative AI is everywhere nowadays, so why not take advantage of it to save time? I've found that the most time-consuming work happens at the shell level. For example, yesterday I needed to update 272 corrupted records out of millions. Using shell-level AI, I fixed the issue in 30 minutes instead of the roughly 6 hours a traditional software approach would have taken. You can categorize this as a repetitive task. I mean, we should develop a mindset to:
- Identify repetitive tasks
- Pass these tasks to shell-level AI
- Keep the cost at zero (open-weight models now rival GPT-4-class models)
Btw, I'm using the `deepseek-coder-v2:236b` model for this task; an alternative option is `claude-3-haiku`. #DevOps #AITrends #MLOps #ArtificialIntelligence #Python #JavaScript #ReactJS #NodeJS #AWS #ProjectManagement #ChatGPT #SoftwareDevelopment #AI #Azure #gcp #google
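The "identify repetitive task → hand it to a local model" workflow above can be sketched in a few lines. Everything here is hypothetical for illustration: the record shape, the corruption check, and the `ask_model` stub, which in a real setup would call a local model such as `deepseek-coder-v2` instead of returning a canned string.

```python
def is_corrupted(record):
    # Hypothetical corruption check: an empty value marks a bad record.
    return not record["value"]

def ask_model(prompt):
    # Stub standing in for a call to a locally hosted LLM.
    return "repaired"

def repair(records):
    """Send only the corrupted records to the model; return how many were fixed."""
    fixed = 0
    for rec in records:
        if is_corrupted(rec):
            rec["value"] = ask_model(f"Repair this record: {rec}")
            fixed += 1
    return fixed

records = [{"id": 1, "value": "ok"}, {"id": 2, "value": ""}]
print(repair(records))  # → 1
```

The point of the pattern is the filter step: out of millions of records, only the 272 broken ones ever reach the model, which is what keeps the job at minutes rather than hours.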
-
Director of AI & Venture Ecosystems, Office of the CTO @ Microsoft | Startup Mentor & Pre-Seed/Seed Angel | B2B SaaS & Agency Founder
Embeddings turn any piece of content into an array of numbers that represents its semantic meaning. They enable powerful techniques with AI and language models, such as finding related content, building semantic search engines, clustering similar items, and classifying content into categories. In this article, Simon Willison provides a comprehensive overview of embeddings and their applications. He covers the key building blocks, such as vector spaces, cosine similarity, and embedding models, and demonstrates how to apply them to related content, semantic search, clustering, and classification, illustrating each with examples from his own projects and tools. This is a very informative and insightful article for anyone who wants to learn more about how embeddings work and why they matter.
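The core operation behind everything Simon describes, comparing two embedding vectors with cosine similarity, fits in a few lines of plain Python. The 3-dimensional vectors below are toy stand-ins; real embedding models produce hundreds or thousands of dimensions, but the math is identical:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors.

    Returns 1.0 for identical directions, 0.0 for orthogonal
    vectors, and -1.0 for opposite directions.
    """
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings": related concepts point in similar directions.
cat = [0.9, 0.1, 0.0]
kitten = [0.8, 0.2, 0.1]
car = [0.0, 0.1, 0.9]

print(cosine_similarity(cat, kitten) > cosine_similarity(cat, car))  # True
```

"Finding related content" is then just computing this score between one item's embedding and every other item's, and keeping the highest scorers.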
Embeddings: What they are and why they matter
simonwillison.net
-
𝐋𝐚𝐧𝐠𝐂𝐡𝐚𝐢𝐧 𝐋𝐋𝐌 𝐋𝐢𝐛𝐫𝐚𝐫𝐲 𝐎𝐯𝐞𝐫𝐯𝐢𝐞𝐰

𝐖𝐡𝐚𝐭 𝐢𝐬 𝐋𝐚𝐧𝐠𝐂𝐡𝐚𝐢𝐧? A library for developing LLM-powered apps. LangChain gives structure and tools for connecting LLMs to knowledge sources and guiding the LLM's reasoning process. You can think of LangChain as the bridge between raw LLM power and real-world applications.

𝐖𝐡𝐲 𝐔𝐬𝐞 𝐋𝐚𝐧𝐠𝐂𝐡𝐚𝐢𝐧?
𝐓𝐨 𝐦𝐚𝐤𝐞 𝐋𝐋𝐌𝐬 𝐦𝐨𝐫𝐞 𝐩𝐨𝐰𝐞𝐫𝐟𝐮𝐥 𝐚𝐧𝐝 𝐯𝐞𝐫𝐬𝐚𝐭𝐢𝐥𝐞 - LangChain lets you connect LLMs to external knowledge sources (databases, documents, websites), unlocking applications beyond basic LLM responses.
𝐓𝐨 𝐬𝐚𝐯𝐞 𝐝𝐞𝐯𝐞𝐥𝐨𝐩𝐦𝐞𝐧𝐭 𝐭𝐢𝐦𝐞 - LangChain offers pre-built components, integrations, and templates that reduce the need for low-level coding, allowing you to focus on your application's unique logic.

𝐊𝐞𝐲 𝐋𝐚𝐧𝐠𝐂𝐡𝐚𝐢𝐧 𝐂𝐨𝐦𝐩𝐨𝐧𝐞𝐧𝐭𝐬
𝐂𝐡𝐚𝐢𝐧𝐬 - Sequences of LLM calls combined with other tools. For example, a chain could (1) search a knowledge base, (2) summarize the information found, and (3) have an LLM generate a response using the summary.
𝐀𝐠𝐞𝐧𝐭𝐬 - Think of agents as AI-powered decision-makers. An agent uses an LLM to choose a series of actions, executes them, and adjusts its behavior based on the results. Example: an agent for writing a marketing email could generate text, select images, and proofread its own work.
𝐌𝐞𝐦𝐨𝐫𝐲 - LangChain allows state and information to be retained between calls to your app, so apps can hold "ongoing conversations" and adapt based on past interactions.
𝐋𝐋𝐌 𝐖𝐫𝐚𝐩𝐩𝐞𝐫𝐬 - LangChain simplifies working with different LLM providers (OpenAI, Google, etc.) by offering a standardized interface.

To summarize, LangChain is a superb library that simplifies building applications powered by large language models (LLMs). Refer to the first comment for the playlist details. #llms #generativeai #nlproc #ai #datascience #deeplearning #langchain #library #python
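The three-step chain described above (search → summarize → respond) can be sketched in plain Python with stub functions. This is the control-flow pattern only, not LangChain's actual API; in a real app, each stub would be replaced by a LangChain component (a retriever, a summarization chain, an LLM call):

```python
# Each function is a hypothetical stand-in for a real component.

def search_knowledge_base(query):
    # Stub retriever: would query a vector store or database.
    return ["Doc: LangChain connects LLMs to external data."]

def summarize(docs):
    # Stub summarizer: would call an LLM; here we just join and trim.
    return " ".join(docs)[:200]

def generate_response(query, summary):
    # Stub generator: would prompt an LLM with the query plus context.
    return f"Q: {query} | Context: {summary}"

def chain(query):
    """Run the three steps in sequence, passing each output forward."""
    docs = search_knowledge_base(query)
    summary = summarize(docs)
    return generate_response(query, summary)

print(chain("What is LangChain?"))
```

The value of the framework is that these hand-offs between steps (and their error handling, retries, and prompt templating) come pre-built instead of being wired by hand.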
-
My views on generative AI don't seem to be a good fit for the general vibe on LinkedIn, but I figured I'd share them anyway:

I don't use LLMs much, mostly just occasionally to understand current capabilities, or for a concrete project (parsing unstructured data being the main use case; that works pretty OK when combined with some heuristics).

For writing code, I find them pretty useless beyond trivial stuff. Writing code is something I enjoy and I'm fast at, and there are plenty of tools and ways to eliminate boilerplate that I find superior to generating code stochastically, which quickly introduces more complexity than any 2000s Java framework I can think of. Certainly not useful enough to worry about the legal implications. Never publishing/selling LLM-generated code is a no-brainer for me right now.

For "content" writing (for me that's largely emails and documentation), I also find them rather useless, because I just don't have an issue with starting from a blank page; I find it actually helps me organize my thoughts. I've seen plenty of people struggle with that, though, just like there are plenty of people struggling with writing code. I can certainly see why it's a life changer for them. For me personally, it's just not.

GenAI art is something I'm thoroughly unexcited about. Most of what I've seen looks quite terrible, possibly because it's being generated by people with little taste (like a spiritual successor to WordArt). I certainly wouldn't have a keen enough eye to figure out how I'd need to change an image to look "right". When I do need art, I happily pay humans for it. If your business wants to go for quantity, that's fine; I want to go for quality. I believe there'll be a place for both for the foreseeable future.

What I do find LLMs rather useful for is as a Google replacement in two specific areas:
1. Tech that's not used widely enough to be omnipresent on StackOverflow, but well documented. Common Lisp is one example.
2. Tech that has evolved a lot over the years, where there are a dozen ways to do things and all you get from Google is posts from 2009 describing some way of doing it that's deprecated (or should be). Django comes to mind.

The answers are still pretty hit and miss, but quite often I've turned to GPT-4 after 15 minutes of Googling and the answer at least pointed me in a better direction. It's pretty useful for getting into tech I haven't worked with in a long time (or ever). I don't paste that code; it's essentially a pointer to find the relevant documentation/discussions. I do wonder if that usefulness might evaporate over time as LLM-generated SEO content becomes more prevalent than human-written content. We'll see, I suppose.
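The "LLM plus heuristics" approach to parsing unstructured data mentioned above can be sketched as a cheap deterministic rule with a model fallback. The date-extraction task, the regex, and the `llm_extract_date` stub are all illustrative assumptions, not the author's actual pipeline:

```python
import re

# Heuristic first: a plain regex for ISO-style dates.
DATE_RE = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")

def llm_extract_date(text):
    # Stub: a real implementation would prompt an LLM only for the
    # inputs the heuristic couldn't handle.
    return None

def extract_date(text):
    """Try the deterministic rule; fall back to the model otherwise."""
    match = DATE_RE.search(text)
    if match:
        return match.group(0)
    return llm_extract_date(text)

print(extract_date("invoice issued 2024-07-08, due soon"))  # → 2024-07-08
```

Routing the easy cases through the regex keeps the output deterministic and cheap; only the messy remainder pays the cost (and inherits the unreliability) of a model call.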
-
Simon Willison's blog is one of the best modern NLP reads we have today. As Jerry Liu of LlamaIndex noted, RAGs are a bit hacky, but they're an incredible hack. They work really well even within the context windows allowed by 3B and 7B models. With that, an understanding of embeddings is crucial. Here's a great overview of how embeddings are used and how embedding similarity is calculated. https://lnkd.in/gxwDQ-gD
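The retrieval step of a RAG pipeline is just a similarity ranking: embed the query, score it against every document embedding, and keep the top matches to fill the context window. A minimal sketch with toy 2-dimensional vectors standing in for real embeddings:

```python
import math

def top_k(query_vec, doc_vecs, k=2):
    """Rank documents by cosine similarity to the query embedding
    and return the indices of the k best matches."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return dot / (norm_a * norm_b)

    ranked = sorted(range(len(doc_vecs)),
                    key=lambda i: cos(query_vec, doc_vecs[i]),
                    reverse=True)
    return ranked[:k]

# Toy document embeddings; in practice these come from an embedding model.
docs = [[1.0, 0.0], [0.9, 0.4], [0.0, 1.0]]
print(top_k([1.0, 0.1], docs, k=2))  # → [0, 1]
```

With small models, `k` is chosen so the retrieved text fits the limited context window, which is exactly why this "hack" works even at 3B and 7B scale.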
Embeddings: What they are and why they matter
simonwillison.net
-
🚀 Day 10 of #100DaysOfML! Explored web scraping and Django. Excited to continue this journey into the world of machine learning. Looking for tips and project ideas - let's connect and learn together! 💻🌟 #MachineLearning #DataScience #AI #LearningJourney
-
This is a great primer on pivoting from cloud-based LLM APIs to Red Hat OpenShift AI, using LangChain.js as a wrapper. It's a hands-on article, but you don't need a lab to click through the 4 sections of code samples :-) https://lnkd.in/gpE-uJB9
How to get started with large language models and Node.js | Red Hat Developer
developers.redhat.com
-
I'm starting up the weekly Monday evening virtual session for North Shore AI Developers again. This Monday I'm planning to show SillyTavern. If you haven't heard of SillyTavern, take this as an opportunity to check it out! It's one of the most popular locally installable LLM interfaces for LLM power users, and it's all the rage on Reddit's r/LocalLLaMA for everyone from script kiddies to seasoned sysadmins.

🏚 SillyTavern Website: https://sillytavernai.com/

As always, I'll be setting up this open source project on my own machine and showing that setup in the session so we can see how things really work. This is a well-respected repository, so it will probably be easy to get running! I encourage anyone attending to try it for themselves too. These days it can be hard to find a reason to build something when it seems like everyone else is building cooler and better stuff. If anyone feels like me and benefits from having deadlines to make something work and people to show it to, I would love to feature you at a future event.

Here are some other cool open source AI projects I plan to feature at upcoming events:
💥 GPT4All from Nomic AI: https://lnkd.in/eeZ4xNZf "Your chats are private and never leave your device"
💥 TypeGPT: https://lnkd.in/eZ9iUffj "a Python application that allows you to invoke various AI's and LLM's from any text field in your operating system."
💥 SuperAgent AI: https://lnkd.in/emfpAcTp "allows any developer to add powerful AI assistants to their applications."
AI Builders Weekly Meetup: What is SillyTavern?, Mon, Jul 8, 2024, 6:00 PM | Meetup
meetup.com
https://medium.com/@cubode/whats-the-best-backend-framework-for-ai-development-in-2024-django-fastapi-or-flask-d52c165ea20c