Subconscious AI

Research Services

New York, NY · 853 followers

Demand without Data. Generative AI for Causal Experiments.

About us

Conduct research faster, with higher quality, and more ethically. Our technology allows users to run causal experiments on any human behavior at a fraction of the cost and time of traditional methods. Get started today: https://docs.subconscious.ai/ Join our Discord here: https://discord.gg/paMzAcqEQ2

Website
https://subconscious.ai
Industry
Research Services
Company size
11-50 employees
Headquarters
New York, NY
Type
Privately Held
Founded
2022


Updates

    Why settle for overpriced, slow research? 💸 Paying $150k per study for low-quality results doesn’t make sense anymore. There’s a smarter way to get the insights you need.

    Imagine gaining high-quality insights at 100x better price, speed, and control. Our advanced AI technology makes this possible, transforming market research from a costly burden into a powerful, efficient tool.

    No more burning through your budget for subpar data. Traditional methods are expensive, time-consuming, and often deliver data of questionable quality. With our solutions, you can achieve accurate, reliable data without the hefty price tag. We’ve moved beyond outdated, inefficient methods: our technology ensures you get the high-quality data you need quickly and cost-effectively.

    It’s time to revolutionize how you approach market research. Ready to make the switch? Discover how you can achieve more with less and turn your research challenges into opportunities. Join us in embracing the future of research. www.subconscious.ai

    🔍 Imagine if we could reliably simulate population behavior. This breakthrough would revolutionize marketing and policy research by solving the elusive "say-do" gap. At Subconscious AI, we are doing just that: our advanced AI technology bridges the gap between stated intentions and real-world actions.

    Traditional methods often struggle with accuracy and reliability, leading to inconsistent results and wasted resources. Our approach provides:
    1. High precision: simulating human behavior with near-human accuracy.
    2. Efficiency: faster insights without extensive timelines.
    3. Cost-effectiveness: high-quality results at a fraction of the cost.

    This capability opens new avenues for understanding consumer behavior, designing effective policies, and conducting impactful research. Whether predicting market trends or crafting data-driven policies, our technology offers a reliable solution.

    Join us in transforming the way we understand human behavior. See how our simulations can enhance your research efforts. www.subconscious.ai

    Happy Tuesday! Here’s what we’ve been up to:
    🌟 1-click social science replications: with Subconscious AI, you can effortlessly fork and run social science, economics, and psychology experiments.
    📄 We've successfully replicated over 300 studies! The first 40 replications are published here: https://lnkd.in/dJzJiZHP
    🌎 The global community has replicated about a dozen studies. See our review here: https://lnkd.in/d9uMJAQP
    ⚙️ We have updated our documentation, giving anyone programmatic access to top-tier social science research via API: https://lnkd.in/duQHqQtW
    www.subconscious.ai

    April Fools: who is fooling whom? A year ago, CloudResearch posted an April Fools’ Day spoof about creating AI-generated responses via AI-generated simulated humans. A year later, innovative researchers are already using LLMs to enhance social and economic research and reduce its costs.

    This paper (https://lnkd.in/dfr9TCeN) is a 2023 review of the methods, covering where LLMs work and where they don't. Some examples:
    - Gilardi et al. (2023) present evidence that ChatGPT "exceeds that of human annotators in four out of five tasks".
    - Törnberg (2023) examined the accuracy, reliability, and bias of ChatGPT when classifying political affiliations, suggesting that LLMs have "substantial potential for use in the social sciences."
    - Hämäläinen et al. (2023) explored using LLMs for designing and assessing experiments.
    - Kim and Lee (2023) analyzed how LLMs could augment surveys and enable missing-data imputation, retrodiction, and zero-shot prediction. Their conclusion matches our approach at Subconscious AI: that "LLMs have the potential to address some of the challenges associated with survey research" and "should be used in conjunction with other methods and approaches" to ensure the accuracy and validity of survey results.

    By using both humans and our Digital Twin of Earth, Subconscious AI dramatically decreases the cost of research while simultaneously increasing the information (reducing the entropy) of any study. www.subconscious.ai

    A year ago, using LLMs to predict human behavior gave us a signal only slightly stronger than noise. We took that signal and have been growing it for the past year. Over that time, we've learned a lot. We've worked with Nobel Prize winners and some of the best social scientists, causal modelers, and statisticians in the world, including friends from Two Sigma and IBM Research. And the signal grew.

    Our results are now nearly indistinguishable from human responses. We call this bioequivalence. When our Digital Twin of Earth is used to predict human behavior, our causal models explain about 75% of it. When humans are used to predict human behavior, the best-designed experiments in the world explain about 80% of it. This means we can recover roughly 94% (0.75 / 0.80) of what a human study explains, for market-research purposes, at ~1,000x lower cost.

    Below is an illustration of our Spearman correlation with human baselines, across several hundred human-baseline replication studies over the past year. And we continue to grow. Also see: https://lnkd.in/gX2aXYEH www.subconscious.ai
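A minimal sketch of the arithmetic behind the post above, using the 75% and 80% figures it cites. The Spearman helper is a generic textbook implementation (assuming no tied ranks), not Subconscious AI's actual pipeline:

```python
def spearman(x, y):
    """Spearman rank correlation for two samples without ties."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    # Sum of squared rank differences, then the classic closed form.
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Explained shares of human behavior, as stated in the post:
model_r2 = 0.75  # Digital Twin of Earth vs. human behavior
human_r2 = 0.80  # best human-run experiments vs. human behavior

# How much of a top human study the model recovers: 0.75 / 0.80 ≈ 94%.
coverage = model_r2 / human_r2
```

The "bioequivalence" claim is this ratio: model-explained variance over human-experiment-explained variance, so the two R² figures fix the coverage number directly.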

    Prepper Sam Altman regularly jokes about existential risk:

    "I have like structures, but I wouldn't say a bunker. None of this is gonna help if AGI goes wrong.” - Sam Altman

    “AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.” - Sam Altman

    Now Sam is asking for $7 trillion to increase existential risk while committing only $10 million to safety. This is dangerous and not human-aligned.

    What if we could increase knowledge and understanding of human behavior without increasing existential risk? Subconscious.ai is building human-level AI (specifically not superhuman) for social research, market research, and product design. Want to test it yourself? Join our waitlist: https://lnkd.in/dW4VD_Ve #ResponsibleAI #HumanAlignedAI

    Responding to user demand, we can now run 80 years of psychology, sociology, and economics experiments against *any* LLM! This opens up several new use cases, such as bias detection.

    For example, we replicated the Hainmueller immigration study below using Cohere, GPT-3/4, Llama, and Mistral. Acting in the role of an immigration admissions officer, LLMs explain ~80% of human decision-making (roughly in line with human-to-human estimates). We did, however, find some interesting biases: Cohere may be less likely to admit male immigrant applicants (second-row highlight), and Mistral may be more likely to admit immigrant applicants from Iraq (middle highlight).

    This is part of our data flywheel: we deploy the most human-like LLMs for any domain. We're always looking for market researchers willing and able to run free causal studies! Links in comments.

    It is easy to spot bias in a generative image model (see images). Detecting bias, deception, or non-human alignment in a generative text model is much less trivial. What if we could detect bias just as easily in text?

    Subconscious.ai takes any language model and runs 80 years of psychology, sociology, and economics experiments against it as a respondent. This makes detecting bias, deception, and misalignment with humans as easy as detecting biased images. Work with us: https://lnkd.in/dW4VD_Ve #SubconsciousAI #BiasDetection #HumanAlignedAI #ResponsibleAI

    Mark Zuckerberg recently claimed that there's no causal link between social media and declining mental health in youth. It's no surprise there's "no evidence" directly linking social media to worse mental health outcomes: "proving harm" would require randomized controlled trials (RCTs) that expose humans to harm, and exposing participants to harm, even for the greater good, is deeply unethical in science (and hopefully elsewhere)!

    In behavioral research, this principle is called beneficence, and it is safeguarded by Institutional Review Boards (IRBs) before any experiment begins. The point of this ethical oversight is to ensure no harm comes to participants.

    What if we could prove that things like social media cause harm to humans without putting humans at risk? Could we create an entirely new form of science? Subconscious.ai does. Want to test it yourself? Join our waitlist: https://lnkd.in/dW4VD_Ve #EthicalAI #EthicalResearch #SubconsciousAI
