Haus

Software Development

San Francisco, CA 6,579 followers

Measure marketing incrementality, allocate budget efficiently, and maximize growth.

About us

Maximize growth and allocate your budget efficiently by leveraging the Haus marketing science and experimentation platform for measuring incrementality. Haus enables you to configure robust regional experiments on-demand, utilizing statistical tools and controls to achieve the perfect balance between speed and precision. Using only your first-party data, you'll gain fast and accurate insights into incrementality across all marketing channels. Benefit from cutting-edge advancements in causal inference and machine learning to ensure unmatched accuracy and precision in your measurements. Join leading innovative companies like FanDuel, Sonos, Hims & Hers, and Caraway in making the shift to this gold standard of marketing measurement.

Website
https://bit.ly/48zpWsA
Industry
Software Development
Company size
51-200 employees
Headquarters
San Francisco, CA
Type
Privately Held
Founded
2021
Specialties
Marketing Measurement, Incrementality, and Marketing Experimentation

Products

Locations

Employees at Haus

Updates

  • Noa Gutterman, Senior Director of Growth at TextNow and former Director of Marketing at VSCO, will join Zach Epstein and Olivia Kory for our July 15th Open Haus on mastering mobile app measurement: the unique challenges mobile marketers face, attributing conversions to marketing channels, KPIs, paid user acquisition, and how to set up useful mobile app marketing experiments. Join us for the 45-minute live Zoom event, with a Q&A at the end for questions about testing, experiments, incrementality, and more. See you there! Register here 👉 https://lnkd.in/gHJirNQx


  • 700%: that's how much more social and video marketing spend influences sales on Amazon, relative to search, when compared with DTC, according to internal Haus data. Take it with a grain of salt: every brand is different, and it's key to test for your unique business. With Amazon Prime Day 2024 right around the corner, there's no better time to dig into what our trove of customer experiment data reveals. Here's a preview:
    - Across media channels and ad formats, 97% of incrementality tests in our database show a non-zero, positive lift in Amazon sales.
    - One out of every six tests shows greater sales lift on Amazon than on DTC.
    - 83% of experiments drive a >10% halo effect on Amazon sales.
    Dive into the full insights: ⤵️

  • In marketing analytics, there's often pressure to present data in a way that aligns with stakeholders' expectations, which gets in the way of truly data-driven decisions. At Haus, we've developed an approach to address this common issue: hands-free analysis. What does this mean?
    - Our pipelines and analysis configurations run without human intervention at the end.
    - No knob-turning or lever-pulling to manipulate outcomes.
    - Clear decision gates: "If the data looks like X, we do Y."
    This approach ensures:
    - Consistency across all analyses.
    - No bias from result-seeking behavior.
    - Trustworthy insights that truly inform business decisions.
    While this discipline may come naturally in smaller businesses, larger companies often struggle with the temptation to "massage" the data. Our hands-free method removes that possibility entirely. The bottom line: marketing experiments aren't conducted to tell you what you want to hear; they're meant to reveal the hard truths that help you optimize your marketing budget.

  • Haus reposted this

    Joe Wyer, Head of Science @ Haus | We're Hiring!

    There's more to geo testing than matched markets. A common misconception is that matched market tests are the only way to do geo testing. Economists and statisticians have made enormous progress in geo testing, so you don't have to settle: you can run regional experiments instead. Matched market tests, for example, are when you do something in Detroit and compare it to Milwaukee. Regional experiments, on the other hand, randomly sample a larger number of markets into control and treatment groups. Matched market methods have three limitations compared to regional experiments:
    1. Less precision. With only 1-3 locations in the treated zone, the error on any matched market analysis will be much higher than for a broader test that can average out noise across many regions.
    2. Less transferable insights. Different regions may respond differently to the same creatives and campaign configurations. As you isolate your treatment down to 1-3 locations, you lose confidence that the estimated impacts for those areas will transfer to the rest of the country.
    3. Downward bias on estimated ROI. Depending on how much you spike your marketing in the matched markets, you may be flooding them with far more spend than you would in a business-as-usual setting. If so, you will be further down the diminishing-marginal-return curve, making your overall ROI look smaller.
    The most accurate and precise regional experiments use frontier methods:
    * At Haus, instead of taking the naive average of a few matched markets, we build synthetic control models that weight control regions to best fit the targeted regions. For example, for your business, Seattle might be comparable to 60% San Francisco, 30% Denver, and 10% San Diego.
    * And we don't stop at traditional synthetic control: Haus has PhD scientists focused on continually improving the performance of the models we use.
    This is why science matters.
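The weighting idea in the post can be sketched in a few lines of Python. Everything below is a toy illustration under stated assumptions: three made-up control series, a brute-force grid search over simplex weights standing in for the constrained least-squares fit a real synthetic control implementation would use, and invented region data; it is not Haus's actual model.

```python
import random

def fit_synthetic_control(controls, target, step=0.01):
    """Find non-negative weights summing to 1 so a weighted blend of
    control-region series best matches the target region's pre-treatment
    series. Toy version: exhaustive grid search over the simplex, which
    only scales to a handful of controls; real implementations solve a
    constrained least-squares problem instead."""
    a, b, c = list(controls)
    best, best_err = None, float("inf")
    grid = [i * step for i in range(int(round(1 / step)) + 1)]
    for wa in grid:
        for wb in grid:
            wc = 1.0 - wa - wb
            if wc < 0:
                continue  # outside the simplex
            err = sum(
                (wa * x + wb * y + wc * z - t) ** 2
                for x, y, z, t in zip(controls[a], controls[b],
                                      controls[c], target)
            )
            if err < best_err:
                best, best_err = {a: wa, b: wb, c: wc}, err
    return best

# Made-up daily sales for three control regions over 60 pre-period days.
rng = random.Random(0)
sf = [100 + rng.gauss(0, 1) for _ in range(60)]
denver = [80 + rng.gauss(0, 1) for _ in range(60)]
san_diego = [90 + rng.gauss(0, 1) for _ in range(60)]
# Construct "Seattle" as exactly the 60/30/10 blend from the post.
seattle = [0.6 * x + 0.3 * y + 0.1 * z
           for x, y, z in zip(sf, denver, san_diego)]

weights = fit_synthetic_control(
    {"SF": sf, "Denver": denver, "San Diego": san_diego}, seattle
)
# Recovers weights near SF: 0.6, Denver: 0.3, San Diego: 0.1
```

The fitted blend then serves as the counterfactual: after treatment begins, the gap between the target region and its synthetic twin is the estimated incremental effect.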

  • Objectivity and precision are often overlooked when designing marketing experiments, yet they should be top goals for any marketer running tests.
    1. Bias means your results are consistently off-target.
    2. Poor precision means your results are all over the place, even if they're right on average.
    Your aim as a marketer is to minimize bias and maximize precision to get as close to the truth as possible. So, how can we improve? A powerful tool we use at Haus is a placebo test (also known as an A/A test). Here's how it works:
    - Set up your experiment as usual, but...
    - ...don't actually change anything (keep everything identical between your "test" and "control" groups).
    - Finally, analyze the results as if you had run a real test.
    This exercise shows how much your results can fluctuate due to noise alone. If your real test results fall far outside this range, you can be more confident they represent a true effect. Remember: the choices you make in experiment design and analysis affect bias and precision. Take the time to get it right.
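The placebo procedure described above can be sketched as a small simulation. This is illustrative only: invented region data, a plain difference-in-means "analysis", and made-up function names, not Haus's actual pipeline.

```python
import random
import statistics

def placebo_lifts(regions, n_placebos=1000, seed=0):
    """A/A (placebo) test: repeatedly split regions into fake "test"
    and "control" groups with no actual treatment, and record the
    apparent lift each time. The spread of these lifts shows how far
    an estimate can move on noise alone.
    regions: dict of region name -> daily KPI series (e.g. sales)."""
    def group_mean(series_list):
        # average daily KPI across a group's regions
        return statistics.mean(sum(s) / len(s) for s in series_list)

    rng = random.Random(seed)
    names = list(regions)
    lifts = []
    for _ in range(n_placebos):
        rng.shuffle(names)
        half = len(names) // 2
        fake_test = [regions[n] for n in names[:half]]
        fake_ctrl = [regions[n] for n in names[half:]]
        t, c = group_mean(fake_test), group_mean(fake_ctrl)
        lifts.append((t - c) / c)  # apparent lift, with no treatment at all
    return lifts

# 20 made-up regions: 28 days of sales that are pure noise around 100.
rng = random.Random(42)
regions = {f"r{i}": [100 + rng.gauss(0, 10) for _ in range(28)]
           for i in range(20)}
lifts = placebo_lifts(regions)
spread = statistics.stdev(lifts)
# A real test's measured lift should fall well outside roughly
# +/- 2 * spread before you treat it as a true effect.
```

The histogram of `lifts` is the "noise-only" range the post describes; comparing a real experiment's lift against it is a simple sanity check on both bias (the placebo lifts should center on zero) and precision (their spread).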

  • Think Amazon ads are the only effective marketing channel for Prime Day? Think again. Social and video channels like Meta, TikTok, and YouTube can significantly boost your Amazon sales. At Haus, we've run hundreds of experiments with brands selling across both direct-to-consumer (DTC) and Amazon to help them understand the holistic and incremental impact of their media spend. One thing we saw consistently? Their DTC advertising had a positive impact on Amazon sales a whopping 97% of the time. Learn more about these halo effects and how to use them to optimize your marketing strategy for Prime Day success. Check out our latest blog post in the comments below!

  • HexClad would've missed over half of Meta's incremental impact without extended measurement. Do you have a high-AOV product, or a product with a long consideration phase? When you test for incrementality with a long-purchase-cycle product, it's important to keep measuring treatment and control regions for a few weeks after the treatment to capture any lagging effects. HexClad Cookware's most popular product is a 12-piece pan set priced at $699.99. Given the high price, customers typically have a long consideration phase before purchasing. Connor Rolain and team ran an incrementality test on Meta, measuring its omnichannel impact across DTC and Amazon. To capture the long purchase cycle, they added a three-week window after the test to measure the lagging impact of the ads. The results before and after the post-treatment window were dramatically different. Check out the results of the study below, and shoutout to Connor Rolain for implementing a strategic test design customized for his business:

    Hexclad measures Meta's incremental impact for a high AOV product with a long consideration phase.

    haus.io
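The extended-window idea can be illustrated with toy numbers. The series, the 28-day flight, and the flat 21-day lag below are all invented for illustration, not HexClad's actual data or results.

```python
def cumulative_lift(treat, ctrl, treat_days, lag_days=21):
    """Compare measured lift using only the treatment window vs.
    extending measurement `lag_days` past the end of the flight.
    treat/ctrl: daily sales for treatment and control regions."""
    def lift(days):
        t, c = sum(treat[:days]), sum(ctrl[:days])
        return (t - c) / c

    return lift(treat_days), lift(treat_days + lag_days)

# Toy series: ads run for 28 days, but long-consideration purchases
# keep landing for ~3 weeks after the flight ends.
ctrl = [100.0] * 49
treat = [104.0] * 28 + [106.0] * 21  # lagged conversions post-flight
during, extended = cumulative_lift(treat, ctrl, treat_days=28)
# `extended` > `during`: cutting measurement off at the end of the
# flight understates the channel's true incremental impact.
```

The same comparison with lagged effects set to zero would make `during` and `extended` agree, which is why the post-treatment window matters specifically for long-purchase-cycle products.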

  • Incrementality testing comes with a cost, absolutely. But if you're doing it well, the cost of NOT testing is much higher. And we get it: no one wants to risk missing out on revenue by holding out a subset of a channel, or by raising or lowering the budget on a channel you think is high-performing. But without regularly testing the incrementality of your marketing channels, you're flying blind. When you're spending millions of dollars in a channel, the opportunity cost of an experiment is low relative to the value of knowing the channel is actually incremental.

  • Experiments are the only way to prove the causality of your marketing. But poorly run tests can lead to inconclusive results, or worse, lead you to make suboptimal decisions based on faulty data. So, how do we build experiments we can trust? We're excited to bring in Joe Wyer, PhD, Head of Science at Haus and former Senior Economist at Amazon, to our next Open Haus to chat about the biggest mistakes brands make when running marketing experiments and the lessons he's learned from evaluating $1B+ in marketing spend. We'll also talk about:
    - The top five errors brands make when designing and running marketing experiments.
    - How to avoid these errors to ensure more reliable and effective campaign outcomes.
    - Lessons from high-stakes testing scenarios.
    - Insights into better experiment design and implementation.
    It will be a 45-minute live Zoom event, with a Q&A at the end for any questions about testing, experiments, incrementality, and more. If you'd like to join us, the link to register is in the comments.

Similar pages

Funding

Haus 4 total rounds

Last Round

Series unknown

US$ 17.5M

See more info on Crunchbase