Congratulations to SEED’s Alessandro Sestini for his presentation at the IEEE World Congress on Computational Intelligence 2024 last week in Yokohama, Japan! Alessandro presented the paper he co-authored with Derek Yadgaroff, Konrad Tollmar, Ayca Ozcelikkale, and Linus Gisslén. It was titled “Improving Generalization in Game Agents with Data Augmentation in Imitation Learning.” Check out the paper here: https://lnkd.in/gtXx3JiZ Video of the presentation is coming soon!
SEED (Electronic Arts)
Computer Games
Redwood City, California 6,396 followers
Electronic Arts Studios' Applied R&D Division. Computer Graphics, Machine Learning, Future Experiences.
About us
SEED is a cross-disciplinary R&D team within Electronic Arts. Our mission is to explore, build, and help define the future of interactive entertainment. SEED’s research targets meaningful areas of applied innovation that can be expressed directly in new player experiences, future connected services, and advanced development techniques. Our research horizon looks 2–4 years ahead and beyond. Our projects target specific game needs but with an eye towards solutions that can benefit all games. SEED also exists to educate, share, and be a great partner. We collaborate with game teams and industry partners, and we publish and present our research within our industry and to the public.
- Website
- https://ea.com/seed
- Industry: Computer Games
- Company size: 5,001-10,000 employees
- Headquarters: Redwood City, California
- Type: Public Company
- Founded: 2015
- Specialties: Computer Graphics, Machine Learning, R&D, Rendering, Virtual Reality, Augmented Reality, Publications, AI, and Animation
Updates
How do we best construct game avatars from photos? There’s a great deal of interest in personalizing game avatars with photos of players’ faces, but training an ML model to predict 3D facial parameters from a photo requires abundant training data. In this presentation, SEED’s Igor Borovikov discusses work in progress on optimizing the distribution of that training data. Igor delivered the presentation at the Center for Advanced Signal and Image Sciences (CASIS) 28th Annual Workshop on 5 June 2024, held at the Lawrence Livermore National Laboratory. Watch the presentation and download the slide deck: https://lnkd.in/gmUVXkfu
How do we give superpowers to game audio designers using AI? Take a peek at how ML research and development works at SEED with Mónica Villanueva Aylagas and Jorge Garcia. In this presentation, they walk us through the implementation of the "ExFlowSions" research project, which uses machine learning to perform style transfer on sound effects. While the research is still in its early days, the presentation is a great showcase of their ideation, workflow, and system implementation. Watch the full-length presentation and download the slide deck: https://lnkd.in/gFNVGCQV #gamedev #ai #ml #WeAreEA
Join SEED’s own Farah Ali (VP of Technology Growth Strategy) for an insightful episode of “The Creative and the AI” podcast. AI brings enormous potential to art direction, coding, and other parts of the game design process. Curious about what’s happening in game production right now to enhance what you see on screen? Check out the podcast! https://lnkd.in/g2Y7aeva #WeAreEA
SEED’s own Igor Borovikov and Karine Levonyan, PhD attended the Center for Advanced Signal and Image Sciences (CASIS) 28th Annual Workshop, held this week at the Lawrence Livermore National Laboratory. Igor presented their paper “Towards Optimal Training Distribution for Photo-to-Face Models”, which is about training ML systems to generate facial models from photos. Lots of great questions and audience engagement! Congrats Igor and Karine! #gamedev #WeAreEA
Did you know that SEED's research is often available as open source software? By releasing its tools as open source, SEED gives researchers and developers access to cutting-edge tools and innovations. Check out our Open Source page: https://lnkd.in/gUEZDTCX #gamedev #WeAreEA
Our work got a mention from ESPN! https://lnkd.in/g84Sskf8 Deep inside this terrific profile article on Daryl Holt (EA's senior VP in charge of EA SPORTS College Football) is a great overview of GIBS – Global Illumination Based on Surfels. GIBS is a powerful lighting technology used in College Football, spearheaded by SEED's own Henrik Halén in partnership with our friends at Frostbite and EA SPORTS. Learn more about GIBS here: https://lnkd.in/gCMmykKD Congrats to Henrik for the shout-out from ESPN! #gamedev #WeAreEA
Going beyond white noise for temporal and spatial denoising in real-time rendering can produce better results with no increase in rendering time. In this full-length video presentation, SEED’s Alan Wolfe discusses the use of different types of noise for random number generation, focusing on the application of blue noise in rendering images for gaming. Alan’s presentation covers:
- Randomness and fairness in number generation
- Stochastic rendering
- Noise textures and error patterns
Check out the presentation video here: https://lnkd.in/dZcB8Ksb
Are you attending the i3D symposium this week in Philadelphia? Tomorrow, be sure to check out the paper presented by SEED’s William Donnelly on Filter-Adapted Spatio-Temporal Sampling for Real-Time Rendering. https://lnkd.in/gR7dQCE6 And we hope you caught the keynote this morning from Fabio Zinno and SEED’s Harold Chaput on the impact of AI on gaming! #gamedev #WeAreEA
SEED (Electronic Arts) reposted this
Check out this awesome paper from SEED on how to tailor the frequencies of rendering noise to improve image denoising in real-time rendering. Read the research paper: https://lnkd.in/gR7dQCE6 The paper is being presented next week at ACM I3D in Philadelphia and was authored by William Donnelly, Alan Wolfe, Judith Bütepage, and Jon Valdes. Stochastic sampling techniques are everywhere in real-time rendering, where performance constraints force us to use low sample counts, which leads to noisy intermediate results. To remove this noise, temporal and spatial denoising in post-processing is an integral part of the real-time graphics pipeline. This paper's main insight is that we can optimize the samples used in stochastic sampling to minimize the post-processing error. The 2024 ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games is 8-10 May in Philadelphia. #gamedev #WeAreEA
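The core insight — choose the sampling noise so that the error that survives the denoising filter is small, rather than making each pixel's sample good in isolation — can be illustrated with a toy 1D experiment. This is our own sketch, not the method from the paper: each pixel takes a single binary sample of a constant signal, and a box filter plays the role of the denoiser. With independent white noise, errors inside a filter window are uncorrelated and the filter only shrinks them by about 1/sqrt(k); with noise chosen so that every filter footprint sees a stratified set of values (a crude stand-in for the paper's optimized noise textures), the filtered error is far smaller.

```python
import math
import random

def filtered_rmse(noise, signal=0.37, k=8):
    """One binary sample per pixel (is noise[i] < signal?), then a k-wide
    box filter as the 'denoiser'; returns the RMSE of the filtered estimates."""
    samples = [1.0 if u < signal else 0.0 for u in noise]
    errs = []
    for i in range(0, len(samples) - k + 1, k):
        est = sum(samples[i:i + k]) / k      # denoised estimate for this window
        errs.append((est - signal) ** 2)
    return math.sqrt(sum(errs) / len(errs))

random.seed(0)
n, k = 4096, 8

# Baseline: independent white noise per pixel.
white = [random.random() for _ in range(n)]

# Filter-aware noise: each k-pixel filter footprint gets one jittered value
# per stratum of [0, 1), so the count of samples below the signal level is
# nearly exact in every window, and the error after filtering is tiny.
adapted = []
for _ in range(n // k):
    window = [(j + random.random()) / k for j in range(k)]
    random.shuffle(window)  # order within the window is irrelevant to a box filter
    adapted.extend(window)

rmse_white = filtered_rmse(white, k=k)
rmse_adapted = filtered_rmse(adapted, k=k)
print(f"white noise RMSE after box filter:   {rmse_white:.4f}")
print(f"adapted noise RMSE after box filter: {rmse_adapted:.4f}")
```

In this toy setup the stratification is aligned to the filter footprints; the paper instead optimizes noise textures for general spatio-temporal filters, but the principle — shaping the noise for the filter that follows — is the same.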