Antoine Toisoul

United Kingdom

About

Senior Research Scientist at Meta GenAI working on video and 3D generation. PhD in…


Experience & Education

  • Meta


Publications

  • Accessible GLSL Shader Programming

    Proc. EuroGraphics 2017

    Teaching fundamental principles of Computer Graphics requires a thoroughly prepared lecture alongside practical training. Modern graphics programming rarely provides a straightforward application programming interface (API) and the available APIs pose high entry barriers to students. Shader-based programming of standard graphics pipelines is often inaccessible through complex setup procedures and convoluted programming environments. In this paper we discuss an undergraduate entry-level lecture with its accompanying lab exercises. We present a programming framework that makes interactive graphics programming accessible while allowing individual tasks to be designed as instructive exercises that solidify the content of individual lecture units. The discussed teaching framework provides a well-defined programmable graphics pipeline with geometry shading stages and image-based post-processing functionality based on framebuffer objects. It is open-source and available online.

    Other authors
    • Daniel Rueckert
    • Bernhard Kainz
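
The framework described above is GLSL-based; as a language-neutral illustration of what one of its pipeline stages does, here is a minimal software sketch (not the paper's framework) of an image-based post-processing pass over a framebuffer, with a per-pixel function playing the role of a fragment shader:

```python
import numpy as np

def fragment_shader(uv, texture):
    # A toy post-process: sample the framebuffer at uv and invert the colour.
    h, w, _ = texture.shape
    x = min(int(uv[0] * (w - 1)), w - 1)
    y = min(int(uv[1] * (h - 1)), h - 1)
    return 1.0 - texture[y, x]

def post_process(framebuffer):
    # Run the "shader" once per pixel, like a full-screen quad pass.
    h, w, _ = framebuffer.shape
    out = np.empty_like(framebuffer)
    for y in range(h):
        for x in range(w):
            uv = (x / (w - 1), y / (h - 1))
            out[y, x] = fragment_shader(uv, framebuffer)
    return out

fb = np.zeros((4, 4, 3))   # a tiny all-black "framebuffer"
result = post_process(fb)  # black inverted to white
```

In a real GLSL setup the loop is implicit: the GPU invokes the fragment shader per pixel, and the framebuffer object supplies the input texture.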
  • Acquiring Spatially Varying Appearance of Printed Holographic Surfaces

    SIGGRAPH Asia 2018

    We present two novel and complementary approaches to measure diffraction effects in commonly found planar spatially varying holographic surfaces. Such surfaces are increasingly found in various decorative materials such as gift bags, holographic papers, clothing and security holograms, and produce impressive visual effects that have not been previously acquired for realistic rendering. Such holographic surfaces are usually manufactured with one-dimensional diffraction gratings that vary in periodicity and orientation over an entire sample in order to produce a wide range of diffraction effects such as gradients and kinematic (rotational) effects. Our proposed methods estimate these two parameters and allow an accurate reproduction of these effects in real-time. The first method simply uses a point light source to recover both the grating periodicity and orientation in the case of regular and stochastic textures. Under the assumption that the sample is made of the same repeated diffractive tile, good results can be obtained using just one to five photographs on a wide range of samples. The second method is based on polarization imaging and enables an independent high-resolution measurement of the grating orientation and relative periodicity at each surface point. The method requires a minimum of four photographs for accurate results, does not assume repetition of an exemplar tile, and can even reveal minor fabrication defects. We present point light source renderings with both approaches that qualitatively match photographs, as well as real-time renderings under complex environmental illumination.

    Other authors
    • Daljit Singh Dhillon
    • Abhijeet Ghosh
  • Factorized Higher-Order CNNs with an Application to Spatio-Temporal Emotion Estimation

    CVPR 2020

    Training deep neural networks with spatio-temporal (i.e., 3D) or multidimensional convolutions of higher-order is computationally challenging due to millions of unknown parameters across dozens of layers. To alleviate this, one approach is to apply low-rank tensor decompositions to convolution kernels in order to compress the network and reduce its number of parameters. Alternatively, new convolutional blocks, such as MobileNet, can be directly designed for efficiency. In this paper, we unify these two approaches by proposing a tensor factorization framework for efficient multidimensional (separable) convolutions of higher-order. Interestingly, the proposed framework enables a novel higher-order transduction: a network can be trained on a given domain (e.g., 2D images or N-dimensional data in general) and then transduced to generalize to higher-order data such as videos (or (N+K)-dimensional data in general), capturing for instance temporal dynamics while preserving the learnt spatial information.
    We apply the proposed methodology, coined CP-Higher-Order Convolution (HO-CPConv), to spatio-temporal facial emotion analysis. Most existing facial affect models focus on static imagery and discard all temporal information. This is due to the above-mentioned burden of training 3D convolutional nets and the lack of large bodies of video data annotated by experts. We address both issues with our proposed framework. Initial training is first done on static imagery before using transduction to generalize to the temporal domain. We demonstrate superior performance on three challenging large-scale affect estimation datasets, AffectNet, SEWA, and AFEW-VA.

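
The parameter saving behind a CP-factorized convolution can be seen on a hypothetical rank-1 spatio-temporal kernel (an illustration of the general idea, not the paper's trained kernels): a dense K×K×K kernel is replaced by a few 1-D factors whose outer product reconstructs it.

```python
import numpy as np

# A rank-1 (CP, R=1) spatio-temporal kernel: outer product of three 1-D filters.
t = np.array([1.0, 2.0, 1.0])    # temporal factor
h = np.array([1.0, 0.0, -1.0])   # vertical factor
w = np.array([0.25, 0.5, 0.25])  # horizontal factor

# Dense 3x3x3 kernel: full[i, j, k] = t[i] * h[j] * w[k].
full = np.einsum('i,j,k->ijk', t, h, w)

# The CP form stores only the factors (3 + 3 + 3 = 9 numbers instead of 27),
# and convolving with it amounts to three cheap 1-D convolutions.
print(full.size, t.size + h.size + w.size)  # → 27 9
```

A rank-R kernel is a sum of R such outer products; the gap between K³ and 3KR parameters grows quickly with kernel size and dimensionality, which is what makes higher-order convolutions tractable.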
  • Image Based Relighting using Room Lighting Basis

    SIGGRAPH 2015 poster

    Other authors
    • Abhijeet Ghosh
  • Image-Based Relighting using Room Lighting Basis

    CVMP 2016

    We present a novel and practical approach for image-based relighting that employs the lights available in a regular room to acquire the reflectance field of an object. The lighting basis includes diverse light sources such as the house lights and the natural illumination coming from the windows. Once the data is captured, we homogenize the reflectance field to take into account the variety of light source colours to minimise the tone difference in the reflectance field. Additionally, we measure the room dark level corresponding to a small amount of global illumination with all lights switched off and blinds drawn. The dark level, due to some light leakage through the blinds, is removed from the individual local lighting basis conditions and employed as an additional global lighting basis. Finally, we optimize the projection of a desired lighting environment onto our room lighting basis to get a close approximation of the environment with our sparse lighting basis. We achieve plausible results for diffuse and glossy objects that are qualitatively similar to results produced with dense sampling of the reflectance field, including using a light stage, and we demonstrate effective relighting results in two different room configurations. We believe our approach can be applied for practical relighting applications with general studio lighting.

    Other authors
    • Abhijeet Ghosh
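
The core mechanism of image-based relighting, forming the relit image as a weighted sum of basis photographs, with weights obtained by projecting a target environment onto the lighting basis, can be sketched with synthetic data (illustrative only; the capture, colour homogenization, and dark-level handling are the paper's contributions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Reflectance field: one photograph of the object per room light
# (here 5 lights, each image flattened to a 12-pixel vector).
basis = rng.random((5, 12))

# A target lighting environment that happens to lie in the span of the
# basis; projecting it onto the basis recovers per-light weights
# (clipped to be non-negative, since real lights can only add energy).
target = 0.7 * basis[0] + 0.2 * basis[3]
weights, *_ = np.linalg.lstsq(basis.T, target, rcond=None)
weights = np.clip(weights, 0.0, None)

relit = weights @ basis  # weighted sum of basis photographs
print(np.allclose(relit, target, atol=1e-8))  # → True
```

With a sparse basis like room lights, a general environment is only approximated, which is why the paper poses this projection as an optimization.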
  • Practical Acquisition and Rendering of Diffraction Effects in Surface Reflectance

    ACM Transactions On Graphics 2017

    We propose two novel contributions for measurement based rendering of diffraction effects in surface reflectance of planar homogeneous diffractive materials. As a general solution for commonly manufactured materials, we propose a practical data-driven rendering technique and a measurement approach to efficiently render complex diffraction effects in real-time. Our measurement step simply involves photographing a planar diffractive sample illuminated with an LED flash. Here, we directly record the resultant diffraction pattern on the sample surface due to a narrow band point source illumination. Furthermore, we propose an efficient rendering method that exploits the measurement in conjunction with the Huygens-Fresnel principle to fit relevant diffraction parameters based on a first order approximation.
    Our proposed data-driven rendering method requires the precomputation of a single diffraction look-up table for accurate spectral rendering of complex diffraction effects. Secondly, for sharp specular samples, we propose a novel method for practical measurement of the underlying diffraction grating using out-of-focus “bokeh” photography of the specular highlight. We demonstrate how the measured bokeh can be employed as a height field to drive a diffraction shader based on a first order approximation for efficient real-time rendering. Finally, we also derive analytic solutions for a few special cases of diffraction from our measurements and demonstrate realistic rendering results under complex light sources and environments.

  • Real time rendering of realistic surface diffraction with low rank factorisation

    SIGGRAPH 2017 Poster

  • Real-time rendering of realistic surface diffraction with low rank factorisation

    CVMP 2017

    We propose a novel approach for real-time rendering of diffraction effects in surface reflectance in arbitrary environments. Such renderings are usually extremely expensive as they require the computation of a convolution at real-time framerates. In the case of diffraction, the diffraction lobes usually have high frequency details that can only be captured with high resolution convolution kernels, which makes calculations even more expensive. Our method uses a low rank factorisation of the diffraction lookup table to approximate a 2D convolution kernel by two simpler low rank kernels, which allows the convolution to be computed at real-time framerates using two rendering passes. We show realistic renderings in arbitrary environments and achieve performance of 50 to 100 FPS, making it possible to use such a technique in real-time applications such as video games and VR.

    Other authors
    • Abhijeet Ghosh
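
The low-rank trick can be illustrated on a separable 2-D kernel: an SVD splits the kernel into pairs of 1-D factors, so one expensive 2-D convolution pass becomes two cheap 1-D passes (a generic sketch, not the paper's diffraction lookup table):

```python
import numpy as np

def low_rank_factors(kernel, rank):
    """Factor a 2-D convolution kernel into `rank` pairs of 1-D kernels via SVD."""
    u, s, vt = np.linalg.svd(kernel)
    cols = u[:, :rank] * np.sqrt(s[:rank])  # vertical 1-D kernels
    rows = vt[:rank].T * np.sqrt(s[:rank])  # horizontal 1-D kernels
    return cols, rows

# A separable (rank-1) blur kernel: outer product of a 1-D filter with itself.
g = np.array([1.0, 2.0, 1.0]) / 4.0
kernel = np.outer(g, g)

cols, rows = low_rank_factors(kernel, rank=1)
approx = cols @ rows.T  # reconstruction is exact, since the kernel is rank 1
print(np.allclose(approx, kernel))  # → True
```

Per output pixel, a K×K kernel costs K² multiplies while rank-R factors cost 2KR, which is why the two-pass formulation reaches real-time framerates even for high-resolution kernels.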

Languages

  • French

    Native or bilingual proficiency

  • English

    Full professional proficiency

  • Spanish

    Elementary proficiency
