Christophe Pere, PhD

Montreal, Quebec, Canada
21K followers 500+ connections


About

Hi,

I'm a researcher in quantum machine learning with collaborations in classical…

Contributions

Activity


Experience & Education

  • École de technologie supérieure


Licenses & Certifications

Volunteer Experience

Publications

  • XGSwap: eXtreme Gradient boosting Swap for Routing in NISQ Devices

    ArXiv

    In the current landscape of noisy intermediate-scale quantum (NISQ) computing, the inherent noise presents significant challenges to achieving high-fidelity long-range entanglement. This challenge is amplified by the limited connectivity of current superconducting devices, which necessitates state permutations to establish long-distance entanglement. Traditionally, graph methods are used to satisfy the coupling constraints of a given architecture by routing states along the shortest undirected path between qubits. In this work, we introduce a gradient boosting machine learning model that predicts the fidelity of alternative, potentially longer, routing paths in order to improve fidelity. The model was trained on 4050 random CNOT gates ranging in length from 2 to 100+ qubits. All experiments were executed on ibm_quebec, a 127-qubit IBM Quantum System One. Across more than 200 tests run on actual hardware, our model successfully identified higher-fidelity paths in approximately 23% of cases.

    Other authors
    See publication
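The routing idea above lends itself to a small illustration: under the naive assumption that a path's fidelity is the product of the fidelities of the gates it consumes, a longer path over better-calibrated couplers can beat the shortest one. A toy sketch, not the paper's trained gradient-boosting model; all qubit labels and error rates below are invented:

```python
# Toy baseline for comparing routing paths on a coupling map, assuming
# (naively) that path fidelity is the product of per-gate CNOT fidelities.
# Each intermediate hop costs one SWAP (3 CNOTs); the final hop is the
# CNOT itself.

def path_fidelity(path, edge_error):
    """Estimate the fidelity of routing a CNOT along `path` (list of qubits)."""
    fid = 1.0
    edges = list(zip(path, path[1:]))
    for i, (a, b) in enumerate(edges):
        err = edge_error[frozenset((a, b))]
        gates = 1 if i == len(edges) - 1 else 3  # SWAP = 3 CNOTs
        fid *= (1.0 - err) ** gates
    return fid

# Hypothetical 4-qubit line 0-1-2-3 with an extra, noisy shortcut 0-3.
edge_error = {
    frozenset((0, 1)): 0.01,
    frozenset((1, 2)): 0.01,
    frozenset((2, 3)): 0.01,
    frozenset((0, 3)): 0.20,  # direct but poorly calibrated
}

short = path_fidelity([0, 3], edge_error)         # shortest path
longer = path_fidelity([0, 1, 2, 3], edge_error)  # longer, cleaner path
print(short, longer)  # here the longer path wins
```

The paper replaces this naive product with a model trained on hardware runs, which can capture effects (crosstalk, drift) that a static product of calibration numbers misses.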
  • Noise Aware Utility Optimization of NISQ Devices

    arxiv.org

    In order to enter the era of utility, noisy intermediate-scale quantum (NISQ) devices need to enable long-range entanglement of large qubit chains. However, due to the limited connectivity of superconducting NISQ devices, long-range entangling gates are realized in linear depth. Furthermore, a time-dependent degradation of the average CNOT gate fidelity is observed; likely due to aging, this phenomenon further degrades entanglement capabilities. Our aim is to support current efforts to achieve utility and to extend the utility lifespan of current devices, albeit by selecting fewer, higher-quality resources. To achieve this, we provide a method that transforms user-provided CNOT and readout error requirements into a compliant partition onto which circuits can be executed. We demonstrate an improvement of up to 52% in fidelity for a random CNOT chain of 50 qubits, and consistent improvements between 11.8% and 47.7% for chains of 10 to 40 qubits in increments of 10.

    See publication
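A minimal sketch of the partitioning idea, assuming it reduces to filtering qubits and couplings against user-supplied error bounds. The qubit indices and error values here are illustrative, not taken from the paper; real values would come from backend calibration data:

```python
# Keep only qubits whose readout error, and couplings whose CNOT error,
# satisfy the user's requirements; circuits are then mapped onto the
# surviving subgraph.

def compliant_partition(readout_error, cnot_error, max_readout, max_cnot):
    """Return the qubits and couplings that satisfy both error bounds."""
    good_qubits = {q for q, e in readout_error.items() if e <= max_readout}
    good_edges = {
        edge for edge, e in cnot_error.items()
        if e <= max_cnot and set(edge) <= good_qubits
    }
    return good_qubits, good_edges

# Invented calibration snapshot: qubit 2 has a poor readout.
readout_error = {0: 0.01, 1: 0.02, 2: 0.15, 3: 0.01}
cnot_error = {(0, 1): 0.008, (1, 2): 0.009, (2, 3): 0.05}

qubits, edges = compliant_partition(readout_error, cnot_error,
                                    max_readout=0.05, max_cnot=0.01)
print(qubits, edges)  # qubit 2 and every coupling touching it are excluded
```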
  • Financial Modeling Using Quantum Computing

    Packt

    Quantum computing has the potential to revolutionize the computing paradigm. By integrating quantum algorithms with artificial intelligence and machine learning, we can harness the power of qubits to deliver comprehensive and optimized solutions for intricate financial problems. This book offers step-by-step guidance on using various quantum algorithm frameworks within a Python environment, enabling you to tackle business challenges in finance. By contrasting solutions from well-known Python libraries with quantum algorithms, you’ll discover the advantages of the quantum approach. Focusing on clarity, the authors expertly present complex quantum algorithms in a straightforward, yet comprehensive way. Throughout the book, you'll become adept at working with simple programs illustrating quantum computing principles. Gradually, you'll progress to more sophisticated programs and algorithms that harness the full power of quantum computing. By the end of this book, you’ll be able to design, implement and run your own quantum computing programs to turbocharge your financial modelling.

    Other authors
    See publication
  • A Preprocessing Perspective for Quantum Machine Learning Classification Advantage in Finance Using NISQ Algorithms

    entropy

    Quantum Machine Learning (QML) has not yet clearly and extensively demonstrated its advantages over the classical machine learning approach. So far, there are only specific cases where quantum-inspired techniques have achieved small incremental advantages, and a few experimental cases in hybrid quantum computing are promising for the mid-term future (not counting achievements purely associated with optimization using quantum-classical algorithms). Current quantum computers are noisy and have few qubits, making it difficult to demonstrate the current and potential quantum advantage of QML methods. This study shows that we can achieve better classical encoding and better performance of quantum classifiers by using Linear Discriminant Analysis (LDA) during the data preprocessing step. As a result, the Variational Quantum Algorithm (VQA) shows a gain in balanced accuracy with the LDA technique and outperforms baseline classical classifiers.

    Other authors
    See publication
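The preprocessing step described above can be sketched with scikit-learn, under the assumption that the goal is to compress many classical features into as many values as the quantum classifier has qubits. The dataset here is synthetic, not the financial data used in the paper, and the 2-qubit target is illustrative:

```python
# Compress classical features with LDA so they fit the few qubits of a
# NISQ classifier: LDA projects onto at most (n_classes - 1) discriminant
# directions, so 3 classes -> 2 features -> e.g. a 2-qubit angle encoding.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = make_classification(n_samples=200, n_features=20, n_classes=3,
                           n_informative=5, random_state=0)

lda = LinearDiscriminantAnalysis(n_components=2)
X_reduced = lda.fit_transform(X, y)
print(X_reduced.shape)  # (200, 2): one value per qubit to encode
```

The reduced columns would then be scaled into rotation angles for the variational circuit; that encoding step is quantum-framework specific and is omitted here.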
  • Employing Feature Selection to Improve the Performance of Intrusion Detection Systems

    Springer FPS

    Intrusion detection systems use datasets with various features to detect attacks and protect computers and network systems. However, some of these features are irrelevant and may reduce the intrusion detection system’s speed and accuracy. In this study, we use feature selection methods to eliminate non-relevant features. We compare the performance of fourteen feature-selection methods on three ML techniques using the UNSW-NB15, Kyoto 2006+ and DoHBrw-2020 datasets. The most relevant features of each dataset are identified, showing that feature selection methods can increase the accuracy of anomaly detection and classification.

    Other authors
    See publication
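A sketch of the feature-selection step, using one of many possible selection methods (mutual information) on synthetic data; the study itself compares fourteen methods on real IDS datasets, so this is only the shape of the pipeline:

```python
# Rank features by mutual information with the label and keep the top k
# before training a detector; irrelevant columns are dropped up front.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

X, y = make_classification(n_samples=300, n_features=30, n_informative=6,
                           random_state=0)

selector = SelectKBest(score_func=mutual_info_classif, k=6)
X_top = selector.fit_transform(X, y)
print(X_top.shape)  # the classifier now trains on 6 features, not 30
```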
  • Estimation of Uncertainty Bounds on Disparate Treatment when using Proxies for the Protected Attribute

    Proceedings of the Canadian Conference on Artificial Intelligence

    This paper proposes a new method for uncovering discrimination in decision making systems with continuous value outcomes when lacking the protected attribute values. We demonstrate our method over race discrimination using name and surname proxies. Also, we use a new method for estimating uncertainty in the disparate treatment evaluation to allow for better judgment when using imprecise proxies for the protected attribute. We carry out tests using synthetic data.

    Other authors
    See publication
  • Étude de l’atmosphère de Vénus à l’aide d’un modèle de réfraction lors du passage devant le Soleil du 5-6 Juin 2012

    HAL Archives Ouvertes

    Ph.D. Thesis: Study of the atmosphere of Venus with a refraction model and images obtained during the transit of June 5-6, 2012

    See publication
  • Multilayer modeling of the aureole photometry during the Venus transit: comparison between SDO/HMI and VEx/SOIR data

    A&A

    The mesosphere of Venus is a critical range of altitudes in which complex temperature variability has been extensively studied by the space mission Venus Express (VEx) during its eight-year mission (2006-2014). Data collected at different epochs and latitudes show evidence of short- and medium-timescale variability as well as latitudinal differences. Spatial and temporal variability is also predicted in mesospheric and thermospheric terminator models with lower boundary conditions at 70 km near the cloud tops. The Venus transit on June 5-6, 2012 was the first to occur with a spacecraft in orbit around Venus. It has been shown that sunlight refraction in the mesosphere of Venus can provide useful constraints on mesospheric temperatures at the time of the transit. The European Space Agency's Venus Express provided space-based observations of Venus during the transit. Simultaneously, the Venus aureole photometry was observed using ground-based facilities and solar telescopes orbiting Earth (NASA Solar Dynamics Observatory, JAXA HINODE). As the properties of the spatial and temporal variability of the mesosphere are still debated, the opportunity offered by the transit to observe it at all latitudes at the same time is rather unique. In this first paper, we establish new methods for analyzing the photometry of the so-called aureole produced by refraction of solar light, and we investigate the choice of physical models that best reproduce the observations. We obtain an independent constraint of 4.8 +/- 0.5 km for the aerosol scale height in the upper haze region above 80 km. We show that a full multiple-layer approach is required to adequately reproduce the aureole photometry, which appears to be sensitive to several second-order variations in the vertical refractivity.

    Other authors
    See publication
  • Red noise versus planetary interpretations in the microlensing event OGLE-2013-BLG-446

    The Astrophysical Journal

    For all exoplanet candidates, the reliability of a claimed detection needs to be assessed through a careful study of systematic errors in the data to minimize the false-positive rate. We present a method to investigate such systematics in microlensing data sets using the microlensing event OGLE-2013-BLG-0446 as a case study. The event was observed from multiple sites around the world and its high magnification (Amax ∼ 3000) allowed us to investigate the effects of terrestrial and annual parallax. Real-time modeling of the event while it was still ongoing suggested the presence of an extremely low-mass companion (∼3 M⊕) to the lensing star, leading to substantial follow-up coverage of the light curve. We test and compare different models for the light curve and conclude that the data do not favor the planetary interpretation when systematic errors are taken into account.

    Other authors
    • Bachelet, E.; Bramich, D. M.; Han, C.; Greenhill, J.; Street, R. A.; Gould, A.; D’Ago, G.; AlSubai, K.; Domini
    See publication
  • New view on exoplanet transits. Transit of Venus described using three-dimensional solar atmosphere STAGGER-grid simulations

    Astronomy & Astrophysics, Volume 576, id.A13, 11 pp.

    Context. An important benchmark for current observational techniques and theoretical modeling of exoplanet atmospheres is the transit of Venus (ToV). Stellar activity and, in particular, convection-related surface structures may cause fluctuations that affect transit light curves. Surface convection simulations can help interpret the ToV as well as other transits outside our solar system.
    Aims: We used the realistic three-dimensional (3D) radiative hydrodynamical (RHD) simulation of the Sun from the Stagger-grid and synthetic images computed with the radiative transfer code Optim3D to predict the transit of Venus (ToV) in 2004 that was observed by the satellite ACRIMSAT.
    Methods: We computed intensity maps from the RHD simulation of the Sun and produced a synthetic stellar disk image as an observer would see, accounting for the center-to-limb variations. The contribution of the solar granulation was considered during the ToV. We computed the light curve and compared it to the ACRIMSAT observations as well as to light curves obtained with solar surface representations carried out using radial profiles with different limb-darkening laws. We also applied the same spherical tile imaging method as used for RHD simulation to the observations of center-to-limb solar granulation with Hinode.

    Other authors
    • Andrea Chiavassa
    • Faurobert, M. Ricort, G.; Tanga, P.; Magic, Z.; Collet, R.; Asplund, M.
    See publication
  • A Super-Jupiter Orbiting a Late-type Star: A Refined Analysis of Microlensing Event OGLE-2012-BLG-0406

    The Astrophysical Journal

    We present a detailed analysis of survey and follow-up observations of microlensing event OGLE-2012-BLG-0406 based on data obtained from 10 different observatories. Intensive coverage of the light curve, especially the perturbation part, allowed us to accurately measure the parallax effect and lens orbital motion. Combining our measurement of the lens parallax with the angular Einstein radius determined from finite-source effects, we estimate the physical parameters of the lens system. We find that the event was caused by a 2.73 ± 0.43 M_J planet orbiting a 0.44 ± 0.07 M_⊙ early M-type star. The distance to the lens is 4.97 ± 0.29 kpc and the projected separation between the host star and its planet at the time of the event is 3.45 ± 0.26 AU. We find that the additional coverage provided by follow-up observations, especially during the planetary perturbation, leads to a more accurate determination of the physical parameters of the lens.

    Other authors
    • Tsapras, Y.; Choi, J.-Y.; Street, R. A.; Han, C.; Bozza, V.; Gould, A.; Dominik, M.; Beaulieu, J.-P.; Udalski

Languages

  • French

    Native or bilingual proficiency

  • English

    Professional working proficiency

Recommendations received
