About
cKnowledge (Collective Knowledge) is a research, engineering and educational company
established by Grigori Fursin
in 2019 and based in Paris, France.
We develop the Collective Knowledge Playground
and the Collective Mind workflow automation framework (CM)
with portable, reusable and technology-agnostic automation recipes
(CM4MLOps, CM4MLPerf and CM4ABTF)
to help startups, companies, educational organizations and non-profits
build and run AI, ML and other emerging workloads
in the most efficient and cost-effective way across diverse
and rapidly evolving models, datasets, software and hardware
from different vendors. For more details, see our arXiv white paper on
enabling more efficient and cost-effective AI/ML systems with Collective Mind, virtualized MLOps, MLPerf, the Collective Knowledge Playground and reproducible optimization tournaments.
We are proud that our technology is trusted by
MLCommons (a consortium of 150+ AI companies),
the Autonomous Vehicle Computing Consortium (AVCC),
ACM/IEEE and other organizations to automatically co-design software and hardware
for efficient and cost-effective AI systems while dramatically reducing R&D costs
and time to market.
Our mission is to democratize AI research, development and education by
helping researchers and engineers automate their tedious and repetitive ResearchOps
needed to develop, optimize and deploy complex AI systems, accelerate innovation,
reduce R&D costs, and make AI accessible to everyone.
Our success builds on Grigori's long scientific and industrial experience
pioneering the use of AI, federated learning and collective tuning (cTuning) to automate the development of
high-performance, cost-effective and energy-efficient computer systems, and on helping the community
reproduce state-of-the-art research projects and validate them in the real world.
Ongoing projects
2024
- Latest: successfully prototyped an open platform (Collective Knowledge Playground)
to enable the 1st mass-scale benchmarking, optimization and co-design of efficient and cost-effective
AI/ML systems using Collective Mind, virtualized MLOps, MLPerf and reproducible optimization tournaments,
based on feedback from AVCC, MLCommons and ACM/IEEE. Please
check our arXiv white paper,
the AVCC/MLCommons press release,
MLPerf press release (1)
and MLPerf press release (2),
and get in touch with Grigori Fursin
to learn about our plans.
- Helping AI OEMs, Tier 1 suppliers, system integrators and users from the Autonomous Vehicle Computing Consortium (AVCC)
use CM and the CK playground
to co-design the most efficient and cost-effective AI systems assembled from diverse models,
datasets, software and hardware from different vendors (CM4ABTF).
- Helping MLCommons
modularize MLPerf benchmarks and make it easier to run them and reproduce results using CM
across diverse, rapidly evolving open-source or proprietary models, datasets, software and hardware
from different vendors (CM4MLOps and CM4MLPerf).
- Helping ACM and IEEE automate and reproduce experiments from published papers and optimization tournaments,
including the Student Cluster Competition at SuperComputing
and computer systems conferences
(CM4Research).
- Repeated our mass-scale submission of nearly 10K results to the MLPerf inference benchmark v4.0 (>90% of total results from 20+ other companies)
using our community version of the Collective Mind workflow automation technology
in collaboration with the cTuning foundation. Our framework helped obtain, for the 1st time,
the top-performing, most power- and cost-efficient AI inference configurations across commodity hardware and software:
see our report
and the CM GUI for MLPerf.
- Supporting the upcoming Student Cluster Competition at SuperComputing'24
by making it easier for the international student teams to run and reproduce MLPerf benchmarks
across different hardware and software.
Accomplishments
2023
- Validated our community version of the Collective Mind workflow automation technology
during the 1st community submission to MLPerf inference v3.1,
which became the 1st mass-scale MLPerf benchmark submission with >12,000 results across diverse models, software and hardware
from different vendors (>90% of all results from 20+ other submitters):
report.
- Was invited to give a keynote about CM at the 1st ACM Conference on Reproducibility and Replicability:
program,
slides.
- Sponsored artifact evaluation and helped automate experiments using CM at
ACM/IEEE MICRO'23.
- Supported the Student Cluster Competition at SuperComputing'23
by making it easier to run and reproduce the MLPerf BERT benchmark using CM
across different hardware and software.
2021-2022
- Donated the open-source Collective Mind framework (CM) to MLCommons to benefit everyone: GitHub.
- Established the MLCommons Task Force on Automation and Reproducibility
to continue improving the CM automation framework
and to help modularize, automate and unify MLPerf benchmarks as a collaborative engineering effort.
- Developed a prototype of the Collective Mind framework (CM) with OctoML to modularize AI systems and automate their benchmarking
and optimization across different models, datasets, software and hardware from different vendors.
- Was invited to give an ACM TechTalk about our experience helping the community reproduce 150+ research papers and validate them in the real world:
YouTube,
slides.