Missions of the position

Institution: Institut Polytechnique de Paris, Télécom Paris
Doctoral school: École Doctorale de l'Institut Polytechnique de Paris
Research laboratory: Laboratoire de Traitement et Communication de l'Information
Thesis supervisor: Matthieu RAMBAUD (ORCID 0009-0003-3042-6504)
Thesis start date: 2026-10-01
Application deadline: 2026-10-01, 23:59

\begin{abstract}
(Example from \cite{zkfingpt}.) There is an ongoing legal battle between model-producing companies and traditional content publishers.
The New York Times (NYT) sued OpenAI and Microsoft together, accusing them of using millions of copyrighted
articles to train the GPT-4 model without authorization. The court
required OpenAI to set up two servers as a sandbox where NYT lawyers examined the training
corpus remotely. However, OpenAI engineers accidentally deleted the operation logs on the servers, which stalled the trial process.
\textbf{Existing solutions investigated in this PhD: cryptographic zero-knowledge proofs (ZK) make it possible to prove the authenticity of a model output without disclosing the input; they also make it possible to prove that a model was correctly fine-tuned \cite{veryfinetuning} (or even trained) on (possibly secret) data, without disclosing its parameters.}
\end{abstract}

\begin{center}
\includegraphics[width=0.7\linewidth]{zkimage.png}
\end{center}
Zero-knowledge proofs (ZKs) enable a prover (left in the picture) to demonstrate to a verifier (right in the picture) that she knows a secret ($x$ in the picture) satisfying some public statement ($Y=g^x \bmod p$ in the picture), without disclosing $x$.
A rapidly evolving line of research, presented at top worldwide conferences \cite{zkgpt,verfcnn,deepfold}, builds ZKs for statements such as: $Y$ is the output of some public model evaluated on some secret input $x$.
These tools have countless applications, for instance in finance \cite{zkfingpt} or for the auditability of untrusted servers delivering model predictions \cite{concurrentselhacen}.
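The discrete-log relation $Y = g^x \bmod p$ in the picture can be made concrete with a Schnorr-style proof of knowledge, here rendered non-interactive via the Fiat--Shamir heuristic. This is a minimal illustrative sketch, not part of the cited works: the group parameters are toy-sized (nowhere near secure) and all names are hypothetical.

```python
import hashlib
import secrets

# Toy parameters (NOT secure sizes): p = 2q + 1, with g generating the
# order-q subgroup of Z_p^*.  Here 2^11 = 2048 ≡ 1 (mod 23).
q = 11
p = 23
g = 2

def prove(x):
    """Prove knowledge of x such that Y = g^x mod p, without revealing x."""
    Y = pow(g, x, p)
    r = secrets.randbelow(q)           # prover's ephemeral secret
    t = pow(g, r, p)                   # commitment
    # Fiat-Shamir: derive the challenge by hashing the transcript.
    c = int.from_bytes(hashlib.sha256(f"{g}|{Y}|{t}".encode()).digest(), "big") % q
    s = (r + c * x) % q                # response; r masks x
    return Y, t, s

def verify(Y, t, s):
    """Accept iff g^s = t * Y^c (mod p), i.e. g^(r+cx) = g^r * (g^x)^c."""
    c = int.from_bytes(hashlib.sha256(f"{g}|{Y}|{t}".encode()).digest(), "big") % q
    return pow(g, s, p) == (t * pow(Y, c, p)) % p
```

The ZK-for-ML systems cited above prove far richer statements (an entire model inference) but follow the same pattern: a commitment, a hash-derived challenge, and a response the verifier checks against public values only.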
The starting goal is to build ZK proofs for specific language embeddings, using libraries such as deep-prove\footnote{\url{https://github.com/Lagrange-Labs/deep-prove}, of Lagrange Labs \cite{nonpolynomialquantization}}, which has just achieved a ZK proof of a full GPT-2 inference
\footnote{\url{https://www.lagrange.dev/blog/deepprove-1}}
or the implementation of ZK-GPT \cite{zkgpt}.
The PhD goal is to attack or improve existing ZKs of LLMs.

The PhD is hosted at Télécom Paris, in collaboration with El-Hacen Diallo and Daniele Lunghi of the Luxembourg research team working on LLM security.
It is the continuation of an M2 internship; details are confidential.

Desired profile

Cryptography or mathematics; machine learning.

