Text recognition with transformer models and LLM or Vision-Language Model integration

Description

Transcription of text from centuries-old works is a research area underserved by current tools such as Adobe Acrobat's OCR. While these tools can recognize text in clearly printed modern sources, they cannot extract textual data from early forms of print, much less from manuscripts. This project focuses on applying hybrid end-to-end transformer-based models (e.g. ViT-RNN, CNN-Transformer, or ViT-Transformer) and integrating LLMs, Vision-Language Models (VLMs), or both, to recognize text in Spanish printed sources from the seventeenth century. The project aims to expand the dataset from previous iterations so that the model can be fine-tuned to handle both printed and handwritten documents. It will also integrate LLMs such as Gemini 3 as a late-stage step to increase transcription accuracy, with VLMs being another possible path for contributors. The goal is to improve fine-tuning and transcription accuracy on larger datasets incorporating diverse typographical styles, both printed and handwritten.
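To make the accuracy goal measurable, HTR/OCR work of this kind is typically evaluated with character error rate (CER): the Levenshtein (edit) distance between the model's transcription and a ground-truth transcription, divided by the reference length. The following is a minimal, self-contained sketch of that metric (the function name and example strings are illustrative, not part of the project's codebase):

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: Levenshtein distance / reference length.

    Counts insertions, deletions, and substitutions needed to turn
    the hypothesis into the reference, normalized by reference length.
    """
    m, n = len(reference), len(hypothesis)
    # Dynamic-programming edit distance, keeping only one previous row.
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            curr[j] = min(prev[j] + 1,        # deletion
                          curr[j - 1] + 1,    # insertion
                          prev[j - 1] + cost) # substitution
        prev = curr
    return prev[n] / max(m, 1)

# One substitution (ñ → n) over five reference characters.
print(cer("señor", "senor"))  # → 0.2
```

Tracking CER (and the analogous word error rate) before and after the LLM correction step gives a concrete way to verify that the late-stage integration actually improves transcriptions of early-modern print.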

Duration

Total project length: 175 hours

Task ideas

Expected results

Requirements

Proficiency in Python and some previous experience in machine learning.

Difficulty level

Advanced

Mentors

Please DO NOT contact mentors directly by email. Instead, please email human-ai@cern.ch with Project Title and include your CV and test results. The mentors will then get in touch with you.

Corresponding Project

Participating Organizations