Humanlike AI Systems and Trust Attribution

Description

This project will build an open-source, modular experimentation engine for studying trust calibration in AI-assisted decision systems. The platform will allow researchers to systematically manipulate humanlike and authority-signaling interface cues (e.g., assistant name, tone, confidence framing) and log user behavior at high temporal resolution to measure reliance vs. override decisions.

The end product will be a reusable research infrastructure for human–AI trust, calibration, and adoption studies.

Motivation

As AI assistants increasingly adopt humanlike names, conversational tone, avatars, and confidence framing, users infer competence, agency, and intentionality from interface design alone. These inferences can lead to appropriate reliance, underuse, or overtrust.

Most existing research relies heavily on self-report trust scales. This project instead focuses on behavioral trust metrics, grounded in observable decision behavior and structured logging.

Project Goals

The contributor will build a modular web-based experimental environment that supports:

Scope of Work

1. Experimental Web Application

Build a lightweight experiment platform (React/Next.js or similar) including:

2. Cue Manipulation System

Implement a condition management framework enabling systematic manipulation of at least 3 cue dimensions, such as:
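As a rough sketch of what such a condition management framework could look like, the snippet below builds a full factorial design over cue dimensions and assigns conditions to participants in a balanced, deterministic way. The dimension names and levels here are illustrative assumptions, not a fixed design.

```typescript
// Sketch of a condition-assignment helper. Builds the full factorial
// design over cue dimensions and assigns each participant a condition.
// Dimension names and levels below are illustrative, not prescribed.
type CueDimensions = Record<string, string[]>;

const dimensions: CueDimensions = {
  assistantName: ["humanlike", "machinelike"], // hypothetical levels
  tone: ["conversational", "neutral"],
  confidenceFraming: ["expressed", "absent"],
};

// Cartesian product of all dimension levels -> list of conditions.
function buildConditions(dims: CueDimensions): Record<string, string>[] {
  return Object.entries(dims).reduce<Record<string, string>[]>(
    (acc, [dim, levels]) =>
      acc.flatMap((cond) => levels.map((lvl) => ({ ...cond, [dim]: lvl }))),
    [{}]
  );
}

// Balanced round-robin assignment by participant index.
function assignCondition(
  participantIndex: number,
  dims: CueDimensions
): Record<string, string> {
  const conditions = buildConditions(dims);
  return conditions[participantIndex % conditions.length];
}
```

With the three two-level dimensions above, this yields a 2×2×2 design of eight conditions; the real framework would likely also need per-participant persistence so a returning participant keeps their assigned condition.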

3. Behavioral Task Module

Implement one structured decision task that generates clear behavioral trust outcomes: a Recommendation Acceptance Task, in which participants either accept or override the assistant's recommendations.
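To illustrate how such a task yields behavioral trust outcomes, the sketch below derives simple reliance metrics from trial records: overall acceptance, acceptance when the recommendation was wrong (overtrust), and overrides when it was right (underuse). The field and metric names are assumptions for illustration only.

```typescript
// Sketch of behavioral trust metrics for a Recommendation Acceptance Task.
// Each trial records whether the AI recommendation was correct and whether
// the participant accepted it. Field names are illustrative assumptions.
interface Trial {
  aiCorrect: boolean; // was the recommendation actually correct?
  accepted: boolean;  // did the participant follow it?
}

interface TrustMetrics {
  acceptanceRate: number; // overall reliance
  overtrustRate: number;  // accepted when the AI was wrong
  undertrustRate: number; // overrode when the AI was right
}

function computeMetrics(trials: Trial[]): TrustMetrics {
  const n = trials.length || 1; // guard against empty input
  const wrong = trials.filter((t) => !t.aiCorrect);
  const right = trials.filter((t) => t.aiCorrect);
  return {
    acceptanceRate: trials.filter((t) => t.accepted).length / n,
    overtrustRate: wrong.length
      ? wrong.filter((t) => t.accepted).length / wrong.length
      : 0,
    undertrustRate: right.length
      ? right.filter((t) => !t.accepted).length / right.length
      : 0,
  };
}
```

Splitting reliance by recommendation correctness is what lets the study distinguish calibrated trust from blanket acceptance, matching the reliance-vs-override framing in the project description.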

4. Instrumentation and Logging

Design and implement a clean event schema capturing:

Requirements:
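One possible shape for the event schema and logger is sketched below. The event types, field names, and payload structure are hypothetical placeholders for whatever the final schema specifies.

```typescript
// Sketch of a structured event schema and in-memory logger for
// high-temporal-resolution behavioral logging. Event types and payload
// fields are assumptions, not a finalized schema.
interface ExperimentEvent {
  participantId: string;
  trialId: number;
  type: "trial_start" | "recommendation_shown" | "decision" | "trial_end";
  timestampMs: number; // client timestamp added at log time
  payload: Record<string, unknown>;
}

class EventLog {
  private events: ExperimentEvent[] = [];

  // Stamp and store an event; returns the stored record.
  log(e: Omit<ExperimentEvent, "timestampMs">): ExperimentEvent {
    const stamped = { ...e, timestampMs: Date.now() };
    this.events.push(stamped);
    return stamped;
  }

  // Events for one trial in arrival order, e.g. to compute the latency
  // between recommendation_shown and decision.
  forTrial(trialId: number): ExperimentEvent[] {
    return this.events.filter((e) => e.trialId === trialId);
  }
}
```

In a browser, `performance.now()` would give higher-resolution timestamps than `Date.now()`, and a production logger would batch events to a backend rather than keep them in memory.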

5. Dataset Export + Analysis Notebook

Deliver:
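A minimal sketch of the dataset export, assuming a flat trial-level CSV that analysis notebooks can load directly. The column names are illustrative; the real export would mirror the project's event schema.

```typescript
// Sketch of a flat CSV export of trial-level outcomes for analysis.
// Column names are illustrative assumptions.
interface TrialRow {
  participantId: string;
  condition: string;
  trialId: number;
  accepted: boolean;
  decisionLatencyMs: number;
}

function toCsv(rows: TrialRow[]): string {
  const header =
    "participant_id,condition,trial_id,accepted,decision_latency_ms";
  const lines = rows.map((r) =>
    [r.participantId, r.condition, r.trialId, r.accepted, r.decisionLatencyMs].join(",")
  );
  return [header, ...lines].join("\n");
}
```

A real exporter would also need to escape delimiters and quote free-text fields; a library such as a CSV writer or a JSON Lines format could handle that more robustly.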

Deliverables

By the end of the project, the contributor will provide:

Stretch Goals (If Time Allows)

Required Skills

Project difficulty level

Moderate. This project requires integration of frontend development, experimental condition logic, and structured behavioral logging.

Mentorship Expectations

Contributors will be expected to:

Broader Impact

This project supports responsible AI by:

Screening Test (2-4 Hours)

Applicants must build a minimal working prototype.

Requirements:

Submission

Mentors

Please DO NOT contact mentors directly by email. Instead, email human-ai@cern.ch with the project title, your CV, and your test results. The mentors will then get in touch with you.

Corresponding Project

Participating Organizations