Our Vision

At Vectis AI, we envision a future where AI alignment research is accelerated through powerful automation and augmentation tools. Our name, derived from the Latin word for "lever," reflects our commitment to being a force multiplier for researchers working on the critical challenge of AI alignment.

Research Automation

Developing AI systems to augment and progressively automate alignment research

Safety-First

Building rigorous safety cases and evaluation frameworks for automated research

Infrastructure Focus

Creating scalable systems and tools to accelerate alignment breakthroughs

Impact-Driven

Aiming for 10-100x productivity gains in alignment research

Our Roadmap

Building powerful tools for AI safety research while maintaining rigorous safety standards.

Nexus: AI Research Command Center

Phase 1

Command Center

  • Nexus Dashboard Launch
  • AI Research Assistant Integration
  • Experiment Management Tools

Phase 2

Research Automation

  • Automated Literature Review
  • Multi-Agent Research Teams
  • Compute Resource Management

Phase 3

Research Network

  • Cross-Platform Integration
  • Advanced Safety Frameworks
  • Research Knowledge Graph

Vectis AI Company Vision

Phase 1

Core Infrastructure

  • Launch to vetted researchers
  • Build evaluation frameworks
  • Partner with safety organizations

Phase 2

Scale Research

  • Launch a research incubation program
  • Expand automation capabilities
  • Develop testing frameworks

Phase 3

Advanced Systems

  • Large-scale research automation
  • Formal verification systems
  • Automated safety products

Phase 4

Research Breakthroughs

  • Automated alignment research
  • Novel safety techniques
  • Scalable oversight solutions

Our Team

Led by experienced researchers and engineers dedicated to advancing AI safety.

Jacques Thibodeau

Founder & AI Safety Researcher

With a background in AI research and software engineering, Jacques focuses on developing tools and frameworks that accelerate progress in AI alignment. His work combines technical expertise with a deep understanding of AI safety challenges.

He previously worked on large language models and AI safety research, with a particular focus on mechanistic interpretability and scalable oversight.

Join Our Mission

We're looking for passionate individuals to help build the future of AI alignment research.