Our Vision
At Vectis AI, we envision a future where AI alignment research is accelerated through powerful automation and augmentation tools. Our name, derived from the Latin word for "lever," reflects our commitment to being a force multiplier for researchers working on the critical challenge of AI alignment.
Research Automation
Developing AI systems to augment and progressively automate alignment research
Safety-First
Building rigorous safety cases and evaluation frameworks for automated research
Infrastructure Focus
Creating scalable systems and tools to accelerate alignment breakthroughs
Impact-Driven
Aiming for 10-100x productivity gains in alignment research
Our Roadmap
Building powerful tools for AI safety research while maintaining rigorous safety standards.
Nexus: AI Research Command Center
Command Center
- Nexus Dashboard Launch
- AI Research Assistant Integration
- Experiment Management Tools
Research Automation
- Automated Literature Review
- Multi-Agent Research Teams
- Compute Resource Management
Research Network
- Cross-Platform Integration
- Advanced Safety Frameworks
- Research Knowledge Graph
Vectis AI Company Vision
Core Infrastructure
- Launch to vetted researchers
- Build evaluation frameworks
- Safety organization partnerships
Scale Research
- Research incubation program
- Expand automation capabilities
- Testing framework development
Advanced Systems
- Large-scale research automation
- Formal verification systems
- Automated safety products
Research Breakthroughs
- Automated alignment research
- Novel safety techniques
- Scalable oversight solutions
Our Team
Led by experienced researchers and engineers dedicated to advancing AI safety.
Jacques Thibodeau
Founder & AI Safety Researcher
With a background in AI research and software engineering, Jacques focuses on developing tools and frameworks that accelerate progress in AI alignment. His work combines technical expertise with a deep understanding of AI safety challenges.
He previously worked on large language models and AI safety research, with a particular focus on mechanistic interpretability and scalable oversight.