thechibuzornwachukwu@gmail.com
CASE FILES

The Evidence

Detailed case studies of problems identified, interventions engineered, and outcomes delivered.

Computer Vision / AgriTech

Valor (The Mango Master)

"Waste is invisible. Intelligence makes it visible."

1st Place, Bell's University Hackathon Project concluded Jan 2026

The Overlooked Problem

Across Nigeria and much of Sub-Saharan Africa, an estimated 30% of harvested fruit never reaches consumers. Not because of lack of demand, but because spoilage goes undetected until it's too late. Farmers, traders, and logistics operators lack the tools to assess fruit quality at scale. The waste is invisible until the damage is done.

The Intervention

Valor was built to make spoilage visible before it becomes irreversible. Working with a team of four, I led the AI development for a mobile-first computer vision system that could classify mango ripeness and detect early signs of spoilage.

  • Curated and labeled a dataset of 1,700+ mango images across multiple ripeness stages and spoilage conditions
  • Designed and trained a lightweight CNN architecture optimized for edge deployment on mobile devices
  • Implemented transfer learning and data augmentation to maximize accuracy with limited data
  • Built the inference pipeline for real-time classification on Android devices

Outcome

The system achieved reliable classification accuracy in controlled testing environments. More importantly, it demonstrated a viable path for bringing AI-powered quality assessment to agricultural supply chains without requiring expensive infrastructure or constant connectivity.

The project earned 1st Place at the Bell's University Hackathon, validating both the technical approach and the market potential.

AI Safety / Research

Adversarial Attacks on RAG Systems

"If you can't break it, you can't trust it."

The Overlooked Problem

Retrieval-Augmented Generation (RAG) has become the default architecture for enterprise AI deployments. Organizations trust these systems with sensitive documents, customer data, and internal knowledge bases. But the security assumptions underlying RAG systems remain largely untested in adversarial conditions.

Most RAG implementations assume that retrieved documents are trustworthy. This assumption creates a blind spot: what happens when an attacker can influence what gets retrieved?

The Intervention

I conducted systematic adversarial testing against RAG pipelines to identify and document vulnerabilities in the retrieval-to-generation chain. The research focused on indirect prompt injection through poisoned documents and context manipulation attacks.

  • Developed a taxonomy of RAG-specific attack vectors distinct from traditional prompt injection
  • Created adversarial document payloads designed to manipulate model outputs when retrieved
  • Tested multiple embedding models and retrieval strategies for differential vulnerability
  • Documented bypass techniques that achieved 60%+ success rates against standard safety filters

Why This Matters

As RAG becomes the foundation of enterprise AI, understanding its failure modes becomes critical. This research contributes to the growing body of work on AI safety by demonstrating that retrieval-based systems require security considerations beyond those applied to standalone language models.

The findings are being prepared for publication and have informed defensive recommendations for organizations deploying RAG in production environments.

Data Analytics

The Churn Detective

"Acquisition is vanity; Retention is sanity."

The Overlooked Problem

Databel, a telecommunications ISP, was experiencing customer churn but lacked visibility into the underlying drivers. Marketing teams focused on acquisition metrics while the silent exodus of existing customers eroded profitability. The data existed, but the story it told remained unread.

The Intervention

I conducted an end-to-end analysis of Databel's customer data to identify the behavioral and contractual patterns most predictive of churn. The goal was not just to predict who would leave, but to understand why.

  • Performed exploratory data analysis on customer demographics, contract terms, and service usage patterns
  • Identified contract length and service call frequency as the strongest churn predictors
  • Built predictive models using Python, Pandas, and Scikit-learn to score churn probability
  • Created interactive PowerBI dashboards for stakeholder communication and ongoing monitoring

Outcome

The analysis revealed that customers on month-to-month contracts with high service call volumes were 3x more likely to churn than those on annual contracts with minimal support interactions. This insight shifted the retention strategy from generic discounts to targeted interventions for high-risk segments.

The PowerBI dashboard became a standard tool for the customer success team, enabling proactive outreach before churn signals converted to cancellations.