Index of projects & research stuff, June-Aug (Summer) 2025
A browser-based aquarium simulator featuring procedurally generated fish whose appearance and behavior are controlled by genetic algorithms. Users can catch, breed, and evolve unique fish through selection and mutation, producing effectively unlimited variation from 56+ genetic traits. The simulator includes a fully customizable 3D aquarium environment with decorations, plants, and various backgrounds, all rendered in pure HTML5/CSS3/JavaScript with no external dependencies. Fish genetics determine everything from body shape and fin types to colors, patterns, and swimming behaviors, so no two fish are alike.
Game, Documentation, Teaching Guide, Education Module
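A minimal sketch (in Python rather than the project's JavaScript) of how a selection-and-mutation breeding step like the one described above could work; the trait names, value ranges, and mutation rate are illustrative placeholders, not the project's actual 56+ traits.

```python
import random

# Hypothetical trait set; the real simulator uses 56+ traits.
TRAITS = ["body_length", "fin_size", "hue", "pattern_density", "swim_speed"]

def breed(parent_a, parent_b, mutation_rate=0.05):
    """Uniform crossover of two trait dictionaries with per-trait mutation."""
    child = {}
    for trait in TRAITS:
        # Each trait is inherited from a randomly chosen parent.
        child[trait] = random.choice((parent_a[trait], parent_b[trait]))
        # Occasionally perturb the inherited value to introduce variation.
        if random.random() < mutation_rate:
            child[trait] *= random.uniform(0.8, 1.2)
    return child

parent_a = {t: random.uniform(0.5, 1.5) for t in TRAITS}
parent_b = {t: random.uniform(0.5, 1.5) for t in TRAITS}
print(breed(parent_a, parent_b))
```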
08162025
A comprehensive genetics/evolution simulator. This multi-scale simulation spans molecular genetics to global climate systems with event-driven cross-scale interactions. Real-time performance optimization keeps 1000+ organisms running at 60 FPS while maintaining scientific accuracy for educational purposes. Built with a modular architecture separating the genetics, climate, UI, and ecosystem systems.
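As a rough illustration of the event-driven cross-scale wiring, a minimal publish/subscribe sketch follows; the module roles, event name, and payload are hypothetical.

```python
from collections import defaultdict

# Minimal event bus: modules subscribe to events published by other modules.
class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event, handler):
        self._subscribers[event].append(handler)

    def publish(self, event, **payload):
        for handler in self._subscribers[event]:
            handler(**payload)

bus = EventBus()
# Hypothetical: the climate module reacts when genetics reports a trait spreading.
bus.subscribe("trait_spread", lambda trait, fraction: print(
    f"climate: adjusting albedo, {trait} now at {fraction:.0%} of population"))
bus.publish("trait_spread", trait="light_coloration", fraction=0.42)
```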
An interactive educational simulation demonstrating the Gaia Theory through a simplified planetary climate model. Features black and white daisies that regulate planetary temperature through albedo effects, providing hands-on learning about climate feedback mechanisms, homeostasis, and tipping points. Includes comprehensive teaching materials with lesson plans, activities, and assessment tools aligned with NGSS standards.
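For reference, a condensed sketch of the underlying feedback loop, following the standard Watson & Lovelock (1983) Daisyworld formulation; the constants and growth-curve parameters are the commonly cited textbook values and may not match this simulation's.

```python
SIGMA = 5.67e-8                 # Stefan-Boltzmann constant (W m^-2 K^-4)
FLUX = 917.0                    # baseline solar flux (W m^-2)
ALBEDO = {"white": 0.75, "black": 0.25, "bare": 0.50}
Q_PRIME = 2.06e9                # local heat-transfer coefficient (K^4)
DEATH_RATE = 0.3

def growth_rate(local_temp_k):
    """Parabolic growth curve peaking near 295.5 K (22.5 C)."""
    return max(0.0, 1.0 - 0.003265 * (295.5 - local_temp_k) ** 2)

def step(area_white, area_black, luminosity, dt=0.01):
    """Advance daisy coverage one time step and return the new state."""
    bare = 1.0 - area_white - area_black
    planet_albedo = (area_white * ALBEDO["white"]
                     + area_black * ALBEDO["black"]
                     + bare * ALBEDO["bare"])
    planet_temp = (FLUX * luminosity * (1.0 - planet_albedo) / SIGMA) ** 0.25
    new_areas = {}
    for color, area in (("white", area_white), ("black", area_black)):
        # Darker patches run warmer, lighter patches cooler, than the planetary mean.
        local_temp = (Q_PRIME * (planet_albedo - ALBEDO[color])
                      + planet_temp ** 4) ** 0.25
        beta = growth_rate(local_temp)
        new_areas[color] = max(0.001, area + area * (bare * beta - DEATH_RATE) * dt)
    return new_areas["white"], new_areas["black"], planet_temp

# Example: step the model forward under gradually increasing solar luminosity.
white, black = 0.2, 0.2
for lum in (0.8, 1.0, 1.2):
    for _ in range(2000):
        white, black, temp = step(white, black, lum)
    print(f"L={lum}: white={white:.2f} black={black:.2f} T={temp - 273.15:.1f} C")
```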
A Python-based space exploration game combining mechanics from three classic games: Miner (for resource extraction), Drug Wars (for trading economics), and VGATrek (for space navigation). Players traverse a 2D universe grid, mine resources on various planets, and trade minerals at planetary exchanges while managing ship repairs and supplies. Features multiple interconnected game systems with placeholder graphics for future enhancement.
An adventure game framework designed for educational content delivery through exploration and puzzle-solving. Incorporates inventory management, dialogue trees, and interactive environments to teach various subjects through engaging gameplay mechanics. Characters are driven by an LLM, giving them dynamic conversational abilities.
A simulation featuring autonomous AI-driven characters living in a virtual town environment. Characters have individual personalities, routines, and interact with each other based on relationship dynamics, creating emergent storytelling opportunities.
Research into machine learning approaches for autobiographical memory retrieval and enhancement. Explores neural network architectures inspired by human memory systems, including temporal modeling, multimodal learning, and personalization algorithms. Investigates state-of-the-art models for memory prediction, cue-memory association, and pattern recognition in memory data.
A conversational AI system that conducts psychological assessments through natural dialogue. Features real-time MBTI typing, Big Five personality analysis, cognitive function mapping, and emotional trajectory visualization. Includes a web-based dashboard showing live updates of psychological metrics as conversations progress. Integrates with Ollama for local LLM processing.
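A minimal sketch of the local-LLM call through Ollama's REST chat endpoint; the model name and prompt are placeholders, and the real system layers its assessment and dashboard logic on top of calls like this.

```python
import requests

# One non-streaming chat turn against a locally running Ollama server.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3.1",            # placeholder model name
        "messages": [{"role": "user", "content": "Tell me about your week."}],
        "stream": False,
    },
    timeout=120,
)
print(resp.json()["message"]["content"])
```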
Investigation into how visual patterns, colors, and imagery trigger positive emotional responses and longer-term mood effects. Focuses on physiological and neurological measurements of responses to various visual stimuli across cultures. Explores evolutionary biology factors, biophilic responses, and the neuroscience behind aesthetic preferences.
Research on how different frequency spectra of noise (pink, brown, white, blue, etc.) affect cognitive function, learning, and auditory processing. Particularly focused on educational applications and how ambient noise colors in classroom settings might benefit student performance and concentration. Conducted in conjunction with the acoustic structure design project.
This system analyzes video recordings to detect emotional authenticity by identifying incongruences between verbal and non-verbal communication channels. Using computer vision, voice analysis, and machine learning, it detects micro-expressions, voice stress patterns, and multimodal inconsistencies that may indicate when someone's external emotional presentation doesn't match their internal state. Designed for self-analysis and personal development, the system helps users understand their own communication patterns and emotional expression.
08152025
Multiple projects involving MindWave EEG headset integration for real-time brainwave monitoring and analysis. Includes pattern detection in EEG data across the Delta, Theta, Alpha, Beta, and Gamma frequency bands. Features attention and meditation level tracking, frequency band analysis, and temporal pattern recognition for identifying cognitive states. The core project is the MCP server that bridges the EEG interface to the AI, with a real-time adaptive learning system as the target application.
Research and development of systems that dynamically adapt based on real-time brainwave data. Explores both subtle adjustments (pacing, complexity) and dramatic interface changes based on detected cognitive states. Achieves 70-95% accuracy in detecting attention, meditation, and cognitive load states for triggering appropriate interface behaviors.
Development of a Model Context Protocol server for streaming real-time EEG data from MindWave headsets. Provides WebSocket connections for real-time data streaming, frequency band analysis using FFT, and integration with various analysis tools for research applications.
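A minimal sketch of the frequency band analysis step, assuming a 1-D array of raw samples and the MindWave's nominal 512 Hz raw sample rate; the band edges follow common EEG conventions and may differ from the server's.

```python
import numpy as np

# Conventional EEG band edges in Hz; the server's exact cutoffs are an assumption.
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 50)}

def band_powers(samples, sample_rate=512):
    """Return mean spectral power per EEG band for a 1-D array of raw samples."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return {name: float(spectrum[(freqs >= lo) & (freqs < hi)].mean())
            for name, (lo, hi) in BANDS.items()}

# Example: one second of synthetic data with a strong 10 Hz (alpha) component.
t = np.arange(512) / 512
print(band_powers(np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(512)))
```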
System using computer vision and AI to analyze student engagement in real time. Runs on a Raspberry Pi 5 with a Hailo-8L accelerator for edge AI processing. Provides anonymous tracking of facial expressions, body language, and participation patterns to help educators identify when students are engaged, confused, or fatigued. Features predictive analytics, social dynamics analysis, and automatic curriculum effectiveness evaluation. Will run on the 13 TOPS chip; upgrading to the 26 TOPS version would increase the supported crowd size.
Statistical analysis system for identifying unusual patterns in student absences across 5-8 class periods. Uses multiple mathematical approaches including kurtosis analysis, entropy measures, and pattern detection to identify strategic absences, coordination between students, and potential gaming of attendance policies. Adapted for both multi-period and AM/PM attendance systems. Will run on a standard computer if the machine learning module is bypassed.
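A toy illustration of the kurtosis and entropy measures on a single student's per-period absence counts; the example data and interpretation are placeholders, not the project's tuned thresholds.

```python
import numpy as np
from scipy.stats import kurtosis, entropy

def absence_profile(absences_per_period):
    """Score how concentrated one student's absences are across class periods."""
    counts = np.asarray(absences_per_period, dtype=float)
    total = counts.sum()
    probs = counts / total if total else np.ones_like(counts) / len(counts)
    return {
        # High kurtosis: absences spike in a few periods (possible targeting).
        "kurtosis": float(kurtosis(counts)),
        # Low entropy: absences are unevenly spread across periods.
        "entropy": float(entropy(probs)),
    }

print(absence_profile([0, 1, 0, 9, 0, 1]))   # spiky: one period dominates
print(absence_profile([2, 2, 1, 2, 2, 2]))   # even: routine absences
```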
A bias-reducing survey system that transforms traditional questionnaires into natural conversations. Implements research-backed strategies for reducing response bias through indirect questioning, rapport building, and adaptive conversation flow. Features trust level tracking, sentiment analysis, and dynamic question adaptation based on user engagement.
Development of preprocessing systems for handling large documents before AI analysis. Implements semantic search with embeddings, extractive summarization, and intelligent chunking strategies to optimize context window usage. Uses local models for creating document embeddings and relevance scoring.
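A small sketch of the chunk-then-rank preprocessing idea, assuming sentence-transformers with the all-MiniLM-L6-v2 model as the local embedder; the actual system's chunk sizes, overlap, and model choice may differ.

```python
from sentence_transformers import SentenceTransformer, util

# Assumed local embedding model; any other embedder would slot in the same way.
model = SentenceTransformer("all-MiniLM-L6-v2")

def chunk(text, max_words=200, overlap=40):
    """Split a long document into overlapping word-window chunks."""
    words = text.split()
    step = max_words - overlap
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), step)]

def top_chunks(document, query, k=3):
    """Embed chunks and the query, return the k most relevant chunks."""
    chunks = chunk(document)
    chunk_vecs = model.encode(chunks, convert_to_tensor=True)
    query_vec = model.encode(query, convert_to_tensor=True)
    scores = util.cos_sim(query_vec, chunk_vecs)[0]
    best = scores.argsort(descending=True)[:k]
    return [chunks[int(i)] for i in best]
```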
Comprehensive analysis tools for finding patterns in EEG data stored in SQL databases or CSV files. Analyzes relationships between brainwave types (Delta through Gamma) and subject activities. Implements clustering algorithms, correlation analysis, and temporal pattern recognition for neuroscience research.
Reverse engineering and reconstruction tools for legacy SimCity 2000 save files, enabling modern analysis and visualization of classic city simulations. Takes an SC2K file as input and creates a photorealistic environment in Unreal Engine 5 that can be navigated in VR.
Real-time vehicle detection and tracking system using Raspberry Pi 5 with Hailo-8 accelerator (26 TOPS). Implements YOLO or MobileNet SSD for object detection at 30+ FPS, calculates vehicle speed and acceleration, and includes audio analysis for correlating engine noise with driving patterns.
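A hypothetical sketch of the speed calculation from successive bounding-box centroids; the meters-per-pixel calibration and frame interval depend entirely on the real camera geometry.

```python
def estimate_speed_kmh(cx_prev, cx_curr, frame_interval_s, meters_per_pixel):
    """Convert horizontal pixel displacement between detections into km/h."""
    displacement_m = abs(cx_curr - cx_prev) * meters_per_pixel
    return displacement_m / frame_interval_s * 3.6

# Example: centroid moves 30 px in 0.1 s with a 0.05 m/px calibration -> ~54 km/h.
print(round(estimate_speed_kmh(310, 340, 0.1, 0.05), 1), "km/h")
```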
Development for vintage TMS1000 4-bit microcontrollers, working with the limited 43-instruction set, 1K ROM, and 64x4-bit RAM constraints. Focus on efficient assembly language programming for these classic embedded systems.
A web-based multiplayer island management simulation. Features WebRTC peer-to-peer networking, AI players with multiple personality types, complex environmental simulation, and self-contained HTML implementation.
Audio synthesis tool for generating various colors of noise (pink, brown, white, blue, etc.) with specific frequency characteristics for research and therapeutic applications.
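A minimal sketch of spectral-shaping noise synthesis, using the usual power-law convention (power proportional to 1/f^alpha: white = 0, pink = 1, brown = 2, blue = -1); the tool's actual synthesis method may differ.

```python
import numpy as np

def colored_noise(alpha, n_samples=48000, sample_rate=48000):
    """Generate noise with a 1/f^alpha power spectrum by shaping white noise."""
    white = np.random.randn(n_samples)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / sample_rate)
    freqs[0] = freqs[1]                      # avoid dividing by zero at DC
    spectrum *= freqs ** (-alpha / 2.0)      # amplitude slope is half the power slope
    noise = np.fft.irfft(spectrum, n=n_samples)
    return noise / np.max(np.abs(noise))     # normalize to [-1, 1]

pink = colored_noise(1.0)    # one second of pink noise
brown = colored_noise(2.0)   # one second of brown (red) noise
```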
Rapid prototyping using AI assistance to test hypotheses about attendance data patterns. Successfully identified statistical functions for detecting consecutive vs. scattered absences through iterative testing with synthetic datasets.
Analysis and optimization of elevator systems in high-rise buildings, developing algorithms for efficient passenger routing and wait time minimization.
A Python GUI application for programming robots via audio cassette commands. Features a visual timeline interface for arranging audio command sequences (Light, Bleep, Forward, Back, Left, Right, SoundOn, SoundOff). Includes voice recording capabilities, 2-second gaps between commands for robot processing, drag-and-drop timeline editing, and export to MP3 for cassette recording.
Mathematical modeling system for designing fountains that produce specific noise colors through water flow dynamics. Calculates water impact acoustics, resonant frequencies of basin structures, material absorption properties, and turbulence noise characteristics. Uses physics equations including Helmholtz resonance and spectral slope modeling.
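As an example of the kind of calculation involved, a sketch of the Helmholtz resonance estimate for a basin cavity with an opening; the end-correction factor and example dimensions are illustrative.

```python
import math

def helmholtz_frequency(neck_area_m2, cavity_volume_m3, neck_length_m,
                        speed_of_sound=343.0):
    """Resonant frequency f = (c / 2*pi) * sqrt(A / (V * L_eff))."""
    neck_radius = math.sqrt(neck_area_m2 / math.pi)
    effective_length = neck_length_m + 1.7 * neck_radius   # flanged end correction
    return (speed_of_sound / (2 * math.pi)) * math.sqrt(
        neck_area_m2 / (cavity_volume_m3 * effective_length))

# Example: a 10 L basin cavity with a 5 cm diameter, 3 cm deep opening.
print(round(helmholtz_frequency(math.pi * 0.025 ** 2, 0.010, 0.03), 1), "Hz")
```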
Extension of fountain acoustics to wind-driven architectural elements. Designs structures like Aeolian harps, Helmholtz resonators, edge-tone generators, and vortex whistles that produce specific acoustic signatures from wind. Includes mathematical models for wind speed variations, turbulence effects, and structural resonances.
Comprehensive system for designing entire urban acoustic environments. Integrates multiple sound sources to create specific acoustic zones. Features path-based acoustic experiences, multi-scale design from courtyards to parks, and integration with existing infrastructure. Includes specialized modules for waterfront, mountain, and desert environments.
Development of multiple MCP servers for various integrations, including EEG data streaming with frequency band analysis and OAuth/Okta-authenticated NHA App access to school communication systems. Implementation includes a TypeScript/Node.js architecture, robust error handling, token refresh logic, and comprehensive API coverage.
School assignment optimization system for 18 supervisors/managers visiting 104 schools. Uses Google Maps API for real-world routing data, implements clustering algorithms for geographic grouping, and employs Hungarian algorithm and linear programming for optimal assignment. Minimizes total travel time while respecting constraints.
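A toy version of the assignment step using SciPy's Hungarian-algorithm solver; the real cost matrix comes from Google Maps travel times for 18 supervisors across clustered groups of the 104 schools.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Placeholder 4x4 cost matrix of travel minutes (supervisors x school clusters).
rng = np.random.default_rng(0)
travel_minutes = rng.integers(10, 90, size=(4, 4))

rows, cols = linear_sum_assignment(travel_minutes)   # Hungarian algorithm
for supervisor, cluster in zip(rows, cols):
    print(f"Supervisor {supervisor} -> cluster {cluster} "
          f"({travel_minutes[supervisor, cluster]} min)")
print("Total travel:", travel_minutes[rows, cols].sum(), "minutes")
```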
System for generating 3D environments in Unreal Engine from text descriptions. Features Python integration for procedural generation, blueprint spawning system for interactive elements, VR-specific setup options, and dynamic environment systems. Creates fully explorable VR worlds from natural language descriptions.
Pen plotter optimization tool that converts raster images to SVG files for plotting with multiple pens. Features 7 drawing styles, K-means color reduction to match available pens, TSP path optimization for efficient plotting, and separate file export for each pen color. Includes both GUI and command-line interfaces.
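A sketch of the K-means color-reduction stage, quantizing an RGB image to the number of available pens; image loading, SVG generation, and the TSP path step are omitted here.

```python
import numpy as np
from sklearn.cluster import KMeans

def reduce_to_pens(image_rgb, n_pens=4):
    """image_rgb: (H, W, 3) uint8 array. Returns quantized image and pen palette."""
    pixels = image_rgb.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=n_pens, n_init=10, random_state=0).fit(pixels)
    pen_colors = np.rint(km.cluster_centers_).astype(np.uint8)
    quantized = pen_colors[km.labels_].reshape(image_rgb.shape)
    return quantized, pen_colors

# Demo on random pixels; a real run would load a raster image instead.
demo = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
quantized, pens = reduce_to_pens(demo, n_pens=4)
print("Pen palette:", pens.tolist())
```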
This project transforms video files into properly formatted screenplays through a distributed processing pipeline that leverages specialized AI models across multiple hardware platforms. The system intelligently extracts and analyzes both visual and audio elements from video content to generate industry-standard screenplay documents complete with scene headings, character identification, dialogue attribution, and action descriptions.
The technical architecture employs a hybrid approach that distributes computational tasks across a Raspberry Pi 5 with Hailo8L NPU accelerator and a Windows machine with NVIDIA RTX 3090 GPU. The Pi handles real-time scene detection and face recognition using the Hailo8L's efficient inference capabilities, while the 3090 processes audio transcription through OpenAI's Whisper model and generates scene descriptions using vision-language models like BLIP-2 or LLaVA. The pipeline uses temporal sampling strategies to extract only keyframes at scene boundaries rather than processing every frame, dramatically reducing computational overhead while maintaining accuracy.
Key technical components include PySceneDetect for shot boundary detection, face_recognition for character tracking and clustering, Whisper for dialogue transcription with timestamps, and smaller LLMs (Mistral 7B or Llama 3.1 8B) running through Ollama for final screenplay formatting. The system implements sophisticated character identification by tracking faces across scenes, clustering similar appearances, and using contextual clues from dialogue to assign character names. The distributed architecture communicates via Flask REST APIs, allowing the specialized hardware components to work in concert while remaining modular and scalable. This approach demonstrates how complex video analysis tasks can be accomplished on consumer hardware by intelligently orchestrating multiple specialized models rather than relying on a single, computationally expensive large model.
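A compressed, single-machine sketch of the first two pipeline stages (shot-boundary detection and timestamped transcription); in the real system these run on separate hardware behind Flask REST APIs, and the video path here is a placeholder.

```python
from scenedetect import detect, ContentDetector   # pip install scenedetect[opencv]
import whisper                                      # pip install openai-whisper

VIDEO = "episode.mp4"   # placeholder path

# 1. Shot boundaries: only keyframes at these cuts are sent on for analysis.
scenes = detect(VIDEO, ContentDetector())
for start, end in scenes:
    print(f"Scene {start.get_timecode()} -> {end.get_timecode()}")

# 2. Dialogue with timestamps, later attributed to characters via face clusters.
model = whisper.load_model("base")
result = model.transcribe(VIDEO)
for seg in result["segments"]:
    print(f"[{seg['start']:.1f}s] {seg['text'].strip()}")
```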
08142025