# Vision.X

*(Screenshot: Vision.X app)*

## Architecture

*(Architecture diagrams)*

## Demo Image

*(Demo screenshot)*


A mobile app designed to assist visually impaired individuals by leveraging object detection, text-to-speech conversion, and environmental interaction features. The app uses Flutter and Dart to create a user-friendly interface, making it easier for users to navigate and interact with their surroundings.

## Key Features

- **Object Detection:** Uses machine learning models to track and identify objects in the user's surroundings through the mobile camera.
- **Text-to-Speech Conversion:** Converts text information, such as object labels or environmental details, into spoken words to help users understand their surroundings.
- **Adaptive Interface:** Provides an adaptive, user-friendly interface with gesture controls, voice commands, and customizable settings to accommodate various user preferences.
- **Accessibility:** Ensures compatibility with screen readers and adherence to accessibility guidelines, making the app usable for individuals with visual impairments.
- **Real-time Interaction:** Enables real-time interaction with the camera feed, giving users instant feedback about objects and their environment.
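The detection-to-speech flow above can be sketched as a small function that turns detector output into a sentence for the speech engine. This is an illustrative Python sketch, not the app's actual Dart code; the `label`/`confidence` record structure and the threshold are assumptions:

```python
# Turn raw object-detector output into a sentence suitable for
# text-to-speech. The detection record shape ({"label", "confidence"})
# and the 0.5 threshold are illustrative assumptions.

def describe_detections(detections, min_confidence=0.5):
    """Return a spoken-friendly summary of detected objects."""
    labels = [d["label"] for d in detections if d["confidence"] >= min_confidence]
    if not labels:
        return "No objects detected."
    # Deduplicate and sort so repeated detections read as one object.
    return "I can see: " + ", ".join(sorted(set(labels))) + "."

print(describe_detections([
    {"label": "chair", "confidence": 0.91},
    {"label": "person", "confidence": 0.87},
    {"label": "cat", "confidence": 0.30},  # below threshold, ignored
]))
# -> I can see: chair, person.
```

In the real app, the returned string would be handed to the platform's text-to-speech engine on each camera-frame update.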

## Getting Started

These instructions will help you get a copy of the project up and running on your local machine for development and testing purposes.

### Prerequisites

- Flutter SDK (includes Dart) installed and available on your `PATH`
- An emulator or a physical device for running the app

### Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/Manu-N-S/ZILCKATHON-HFT-SerVIsta.git
   ```

2. Navigate to the project directory:

   ```bash
   cd ZILCKATHON-HFT-SerVIsta
   ```

3. Install dependencies:

   ```bash
   flutter pub get
   ```

### Running the App

Now that you have the project and its dependencies installed, you can run the app on your local machine:

```bash
flutter run
```

# Vision.AI - Backend

## Overview

Llava7B is a versatile multimodal AI model that extracts features from both image and text inputs. This section is a quick guide to the features most relevant for integrating it into your project.

## Key Features

1. **Multimodal Support:** Extract features from both image and text inputs, providing a unified solution for diverse data types.
2. **Feature Extraction:** Identify patterns, structures, and relevant information in the input data for valuable insights.
3. **Easy Integration:** A simple API enables seamless integration into existing projects, minimizing development effort.
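A client would typically send the backend an image plus a text prompt in a single request. As a minimal sketch, the helper below packages both into a JSON body with the image base64-encoded; the field names (`image`, `prompt`) are illustrative assumptions, so check the backend code for the actual API contract:

```python
import base64
import json

# Build a request body for a multimodal (image + text) backend.
# NOTE: the field names below are illustrative assumptions, not the
# documented Vision.AI API.

def build_feature_request(image_bytes: bytes, prompt: str) -> str:
    """Package an image and a text prompt as a JSON request body."""
    return json.dumps({
        "image": base64.b64encode(image_bytes).decode("ascii"),
        "prompt": prompt,
    })

if __name__ == "__main__":
    body = build_feature_request(b"<raw image bytes>", "What objects are in front of me?")
    print(json.loads(body)["prompt"])
```

The resulting string could then be POSTed to the backend with any HTTP client, and the decoded response would carry the extracted features or a text answer.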
