
DOCR-Handwriting-Pipeline

Front-end user interface for the Handwriting Recognition Application for Tetanus Treatment.

Acknowledgements

This project module has been developed by, and belongs to, the authors listed below.

Authors

Built With

This project has been built with:

Python

Flask

Prerequisites

To set up the application locally or for deployment, complete the following steps:

[DEPLOY] To deploy this application on Amazon Web Services, you need to set up a GPU-backed (g-type) EC2 instance.

[LOCAL] To run locally, you need to install CUDA Toolkit 11.3 (https://developer.nvidia.com/cuda-11.3.0-download-archive)
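
A quick way to confirm the local CUDA prerequisite is to check that the NVIDIA driver and the CUDA compiler respond. The sketch below simply shells out to the standard nvidia-smi and nvcc tools and assumes they are on your PATH:

# Optional sanity check for the [LOCAL] prerequisite: confirms the NVIDIA
# driver and the CUDA 11.3 compiler are visible on this machine.
import subprocess

subprocess.run(["nvidia-smi"], check=True)          # driver / GPU visibility
subprocess.run(["nvcc", "--version"], check=True)   # should report release 11.3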

Installations

First, update the system packages and install python3-pip:

sudo apt update
sudo apt install python3-pip

Next, install the additional system dependencies:

sudo apt install build-essential
sudo apt-get install ffmpeg libsm6 libxext6  -y

Then, install the packages into the Anaconda environment using the env-spec.txt file:

pip install --user --upgrade aws-sam-cli
conda update --name base --file env-spec.txt

Detectron2 and PyTorch need to be installed from their own distributions because of version constraints:

pip install torch==1.9.1+cu111 torchvision==0.10.1+cu111 torchaudio==0.9.1 -f https://download.pytorch.org/whl/torch_stable.html
pip install 'git+https://github.com/facebookresearch/detectron2.git' --no-cache-dir
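
After the steps above, a short check can confirm that PyTorch sees the GPU and that Detectron2 imports against the installed build. A minimal sketch, assuming the commands above completed successfully:

# Verify the PyTorch + Detectron2 installation.
import torch
import detectron2

print("torch:", torch.__version__)               # expected 1.9.1+cu111
print("CUDA available:", torch.cuda.is_available())
print("detectron2:", detectron2.__version__)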

Environment Variables

To run this project, you will need to add the following variable to your ./Misc/Constant.py file:

DEFAULT_PATH = [PATH TO THE PIPELINE]
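
For reference, a minimal ./Misc/Constant.py might look like the following; the path shown is only a placeholder and should point at your local clone of the pipeline:

# ./Misc/Constant.py -- placeholder value, replace with your own path
DEFAULT_PATH = "/home/ubuntu/DOCR-Handwriting-Pipeline"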

Run pipeline's prediction flow in Folder mode

This project has two modes: Folder and Server.

To run in Folder mode:

  1. Paste the input images/records into the "static/Input" folder.

  2. Activate the conda environment:

conda activate base

  3. Run the pipeline:

python main.py -op Folder

  4. Results at each stage can be found in the subfolders of "static/Output/" (a small inspection sketch follows the tree below):
 ├── PaperDetection                # Records with irrelevant parts cropped out
 ├── Preprocessing                 # Normalized instances of the Paper Detection output images
 ├── TextDetection                 # Cropped images of handwriting lines, divided into folders
 │   ├── .../coordinates.json      # Coordinates of each cropped image on the Paper Detection output images
 ├── Adaptive                      # Adaptively preprocessed images from the Text Detection instances
 │   ├── Adaptive-Preview          # Previews of manually processed images (for UI usage)
 │   ├── .../blur.json             # Blur degree of each processed image (for UI usage)
 └── TextRecognition               # Transcription of handwriting to machine text, stored in JSON
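
The JSON files produced above can be inspected directly. Their schema is not documented here, so the sketch below only walks the Text Detection output folders and previews each coordinates.json it finds:

# Inspection sketch: list each Text Detection result folder and preview its
# coordinates.json. Read-only; makes no assumption about the JSON schema.
import json
from pathlib import Path

output_root = Path("static/Output/TextDetection")
for coords_file in sorted(output_root.glob("*/coordinates.json")):
    with coords_file.open() as f:
        coords = json.load(f)
    print(coords_file.parent.name, "->", json.dumps(coords)[:120], "...")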

Run pipeline's prediction flow in Server mode

To run in Server mode:

  1. Activate the conda environment:

conda activate base

  2. Run the pipeline locally in Server mode:

python main.py -op Server

The services can be accessed at the following endpoint (a quick connectivity check follows the steps below):

localhost:8080
  3. Run the UI code and navigate to the following URL to upload a new record into the pipeline:

localhost:3000/input

  4. Click on the image to start the operation in the UI.
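
To confirm the Server-mode pipeline is listening before wiring up the UI, a bare connectivity check against port 8080 is enough. The concrete routes are defined by the Flask app and are not documented here, so the sketch below only checks that the port answers:

# Connectivity check against the Server-mode endpoint (localhost:8080).
import http.client

conn = http.client.HTTPConnection("localhost", 8080, timeout=5)
conn.request("GET", "/")            # "/" may or may not be a defined route;
resp = conn.getresponse()           # any HTTP response proves the server is up
print(resp.status, resp.reason)
conn.close()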
