LLM Essay Evaluator
An LLM tool for evaluating essays using local models.
The Essay Evaluator automates essay grading using local language models and sentiment analysis built on the DSPy framework. It provides detailed, objective feedback on grammar, structure, and content. This self-directed learning project includes both the evaluation logic and a user-friendly interface. Read more in the DSPy project guide on Medium.
Project Features
- Automated Essay Evaluation: reduces the time and effort required to grade essays by leveraging machine learning.
- Detailed Feedback: offers comprehensive feedback on key aspects of the essay, such as grammar, structure, and content quality (see the sketch after this list).
- Enhanced Objectivity: minimizes subjective bias in the grading process.
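As a rough sketch of how this feedback could be modeled with DSPy, the signature below declares one input and several structured feedback outputs. The field names and the 1-10 score scale are illustrative assumptions, not the repository's actual code:

```python
import dspy

class EvaluateEssay(dspy.Signature):
    """Grade an essay and return structured, objective feedback."""

    essay: str = dspy.InputField(desc="Full text of the essay to grade")
    grammar_feedback: str = dspy.OutputField(desc="Issues with grammar and mechanics")
    structure_feedback: str = dspy.OutputField(desc="Comments on organization and flow")
    content_feedback: str = dspy.OutputField(desc="Assessment of argument and content quality")
    sentiment: str = dspy.OutputField(desc="Overall tone of the essay")
    overall_score: int = dspy.OutputField(desc="Score from 1 to 10")

# A Predict module turns the signature into a callable prompt pipeline;
# invoking it requires a configured LM (see the Tech Stack sketch below).
evaluate_essay = dspy.Predict(EvaluateEssay)
```

`dspy.Predict` is the simplest DSPy module; `dspy.ChainOfThought` could be swapped in if more deliberate, step-by-step grading is wanted.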
Tech Stack
- Python: core language for the application and its machine learning logic.
- DSPy: integrates prompt workflows into the application (see the configuration sketch after this list).
- Ollama: manages the deployment of local large language models.
- Streamlit: powers the interactive user interface.
- WSL: enables a Linux-based development environment on Windows.
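To illustrate how DSPy and Ollama might connect, here is a minimal configuration sketch. It assumes a recent DSPy release, which routes local models through `dspy.LM` via LiteLLM (older releases used `dspy.OllamaLocal` instead), and `llama3` is only a placeholder for whatever model you have pulled:

```python
import dspy

# Point DSPy at a model served locally by Ollama (default port 11434).
# "llama3" is a placeholder; substitute any model pulled with Ollama.
lm = dspy.LM(
    "ollama_chat/llama3",
    api_base="http://localhost:11434",
    api_key="",
)
dspy.configure(lm=lm)

# Any DSPy module defined elsewhere can now run against the local model:
# result = evaluate_essay(essay="The quick brown fox ...")
# print(result.overall_score, result.grammar_feedback)
```

Because the model is served locally, essays never leave the user's machine, which is a natural fit for student work.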
Project Workflow
- Clone the repository:

  ```bash
  git clone https://github.com/Gayanukaa/Essay-Evaluator-LLM.git
  ```

- Navigate to the project directory:

  ```bash
  cd Essay-Evaluator-LLM
  ```

- Set up the environment with dependencies:

  ```bash
  conda env create -f environment.yml -n dspy-dev
  ```

- Activate the environment:

  ```bash
  conda activate dspy-dev
  ```

- Run the application (a sketch of what app.py might contain follows these steps):

  ```bash
  streamlit run app.py
  ```
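For a sense of what the Streamlit entry point could contain, here is a minimal sketch. The `evaluate_essay` import is hypothetical, standing in for the DSPy module and LM configuration sketched above; the repository's actual app.py will differ:

```python
import streamlit as st

# Hypothetical module: the DSPy signature and LM configuration
# from the earlier sketches, assumed to live in evaluator.py.
from evaluator import evaluate_essay

st.title("LLM Essay Evaluator")

essay = st.text_area("Paste the essay to evaluate", height=300)

if st.button("Evaluate") and essay.strip():
    with st.spinner("Evaluating essay..."):
        result = evaluate_essay(essay=essay)
    st.subheader("Feedback")
    st.write("**Grammar:**", result.grammar_feedback)
    st.write("**Structure:**", result.structure_feedback)
    st.write("**Content:**", result.content_feedback)
    st.metric("Overall score", result.overall_score)
```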
Scenarios Demonstrated
(Demonstration screenshots are available in the GitHub repository.)
Conclusion
The Essay Evaluator built with DSPy represents a significant step toward automating and enhancing the essay grading process. By leveraging local language models and sentiment analysis, it provides comprehensive, objective, and instant feedback, making it a practical tool for educators and students alike.
References
For more details, please visit the Essay Evaluator GitHub Repository.