Week 9 – CST499 Fall 2025 Capstone
This week, our team made significant progress on the summarization
pipeline and the publishing workflow for our project. We successfully
connected the summarizer to a local LLM running on the client’s computer via an
API. This setup allowed us to generate summaries much faster, which helped us
efficiently test and refine the summarizer’s output.
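For anyone curious about the setup, the core of the request looks roughly like the sketch below. The endpoint URL and model name are placeholders rather than our exact configuration; most local LLM servers (Ollama, LM Studio, and similar) expose an OpenAI-compatible chat completions route like this one.

```python
import requests

# Placeholder endpoint and model name for a local, OpenAI-compatible server.
API_URL = "http://localhost:11434/v1/chat/completions"
MODEL = "llama3"

def summarize(text: str) -> str:
    """Send a document to the local LLM and return its summary."""
    payload = {
        "model": MODEL,
        "messages": [
            {"role": "system", "content": "Summarize the following text concisely."},
            {"role": "user", "content": text},
        ],
        "temperature": 0.3,
    }
    response = requests.post(API_URL, json=payload, timeout=300)
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(summarize("Long article text goes here..."))
```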
On the publishing side, we implemented a working TiddlyWiki
page that displays the generated summaries as tiddlers. Each summary appears in
a “Recent” list, and clicking on a title opens its corresponding tiddler with
the full summary.
Marcelo handled most of the code for generating the HTML file using the
TiddlyWiki CLI, while I contributed by designing and implementing a CSS
stylesheet and theme to improve readability and give the site a cleaner appearance.
The website now meets the requirements for a minimum viable product (MVP),
though we plan to continue refining it.
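As a rough sketch of how the publishing step fits together (not our exact scripts), each summary can be written out as a .tid file tagged "Recent" and the wiki folder rendered to a single HTML page with the TiddlyWiki CLI. The folder layout and the "index" build target below are assumptions about a typical setup.

```python
import subprocess
from datetime import datetime
from pathlib import Path

WIKI_DIR = Path("wiki")              # hypothetical TiddlyWiki folder
TIDDLERS_DIR = WIKI_DIR / "tiddlers"

def write_summary_tiddler(title: str, summary: str) -> None:
    """Save one generated summary as a .tid file tagged 'Recent'."""
    stamp = datetime.now().strftime("%Y%m%d%H%M%S000")
    tid = (
        f"created: {stamp}\n"
        f"tags: Recent\n"
        f"title: {title}\n"
        f"type: text/vnd.tiddlywiki\n\n"
        f"{summary}\n"
    )
    TIDDLERS_DIR.mkdir(parents=True, exist_ok=True)
    (TIDDLERS_DIR / f"{title}.tid").write_text(tid, encoding="utf-8")

def build_site() -> None:
    """Render the wiki folder to a single HTML file via the TiddlyWiki CLI.

    Assumes the folder's tiddlywiki.info defines a build target named 'index'.
    """
    subprocess.run(["tiddlywiki", str(WIKI_DIR), "--build", "index"], check=True)

if __name__ == "__main__":
    write_summary_tiddler("Example Summary", "This is a placeholder summary.")
    build_site()
```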
Next week, we plan to focus on UX/UI improvements for the
website to make it more intuitive and visually appealing. I will also work on automating
the publishing process using GitHub Actions and deploying the site via GitHub
Pages.
One of the challenges we are facing is the limited processing power of our
local computers when running the summarizer service. Even when using the
client’s computer, the summarization process remains time-consuming. Another
challenge involves improving the quality of the generated summaries.
Fortunately, our advisor has suggested a few potential solutions, including
an LLM-as-a-judge approach and the BARTScore metric to evaluate and
refine summary quality. We plan to experiment with both methods next week to
determine which produces better results.
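To give a sense of the LLM-as-a-judge idea, here is a minimal sketch (not our implementation) that reuses the same placeholder local endpoint from above: the judge model is asked to rate a summary against its source and return a single score we can track while refining prompts.

```python
import requests

API_URL = "http://localhost:11434/v1/chat/completions"  # placeholder local endpoint
MODEL = "llama3"                                         # placeholder model name

JUDGE_PROMPT = (
    "You are grading a summary. Given the source text and the summary, "
    "rate the summary's faithfulness and coverage on a scale of 1-5 and "
    "reply with only the number."
)

def judge_summary(source: str, summary: str) -> int:
    """Ask the local LLM to score a summary against its source text."""
    payload = {
        "model": MODEL,
        "messages": [
            {"role": "system", "content": JUDGE_PROMPT},
            {"role": "user", "content": f"SOURCE:\n{source}\n\nSUMMARY:\n{summary}"},
        ],
        "temperature": 0,
    }
    response = requests.post(API_URL, json=payload, timeout=300)
    response.raise_for_status()
    reply = response.json()["choices"][0]["message"]["content"].strip()
    return int(reply[0])  # naive parse; a real version would validate the reply

if __name__ == "__main__":
    print(judge_summary("Full article text...", "Candidate summary..."))
```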