Daily Dose of Data Science

Recap

In the last chapter (Part 12), we explored how language models can be adapted via fine-tuning.

We began by discussing the central question of when fine-tuning is actually worth doing. We studied the reasons to fine-tune and the reasons to avoid it.

After that, we moved to understanding PEFT techniques. In particular, we explored LoRA and QLoRA. We understood...
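The core idea behind LoRA, as covered in that chapter, can be sketched in a few lines. This is an illustrative NumPy sketch, not the newsletter's implementation: the pretrained weight `W` stays frozen, and a trainable low-rank update `B @ A` (scaled by `alpha / r`) is added alongside it. All names (`W`, `A`, `B`, `lora_forward`) are placeholders for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r, alpha = 16, 8, 4, 8

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                # zero-init: adaptation starts as a no-op

def lora_forward(x):
    # Frozen base path plus the scaled low-rank adaptation path
    return x @ W.T + (x @ A.T @ B.T) * (alpha / r)

x = rng.normal(size=(2, d_in))
y = lora_forward(x)

# With B initialized to zero, the adapted layer exactly matches the base layer.
assert np.allclose(y, x @ W.T)
```

Only `A` and `B` (roughly `r * (d_in + d_out)` parameters) would be trained, which is why LoRA is so much cheaper than full fine-tuning; QLoRA additionally quantizes the frozen `W`.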


Recap

In the last chapter (Part 11), we completed our journey of understanding how evaluation works in LLM systems.

We began by exploring multi-turn evaluation. Unlike single-turn scenarios where one prompt produces one response, conversational systems require evaluating behavior across an entire dialogue. A response in later turns often depends on things that happened ...


Recap

In Part 10, we expanded our understanding of evaluation by exploring model benchmarks and the evaluation of LLM-powered applications within their real operational context.

We began by exploring model capability benchmarks. We understood that these benchmarks attempt to measure the general intelligence, reasoning ability, and knowledge breadth of large language mod...


Recap

In Part 9, we began exploring the evaluation space of LLM applications, laying the groundwork by covering the challenges involved and a practical taxonomy of evaluation methods.

We began by examining why LLM evaluation is different from traditional software or classical ML evaluation. Unlike deterministic systems, LLMs generate open-ended, probabilistic outputs. This introduces ...

