🍲 Resep ID Gemma 4

This Space explains an end-to-end fine-tuning project: taking google/gemma-4-e2b-it, adapting it to Indonesian recipe generation, evaluating the result, quantizing it to GGUF, and deploying it as a lightweight recipe assistant.

The goal was simple:

Given an Indonesian dish title, generate a structured recipe with Bahan: (ingredients) and Langkah: (steps) sections in natural Bahasa Indonesia.

Example input:

Tulis resep masakan Indonesia berjudul: "Tumis Kangkung Tempe".

Expected output shape:

Bahan:
- ...
- ...

Langkah:
1. ...
2. ...

Project Summary

| Item | Details |
|---|---|
| Base model | google/gemma-4-e2b-it |
| Fine-tuned model | junwatu/resep-ID-gemma-4-E2B-it |
| GGUF model | junwatu/resep-ID-gemma-4-E2B-it-gguf |
| Dataset | junwatu/indonesian-recipes |
| Task | Indonesian recipe generation |
| Training hardware | AMD Instinct MI300X |
| GPU memory | 192 GB HBM3-class |
| Software stack | ROCm 7.2, PyTorch ROCm wheel, Transformers 5.x, TRL 1.x |
| Training method | Full supervised fine-tune |
| Training data | 66,419 recipes |
| Validation data | 1,748 recipes |
| Held-out test data | 1,748 recipes |
| Final deployment format | Safetensors + GGUF Q4_K_M / Q8_0 |

Why Fine-Tune?

The base Gemma 4 model was already fluent in Indonesian, but it often missed the identity of specific Indonesian dishes.

For example, the base model could produce a plausible recipe, but not always the right recipe. It struggled in particular with regional and highly specific dishes.

A baseline evaluation on 50 held-out recipes showed the main gap:

| Dimension | Base Gemma 4 E2B |
|---|---|
| Language fidelity | 5.00 |
| Format compliance | 3.90 |
| Ingredient plausibility | 3.10 |
| Step coherence | 3.20 |
| Dish authenticity | 2.70 |
| Overall | 3.58 |

The key weakness was dish_authenticity: the model was fluent, but too often produced a generic Indonesian recipe instead of the requested dish.

Dataset

The dataset contains structured Indonesian home-cooking recipes. Each row has:

| Field | Description |
|---|---|
| title | Recipe name |
| ingredients | List of ingredient lines |
| steps | Ordered cooking steps |
| num_ingredients | Ingredient count |
| num_steps | Step count |
| char_count | Approximate recipe length in characters |

The project converts the original parquet files into JSONL splits:

```
data/processed/train.jsonl
data/processed/val.jsonl
data/processed/test.jsonl
```

The held-out test split is not used for training. It is used only for pre/post fine-tune comparison.
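
For reference, a minimal conversion sketch, assuming pandas reads the parquet splits and each JSONL row is written directly in the TRL prompt/completion shape shown later; the input paths and exact prompt wording are assumptions, not the project's verbatim script:

```python
# Hypothetical parquet -> JSONL conversion sketch (paths and prompt wording assumed).
import json
import pandas as pd

def to_example(row):
    # Build the user prompt from the recipe title, and the target completion
    # from the ingredient and step lists.
    prompt = f'Tulis resep masakan Indonesia berjudul: "{row["title"]}".'
    completion = (
        "Bahan:\n" + "\n".join(f"- {item}" for item in row["ingredients"]) +
        "\n\nLangkah:\n" + "\n".join(f"{i}. {step}" for i, step in enumerate(row["steps"], 1))
    )
    return {
        "prompt": [{"role": "user", "content": prompt}],
        "completion": [{"role": "assistant", "content": completion}],
    }

for split in ("train", "val", "test"):
    df = pd.read_parquet(f"data/raw/{split}.parquet")  # hypothetical input location
    with open(f"data/processed/{split}.jsonl", "w", encoding="utf-8") as f:
        for _, row in df.iterrows():
            f.write(json.dumps(to_example(row), ensure_ascii=False) + "\n")
```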

Training Setup

The fine-tune used a single AMD MI300X GPU on ROCm 7.2. The most important training choice was which parts of the model to update.

Gemma 4 is multimodal, but this project trains only the text path:

Train:
- model.language_model.*
- lm_head

Freeze:
- vision tower
- audio tower
- vision/audio adapters
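
A minimal sketch of that split, assuming the trainable text path lives under model.language_model.* and lm_head as listed above and everything else stays frozen; exact module names can vary by Transformers version, so verify them on the loaded model:

```python
# Sketch: train only the text path, freeze vision/audio towers and adapters.
# Module-name prefixes follow the split listed above; check them against the real model.
def freeze_non_text(model):
    for name, param in model.named_parameters():
        is_text = name.startswith("model.language_model.") or name.startswith("lm_head")
        param.requires_grad = is_text

    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    print(f"Trainable parameters: {trainable:,} of {total:,}")
```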

Training Format

The project uses the TRL prompt/completion conversational format:

```json
{
  "prompt": [
    {
      "role": "user",
      "content": "Tulis resep masakan Indonesia berjudul: \"Tumis Kangkung Tempe\"..."
    }
  ],
  "completion": [
    {
      "role": "assistant",
      "content": "Bahan:\n- ...\n\nLangkah:\n1. ..."
    }
  ]
}
```

This format was important. In this stack, the alternative messages format with assistant_only_loss=True caused unstable loss behavior.
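
As a rough sketch of how such a prompt/completion dataset might be fed to TRL's SFTTrainer; the hyperparameters and output directory here are illustrative placeholders, not the project's actual configuration:

```python
# Illustrative TRL training sketch; values are placeholders, not the real run's settings.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset(
    "json",
    data_files={
        "train": "data/processed/train.jsonl",
        "validation": "data/processed/val.jsonl",
    },
)

config = SFTConfig(
    output_dir="outputs/resep-id-gemma",  # hypothetical output directory
    per_device_train_batch_size=4,        # illustrative hyperparameters
    gradient_accumulation_steps=4,
    num_train_epochs=1,
    learning_rate=2e-5,
    bf16=True,
)

# In the actual run, the multimodal model would be loaded first and its
# vision/audio towers frozen (see the sketch above) before being passed in.
trainer = SFTTrainer(
    model="google/gemma-4-e2b-it",
    args=config,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()
```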

Results

The fine-tuned model improved the practical recipe-generation behavior.

| Dimension | Base | Fine-tuned |
|---|---|---|
| Language fidelity | 5.00 | ~4.6 |
| Format compliance | 3.90 | ~4.95 |
| Ingredient plausibility | 3.10 | ~3.5 |
| Step coherence | 3.20 | ~3.9 |
| Dish authenticity | 2.70 | ~3.25 |
| Overall | 3.58 | ~4.0 |

The strongest gains were in format compliance and step coherence, with smaller improvements in ingredient plausibility and dish authenticity; language fidelity dipped slightly but stayed high.

Critical Inference Setting

One important lesson from the project: the fine-tuned model needs repetition control.

```python
model.generate(
    **inputs,
    max_new_tokens=1280,
    do_sample=False,
    repetition_penalty=1.05,
    no_repeat_ngram_size=6,
    pad_token_id=tok.eos_token_id,
)
```

Without no_repeat_ngram_size=6, long recipes can fall into repeated ingredient-list loops.
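
For completeness, a minimal sketch of how tok, model, and inputs above might be prepared, assuming the fine-tuned checkpoint loads through the standard Transformers text-generation API:

```python
# End-to-end inference sketch; assumes the text-only loading path works for this checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "junwatu/resep-ID-gemma-4-E2B-it"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{
    "role": "user",
    "content": 'Tulis resep masakan Indonesia berjudul: "Tumis Kangkung Tempe".',
}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt", return_dict=True
).to(model.device)

out = model.generate(
    **inputs,
    max_new_tokens=1280,
    do_sample=False,
    repetition_penalty=1.05,
    no_repeat_ngram_size=6,        # the repetition control discussed above
    pad_token_id=tok.eos_token_id,
)
print(tok.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```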

For GGUF runtimes such as llama.cpp or LM Studio, use the DRY sampler as the equivalent control, with an allowed length of around 6.

GGUF Deployment

The model was also converted to GGUF for local and CPU-friendly use.

| Quant | Approx. size | Use case |
|---|---|---|
| Q4_K_M | ~3.2 GB | Default portable version |
| Q8_0 | ~4.7 GB | Higher quality, more RAM |

The GGUF model can run with llama.cpp, LM Studio, or other GGUF-compatible runtimes.
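
For example, a small llama-cpp-python sketch; the local GGUF filename and sampling values are assumptions:

```python
# Hypothetical local inference with llama-cpp-python and the Q4_K_M file.
from llama_cpp import Llama

llm = Llama(model_path="resep-ID-gemma-4-E2B-it-Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{
        "role": "user",
        "content": 'Tulis resep masakan Indonesia berjudul: "Tumis Kangkung Tempe".',
    }],
    max_tokens=1024,
    temperature=0.0,      # greedy decoding, mirroring do_sample=False above
    repeat_penalty=1.05,  # pair with the runtime's DRY settings where available
)
print(out["choices"][0]["message"]["content"])
```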

What Worked

The project worked well for common Indonesian home-cooking dishes. Examples of stronger categories: Ayam, Ikan, Sapi, Kambing, Tahu, Tempe, Telur, Udang, Sambal, Tumis, Pepes, and Rendang-style dishes.

Limitations

The main remaining bottleneck is dataset coverage, especially for regional and specialty dishes.

Lessons Learned

  1. Use the native ROCm 7.2 PyTorch wheel on MI300X.
  2. Avoid older ROCm wheels for this Gemma 4 bf16 training path.
  3. Use prompt/completion format with TRL for this stack.
  4. Always run a cheap quick-validation training pass before a full run.
  5. Judge the base model before fine-tuning.
  6. Automatic metrics are not enough for recipe quality.
  7. no_repeat_ngram_size=6 is critical for stable inference.
  8. Dataset coverage matters more than another epoch for rare dishes.

Cost and Runtime

| Phase | Approx. cost |
|---|---|
| Setup and debugging | ~$2.50 |
| Quick validation | ~$1.50 |
| Full training | ~$3.00 |
| Evaluation iterations | ~$2.00 |
| GGUF conversion and upload | ~$1.30 |
| Idle/debugging slack | ~$4.00 |
| Total | ~$14 |

Future cycles should be cheaper because the stack and gotchas are now documented.

Links

- Base model: google/gemma-4-e2b-it
- Fine-tuned model: junwatu/resep-ID-gemma-4-E2B-it
- GGUF model: junwatu/resep-ID-gemma-4-E2B-it-gguf
- Dataset: junwatu/indonesian-recipes

This project inherits the Gemma Terms of Use from the base model.