---
inference: false
datasets:
- medalpaca/medical_meadow_medqa
language:
- en
library_name: transformers
tags:
- biology
- medical
- QA
- healthcare
---

# Galen

Galen is fine-tuned from Mistral-7B-Instruct-v0.2 on a medical question-answering dataset (medalpaca/medical_meadow_medqa).
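
If you want to look at the fine-tuning data, it is available on the Hugging Face Hub. A minimal sketch, assuming the `datasets` library is installed (`pip install datasets`):

```python
# Peek at the fine-tuning data (assumes `pip install datasets`)
from datasets import load_dataset

ds = load_dataset("medalpaca/medical_meadow_medqa", split="train")
print(ds[0])  # prints the first question-answer record and its fields
```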

Galen's view on the future of medicine and AI:

![alt text](1.png)

## Get Started

Install "accelerate" to use CUDA GPU

pip install accelerate
```python
from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained('ahmed/galen')
# device_map="auto" requires accelerate and loads the model onto the GPU;
# do_sample=True is needed for temperature/top_p to take effect
model_pipeline = pipeline(task="text-generation", model='ahmed/galen', tokenizer=tokenizer,
                          device_map="auto", max_length=256, do_sample=True,
                          temperature=0.5, top_p=0.6)
prompt = 'What is squamous carcinoma'
result = model_pipeline(prompt)
# print only the generated answer, stripping the echoed prompt
print(result[0]['generated_text'][len(prompt):])
```
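
Because Galen is based on Mistral-7B-Instruct-v0.2, wrapping the question in the instruct chat template may produce better answers. A minimal sketch that continues the snippet above, assuming the fine-tune kept the base model's `[INST] ... [/INST]` template:

```python
# Build the prompt with the tokenizer's chat template
# (assumes Galen kept Mistral-7B-Instruct-v0.2's template)
messages = [{"role": "user", "content": "What is squamous carcinoma"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False,
                                       add_generation_prompt=True)
result = model_pipeline(prompt)
print(result[0]['generated_text'][len(prompt):])
```

The fairly low `temperature=0.5` and `top_p=0.6` keep answers close to the model's most likely completions; raise them if you want more varied phrasing.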