---
inference: false
datasets:
- medalpaca/medical_meadow_medqa
language:
library_name: transformers
tags:
- biology
- medical
- QA
- healthcare
---

# Galen: an LLM fine-tuned on medical Q&As
Galen is fine-tuned from Mistral-7B-Instruct-v0.2 on the medalpaca/medical_meadow_medqa medical question-answering dataset.
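The training data is public, so it can be inspected directly. A minimal sketch using the `datasets` library (the exact preprocessing applied before fine-tuning is not documented here):

```python
from datasets import load_dataset

# Load the medical QA dataset Galen was fine-tuned on
dataset = load_dataset("medalpaca/medical_meadow_medqa", split="train")

# Inspect one record of the Medical Meadow instruction format
print(dataset[0])
```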
Galen's view on the future of medicine and AI:
## Get Started
Install "accelerate" to use CUDA GPU
pip install accelerate
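Before loading the model, it may be worth confirming that PyTorch can actually see a GPU. A quick check (assumes PyTorch is installed):

```python
import torch

# The pipeline can only offload to a GPU that CUDA can see
if torch.cuda.is_available():
    print("CUDA GPU:", torch.cuda.get_device_name(0))
else:
    print("No CUDA GPU found; generation will fall back to CPU")
```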
Then load the model through a text-generation pipeline and ask a question:

```python
from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained('ahmed/galen')
model_pipeline = pipeline(task="text-generation", model='ahmed/galen', tokenizer=tokenizer,
                          device_map="auto", max_length=256, do_sample=True,
                          temperature=0.5, top_p=0.6)

prompt = 'What is squamous carcinoma'
result = model_pipeline(prompt)

# Print only the newly generated text, stripping the echoed prompt
print(result[0]['generated_text'][len(prompt):])
```
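Since Galen derives from Mistral-7B-Instruct-v0.2, questions will likely be answered better when wrapped in the base model's instruction format. A sketch reusing the tokenizer and pipeline from above, under the assumption that the fine-tune kept Mistral's chat template:

```python
# Build a prompt with the chat template inherited from Mistral-7B-Instruct-v0.2
messages = [{"role": "user", "content": "What is squamous carcinoma?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

result = model_pipeline(prompt)
print(result[0]['generated_text'][len(prompt):])
```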