[ot][spam][log] attempt bits: talk to a language model on list
Undescribed Horrific Abuse, One Victim & Survivor of Many
gmkarl at gmail.com
Mon Sep 11 08:31:27 PDT 2023
colab: https://colab.research.google.com (i think people are using
non-google alternatives now but i know this url)
configuring the gpu runtime and running !nvidia-smi shows the T4 gpu has
15GB of ram, so i'd need the quantized approach to fit the model in gpu
ram: 13b parameters at float16 (2 bytes each) would be 26GB otherwise
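a quick back-of-envelope sketch of that arithmetic (the 13b parameter count and the float16 width are from above; the 4-bit figure is an added assumption about the quantized approach):

```python
# rough vram needed for model weights alone, in decimal GB;
# activations and kv cache need extra headroom on top of this
def weight_gb(n_params: float, bytes_per_param: float) -> float:
    return n_params * bytes_per_param / 1e9

print(weight_gb(13e9, 2.0))   # float16: 26.0 GB -- over the T4's 15GB
print(weight_gb(13e9, 0.5))   # 4-bit quantized: 6.5 GB -- fits
```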
More information about the cypherpunks mailing list