[ot][spam][crazy][data] transformer model 'attention' improvement

Undiscussed Horrific Abuse, One Victim & Survivor of gmkarl at gmail.com
Wed Jan 26 10:47:51 PST 2022


-
https://github.com/xloem/transformers/commit/7575b8286dd5c2b328d3c34d9b66dab434282fc0

A draft that calls memory_efficient_attention from the perceiver
model when the configuration parameters are set.
-
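A minimal sketch of what gating attention on a configuration parameter could look like. The class and attribute names here (DummyConfig, attention_chunk_size, ChunkedAttentionModule) are illustrative assumptions, not the actual names in the linked commit or the transformers library.

```python
# Hypothetical sketch: an attention module that picks its code path
# based on a config flag. Names are illustrative, not from the commit.
class DummyConfig:
    attention_chunk_size = 4  # None would mean: use the standard path

class ChunkedAttentionModule:
    def __init__(self, config):
        # fall back to standard attention when the flag is unset
        self.chunk_size = getattr(config, "attention_chunk_size", None)

    def forward_path(self):
        return "memory_efficient" if self.chunk_size else "standard"

print(ChunkedAttentionModule(DummyConfig()).forward_path())
```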

Untested. Maybe I can copy Google's example again, like before,
run the same test with the configuration settings enabled, and
step through it to make sure it exercises the new code.
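The equivalence check described above could look something like the numpy sketch below. It shows only the query-chunking half of the idea; the actual memory-efficient attention (after Rabe & Staats, 2021) also chunks the keys with a running softmax. Function names and the chunk size are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # numerically stable row-wise softmax
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def full_attention(q, k, v):
    # standard attention: materializes the full (n_q, n_k) score matrix
    return softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v

def chunked_attention(q, k, v, chunk_size=4):
    # process queries a chunk at a time, so only a (chunk_size, n_k)
    # slice of the score matrix exists at any moment
    return np.concatenate(
        [full_attention(q[i:i + chunk_size], k, v)
         for i in range(0, q.shape[0], chunk_size)],
        axis=0,
    )

rng = np.random.default_rng(0)
q = rng.normal(size=(16, 8))
k = rng.normal(size=(32, 8))
v = rng.normal(size=(32, 8))
# both paths should produce the same result
print(np.allclose(full_attention(q, k, v), chunked_attention(q, k, v)))  # True
```

Walking through a test with both paths and asserting they agree, as in the last line, is one way to confirm the new code is actually being used and is correct.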

