A fully on-chain GPT2-Instruct implementation, with both the front-end and backend hosted on the Internet Computer. This free version, v0.2, can process up to 56 tokens (input and output combined). Although the output is still deterministic, the temperature is now adjustable, with 0.7 as the default value.
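To illustrate what the temperature setting does, here is a minimal sketch of temperature-scaled softmax over a model's output logits. This is not the canister's actual code; the function name and inputs are illustrative. Lower temperatures sharpen the distribution toward the highest-scoring token, while higher temperatures flatten it:

```python
import math

def apply_temperature(logits, temperature=0.7):
    """Divide logits by the temperature, then apply softmax.

    As temperature approaches 0, the highest logit dominates and
    sampling becomes effectively deterministic; higher temperatures
    spread probability mass across more tokens.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

With a deterministic decoder, the temperature still reshapes the probabilities, but the same input always yields the same output unless sampling is randomized.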
Disclaimer
This is an early GPT-2 implementation. Responses may be inaccurate or inconsistent. Use for demonstration purposes only. Your feedback helps us improve!
This canister handles the tokenization process for our GPT-2 implementation. It converts raw text inputs into tokens that can be processed by the model.
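As a rough illustration of the text-to-token step, the sketch below uses greedy longest-match tokenization against a toy vocabulary. GPT-2 actually uses byte-pair encoding, which merges frequent byte pairs learned from data; this simplified version, with a hypothetical `vocab` mapping, only conveys the general idea of converting raw text into token IDs:

```python
def tokenize(text, vocab):
    """Greedy longest-match tokenization (simplified stand-in for BPE).

    Repeatedly takes the longest vocabulary entry that prefixes the
    remaining text and emits its token ID; unknown characters fall
    back to the ID -1.
    """
    tokens = []
    i = 0
    while i < len(text):
        match = None
        # Try the longest possible piece first, shrinking until a hit.
        for j in range(len(text), i, -1):
            piece = text[i:j]
            if piece in vocab:
                match = piece
                break
        if match is None:
            match = text[i]  # single character with no vocab entry
        tokens.append(vocab.get(match, -1))
        i += len(match)
    return tokens
```

The resulting ID sequence is what the model consumes; the canister would perform the real BPE equivalent of this conversion on-chain.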