After saving, you will see a "Model saved" message, as shown below.
You can adjust the `temperature` and `max_tokens` values that affect responses. Lower `temperature` values make the model take fewer risks, so completions are more accurate and deterministic; higher `temperature` values result in more diverse completions. The `max_tokens` value should be as close to the expected response size as possible.
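To make the temperature effect concrete, here is a small, self-contained sketch of how temperature rescales a model's logits before softmax sampling (the logit values are made up for illustration; this is not the provider's actual implementation):

```python
import math

def softmax_with_temperature(logits, temperature):
    # Scale logits by 1/temperature, then apply softmax.
    # Lower temperature sharpens the distribution (more deterministic);
    # higher temperature flattens it (more diverse samples).
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                 # hypothetical next-token logits
cold = softmax_with_temperature(logits, 0.2)  # nearly all mass on the top token
hot = softmax_with_temperature(logits, 2.0)   # probabilities closer to uniform
print(round(cold[0], 3), round(hot[0], 3))
```

At `temperature=0.2` the top token dominates the distribution, while at `temperature=2.0` the alternatives keep substantial probability, which is exactly why low temperatures feel deterministic and high ones feel diverse.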
Now we are ready to talk with the chatbot.
The right half of the screen is a chat interface where you can submit messages and receive replies.
To publish the chatbot to Slack, you will need two tokens: a Bot User OAuth Token and an App-Level Token.
To create the Slack app:

1. Create a new Slack app and choose From scratch, or select an existing app. This guide follows the From scratch flow; to create the app From an app manifest instead, please follow the Slack docs here.
2. Enable Socket Mode and add the `connections:write` scope.
3. Copy the `xapp-...` App-Level Token - you'll need it to publish the chatbot.
4. Copy the `xoxb-...` Bot User OAuth Token - you'll need it to publish the chatbot.
5. Subscribe to the `message.im` bot event.
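Since the two tokens look similar and are easy to mix up, a tiny sanity check on their well-known prefixes (`xoxb-` for the Bot User OAuth Token, `xapp-` for the App-Level Token) can catch the mistake before publishing. This helper is a hypothetical illustration, not part of any official tooling:

```python
def check_slack_tokens(bot_token: str, app_token: str) -> list:
    """Return a list of problems; an empty list means the tokens look plausible."""
    problems = []
    if not bot_token.startswith("xoxb-"):
        problems.append("Bot User OAuth Token should start with 'xoxb-'")
    if not app_token.startswith("xapp-"):
        problems.append("App-Level Token should start with 'xapp-'")
    return problems

# Correct order: no problems reported.
print(check_slack_tokens("xoxb-example", "xapp-example"))
# Swapped tokens: both checks fail.
print(check_slack_tokens("xapp-oops", "xoxb-oops"))
```

The prefix check only guards against swapped or truncated tokens; it does not verify that the tokens are actually valid in your workspace.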