Now Assist Skill Kit (NASK) FAQ
What is the Now Assist Skill Kit (NASK)?
Now Assist Skill Kit, or NASK, was released in the Xanadu release. This feature allows you to build and deploy custom skills that leverage generative AI directly within your instance. These skills enable use cases that the current suite of out-of-the-box (OOTB) Now Assist skills, such as task summarization and code generation, cannot address today.
The output of NASK is a custom skill, which can then be activated from within the Now Assist Admin console.
When would I use NASK?
NASK is designed for those seeking greater flexibility with generative AI capabilities. Common scenarios that call for NASK include:
- An instance has highly complex workflows where the output of an LLM is required to drive further action.
- A capability requires the use of an external LLM (that is, a model not managed by ServiceNow).
- This includes the requirement for an LLM to have domain-specific knowledge, or particular data handling and security restrictions that prevent you from using a Now LLM.
- You have organization-specific use cases that OOTB skills do not cater to.
We recommend approaching this feature thoughtfully. ServiceNow is unable to monitor or manage custom solutions, so for most admins we typically recommend staying within the confines of our OOTB capabilities. Where OOTB isn’t fit for purpose, experiment with the configuration options provided within the Now Assist Admin console. If those still aren’t sufficient, NASK may be a good option.
This question is answered more broadly in our article How to approach building custom generative AI solutions using Now Assist.
How can I access the Now Assist Skill Kit?
To access NASK, you must meet the following criteria:
- Have an active license for a Now Assist for [x] product
- Have updated the Now Assist for [x] plugins to the latest versions
- Have an instance that is on at least the Xanadu release
As a note, you cannot access any Now Assist/generative AI features (and consequently NASK) on personal development instances (PDIs).
Once you have confirmed the above, you then need to grant your users access. To do so, add the sn_skill_builder.admin role to your users.
What do I need to know before beginning to use NASK?
The process of building custom skills with NASK draws on a broad range of skills, all of which we recommend becoming familiar with before you start. The user journey is summarized below:
- Define provider: This step requires you to understand the benefits and potential downsides of each LLM under consideration. We typically recommend our generic Now LLM service where possible, but your use case may have particularities that make another LLM preferable.
- Build: During the build process, you will be asked to:
- Define where input data should come from to augment the prompt with the information it needs. At a minimum, this requires an understanding of the architecture of your instance, but it can also require writing a script or building a flow to extract what you need.
- Develop your prompt – NASK provides a text box in which you enter your desired prompt. In the prompt, outline everything the LLM needs to know to produce the outcome you are seeking, including format, language, action, and references to the data you want it to use.
- Adjust prompt settings. These settings can require you to write a script (such as when you wish to include a pre- or post-processor to augment the outgoing or incoming request) or simply require an understanding of LLM fundamentals, such as knowing that temperature relates to how “creative” an LLM can be.
- Test: NASK provides an area for you to test your prompt from the editor itself. Having a rubric that defines success for the outcome of your skill is key.
- Deploy: In the August 2024 release, we enabled you to deploy directly to a UI Action, with the OOTB configuration simply displaying the resulting message from the LLM in an informational message. In the likely case that you wish to take action using the output of that skill, you will need to be confident in your scripting abilities to build that into your UI Action.
How many Assists are consumed when using NASK?
For information on Assist consumption, please refer to our overview or reach out to your account representative.
Do custom skills support languages other than English?
Yes – you can leverage the Dynamic Translation component of the Generative AI Controller to enable the use of custom skills for those operating in a language other than English.
Learn more here.
Where can I limit who has access to the deployed custom skill?
You can do it from within the UI Action itself – learn more here.
Is NASK available in GCC environments?
Not today.
How do I find the NASK in my instance?
Within your instance, you can type Now Assist Skill Kit into the filter navigator to display the link. If not visible, ensure your instance is on at least the Xanadu release, and you have an active license for a Now Assist for [x] product.
How do I build a custom skill using NASK?
You can find a walkthrough video available here, or simply refer to the product documentation to learn how to build a custom skill.
Which LLMs can I use in my custom skill?
Your options today are:
- Now LLM Service
- External LLM
We typically recommend the Now LLM service for most use cases. Those with requirements that prevent this can leverage an external LLM. There are two methods of connecting to external LLMs: via spokes, or via BYOLLM (bring your own LLM).
The prebuilt spokes we offer allow you to connect to external LLMs with ease. The list as of August 2024 is:
- Azure OpenAI
- OpenAI
- Aleph Alpha
- WatsonX
- Google Bard (MakerSuite and Vertex AI)
Note that although these are spokes, they don’t consume Integration Hub transactions but rather Assists. For more information on this topic, please contact your account representative.
Instances on at least the Washington DC release are able to use the generic LLM connector to connect to any external LLM not listed above, i.e. BYOLLM. This process requires a fair amount of technical acumen. To integrate with a non-spoke-supported LLM, you need:
- An API key from the provider
- Endpoint for the LLM
- Access to API documentation for that LLM to assist with writing the transformation script to translate the input and response into an acceptable format
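As a rough illustration of what such a transformation script does, the sketch below translates an outgoing prompt into a hypothetical provider’s request body and extracts the generated text from its response. The field names used here (model, messages, choices) are invented for illustration and are not the format of any specific provider; consult your provider’s API documentation for the real shapes.

```javascript
// Hypothetical sketch of the two transformation steps for a BYOLLM
// integration. Replace the request/response shapes with those from
// your provider's API documentation.
function buildProviderRequest(prompt) {
    // Translate the outgoing prompt into the provider's expected body.
    return JSON.stringify({
        model: 'example-model', // assumed provider parameter
        messages: [{ role: 'user', content: prompt }]
    });
}

function parseProviderResponse(responseBody) {
    // Extract the generated text from the provider's response body.
    var parsed = JSON.parse(responseBody);
    return parsed.choices[0].message.content; // assumed response shape
}
```

The key point is that both directions need translating: your instance sends a prompt in its own format, and the provider replies in its own format, so the script bridges the two.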
Regardless of the external LLM you choose to connect to, you will be responsible for managing the appropriate license and model configuration for your use case.
Can I use multiple LLMs in a single skill?
This is unsupported within the product today, but you can build a workaround by doing the following:
- Creating a custom skill
- Deploying it as a UI Action
- Taking the output of your custom skill within the script for the UI Action and sending it to the other LLM for processing.
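The workaround above can be sketched as follows. chainSkills, callFirstSkill, and sendToSecondLLM are hypothetical names: in a real UI Action, the first call would wrap the custom skill execution (see the script example later in this FAQ), and the second would be an outbound REST call to your other LLM.

```javascript
// Hypothetical sketch of chaining two LLMs inside a UI Action script.
// The two callbacks stand in for the real skill execution and the
// outbound call to the second model.
function chainSkills(recordSysId, callFirstSkill, sendToSecondLLM) {
    var firstOutput = callFirstSkill(recordSysId); // output of the custom skill
    if (!firstOutput) {
        return null; // first skill returned nothing; nothing to forward
    }
    // Forward the first model's output as the second model's input.
    return sendToSecondLLM('Refine the following draft:\n' + firstOutput);
}
```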
What data can I bring in to use within my prompt?
You can bring in data from anywhere you have access to – records, flows, subflows, scripts, integrations, events. As long as it is stored somewhere in ServiceNow, and you have access rights to it, you can configure NASK to pull it into the prompt.
Note, however, that if the data is any more complex than fields on a record, you will likely need to create a subflow or script to parse the data into a string usable within the prompt. You can see an example here.
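As a minimal sketch of such a parsing script, the function below flattens a set of field name/value pairs into a labelled string that could be dropped into a prompt. In a real skill the record data would come from your skill input or flow; here it is passed in as a plain object for illustration.

```javascript
// Hypothetical sketch: flatten field name/value pairs into a single
// labelled string that is easy for an LLM to read inside a prompt.
function fieldsToPromptString(fields) {
    return Object.keys(fields)
        .map(function (name) {
            return name + ': ' + String(fields[name]);
        })
        .join('\n');
}
```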
You need to add a skill input to bring in information from a particular record or to pass a static value such as a string or Boolean. To do so, click the plus icon next to Skill Inputs. From the modal that appears, select the type of data you wish to add, then finish populating the form with the details of the input. You can then use this skill input as an input to a flow, script, or directly within the prompt itself.
You can add a tool to use the output of a workflow within the prompt. To do so, click the plus icon to the right of the Tools section. Within the modal that opens, select which type of workflow you wish to add. If your flow requires a particular input, you can populate it with either a skill input (noted above) or a static value that you provide yourself.
How can I build a good prompt?
Prompt engineering is an art form, and it can vary from model to model and from use case to use case. Generally, adding as much specificity as possible helps the LLM generate a better result. The example below showcases this; we initially asked the LLM to do the following:
You are an expert in understanding the underlying emotions within text. Review the below survey answers and determine what the overall sentiment of the user is, and answer in one word.
The survey questions and answers are found below: {{GetSurveyResults.survey_comments}}
This produced fairly poor results, so we iterated until we arrived at the following prompt:
You are an expert in understanding the underlying emotions within text.
Review the below survey answers and determine what the overall sentiment of the user is, and answer in one word.
Use the following categories to provide the overall sentiment:
Negative: If the sentiment is negative in nature
Positive: If the sentiment is positive in nature
Neutral: If the sentiment is neither negative nor positive
The response should only contain the overall sentiment.
The survey questions and answers are found below: {{RetrieveSurveyResults.survey_comments}}
The second prompt provided us with outputs we classed as successful at a much higher frequency than the first.
This example is rather specific to our use case however, so we recommend spending the time to test and iterate on your prompts prior to deployment.
How can I dictate my desired format for the output of the skill?
You can do so from within the prompt itself – by adding statements such as:
- Provide the list in bullet points
- Answer in one word
- Expand all acronyms in your response
What is meant by pre- and/or post-processors?
When building your skill, you have the option to add pre- or post-processors. These are scripts that run before the prompt leaves your instance (pre-processor) or after the response has been returned (post-processor).
These are useful if you have particular data handling restrictions that limit what data can leave your instance: you can configure a method of masking/unmasking particular information if the OOTB Sensitive Data Handler is not fit for your needs. Another use case is developing a mapping of acronyms specific to your organization; before the prompt is delivered to the LLM, a pre-processor can expand the acronyms so that the LLM knows what they represent.
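A minimal sketch of the acronym use case, assuming a hypothetical acronym map and a pre-processor that receives the prompt text as a string:

```javascript
// Hypothetical pre-processor sketch: expand organization-specific
// acronyms in the prompt before it is sent to the LLM.
// The acronym map below is an invented example.
var ACRONYMS = {
    'CAB': 'Change Advisory Board',
    'MIM': 'Major Incident Management'
};

function expandAcronyms(promptText) {
    // Replace each known all-caps token with its expansion, keeping
    // the original acronym in parentheses for context.
    return promptText.replace(/\b[A-Z]{2,}\b/g, function (token) {
        return ACRONYMS.hasOwnProperty(token)
            ? ACRONYMS[token] + ' (' + token + ')'
            : token;
    });
}
```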
Can I employ the use of multiple prompts in a single skill?
Not today.
How do I delete custom skills I no longer need?
For now, navigate to the sn_nowassist_skill_config table and manually delete them.
How can I test my custom skill?
We offer an in-product method of testing. To do so, click Run tests below the prompt editor. You will see the output from the LLM in the Response tab. To review the data that was added to the prompt from your skill inputs/tools, click the Grounded prompt tab.
Do note that testing your skill consumes an Assist. For more information please reach out to your account representative.
Where can I deploy my custom skills?
As of the August 2024 release, you can deploy the skill only via a UI Action or from within a script. Future releases will add support for additional deployment options.
I’m done building my custom skill. What now?
Once your prompt is complete and you have finished testing, you can publish and deploy it.
To publish it, click Publish in the top right of the screen. This locks your prompt, meaning no further adjustments can be made. If you wish to refine it at a later date, you will have to create a copy of the prompt and work on that copy.
Once published, click on the Skill Settings tab, then click on Deployment Settings in the left navigation bar. This will give you the option to configure two things:
- Where in the Now Assist Admin console the skill should be found.
- How and where users will trigger your skill.
As of August 2024, you can only deploy it to a UI Action. To do so, select the UI Action box, determine which record type the UI Action should appear on (typically whatever you selected as a skill input), and click Save. This automatically generates a UI Action that, when triggered, calls the skill and returns the response in an informational message. You can edit how the output is used directly within the UI Action’s script.
Can I call my custom skills from within a Flow or a Virtual Agent topic?
You cannot do this directly today. A workaround however is to build a script (see next question) that calls your skill and returns the response for you to use in your Flow or VA topic.
Can I call the custom skill from within a script?
Yes. See an example script below, and replace the variables with your data.
var inputsPayload = {};
// Create the payload to deliver input data to the skill
inputsPayload['input name'] = {
    tableName: 'table name',
    sysId: 'sys_id',
    queryString: ''
};

// Create the request by combining the capability sys ID and the skill config sys ID
var request = {
    executionRequests: [{
        payload: inputsPayload,
        capabilityId: 'capability sys id',
        meta: {
            skillConfigId: 'skill config sys id'
        }
    }],
    mode: 'sync'
};

// Run the custom skill and get the output as a string
try {
    var output = sn_one_extend.OneExtendUtil.execute(request)['capabilities'][request.executionRequests[0].capabilityId]['response'];
    var LLMOutput = JSON.parse(output).model_output;
} catch (e) {
    gs.error(e);
    gs.addErrorMessage('Something went wrong while executing the skill.');
}
action.setRedirectURL(current);
Can I see a demo using the NASK?
You can find one here.
When I navigate to NASK, I get an error stating “You do not have permission to access this page”
To access NASK you need to have the sn_skill_builder.admin role. Please ensure your user has that role, then log out and log back in to see if access has been granted.
Why is my custom skill not appearing in Now Assist Admin console?
Ensure that your skill has been published, and a deployment method selected.
If you selected “Other” under the deployment settings, you will find your skill in the tab named Available.
Why can’t I find NASK in my instance?
1. Check the following:
a) Your instance is on the Xanadu release.
b) Your license for the relevant Now Assist plugins is up to date.
c) All relevant Now Assist plugins (Now Assist for ITSM/HRSD/CSM/ITOM, etc.) are up to date. The Now Assist Skill Kit comes bundled with the latest version of the plugins.
2. To see and access the Now Assist Skill Kit, grant the sn_skill_builder.admin role to the users who will use it. If this role has not been assigned, you’ll receive an error message when you try to navigate to the Now Assist Skill Kit.
3. Once you’ve assigned the role, log out and then log back in for the change to take effect.
If you have completed steps 1 - 3 and are still unable to access NASK, please log a case, and we will look into it for you.
Why isn't my custom skill appearing on the record?
This is likely because your skill hasn’t been:
- Published
- Deployed as a UI Action
- Activated from within the Now Assist Admin console
Why am I not getting a response when I run a test?
You may be experiencing issues with your connection to the LLM. If using the generic Now LLM service, please raise a case and we will investigate. If using an external LLM, please first verify that the service is running, then check your connection and credentials.
