Testing and Debugging
Introduction
The `.test.ts` files are used to locally execute Services, Commands, External Entities, Operations and Agents. Through the Runner you provide the input values for the execution of a service, command or operation, and you can access their output after the script has been executed. For Operations, the input properties belong to the request parameters and the request body. The output object (or response, in the case of operations) is used to read the output property values after `await runner.run()` has completed successfully. The `await runner.run()` call is the line that executes the service, command or operation. The `run()` function either takes no input at all or, in the case of Instance Commands, takes the instance `id` as input.
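The general input/run/output pattern can be sketched with a hypothetical stand-in runner class (the real runner classes are generated from your design, so their names and properties will differ):

```typescript
// Hypothetical stand-in illustrating the runner pattern:
// assign the input, await run(), then read the output.
class ExampleServiceRunner {
  input: { amount?: number } = {};
  output: { doubled?: number } = {};
  async run(): Promise<void> {
    // A generated runner would execute your implementation here;
    // this stand-in just doubles the input value.
    this.output.doubled = (this.input.amount ?? 0) * 2;
  }
}

async function main(): Promise<void> {
  const runner = new ExampleServiceRunner();
  runner.input.amount = 21;            // provide the input values
  await runner.run();                  // execute the service
  console.log(runner.output.doubled);  // read the output after run()
}

main();
```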
For Services and Commands that trigger the sending of business events, you need to provide a local configuration for the Topic Binding Secret(s):

1. Install and configure your Kafka broker (see https://kafka.apache.org/quickstart).
2. Create a `.env` file and place it in your project root directory. Make sure it is added to the `.gitignore` file so it is not checked into the git repository.
3. For each topic, add the proper topic binding configuration to the `.env` file, as shown below:
```bash
<Topic Binding Name> ='{"topicName":"<kafka Topic Name>","kafkaBinding":{"kafka_brokers_sasl":["<Kafka Broker sasl>"], "user": "", "password": ""}}'
```
Where `<Topic Binding Name>` is the name you entered in Solution Designer and `"topicName"` is the name of the topic on the Kafka cluster.
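For illustration, a filled-in entry might look like the following (the binding name, topic name, broker address and credentials here are all hypothetical placeholders; substitute your own values):

```bash
# .env — example topic binding (hypothetical values)
OrderEventsTopicBinding='{"topicName":"orders-events","kafkaBinding":{"kafka_brokers_sasl":["broker-0.example.com:9093"], "user": "myUser", "password": "myPassword"}}'
```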
The steps that must be followed are:

1. Open a `.test.ts` file (it won't work with implementation files).
2. Set some breakpoints in the implementation files.
3. Navigate to the debug section in the left menu bar.
4. Launch the debug in VS Code on "Current Test".
5. Trace variables in the sidebar.
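A minimal launch configuration for this could look like the sketch below. This is an assumption for a Mocha-based setup with `ts-node`; the configuration generated in your project may differ, so treat the `program` and `args` values as placeholders:

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Current Test",
      "program": "${workspaceFolder}/node_modules/mocha/bin/_mocha",
      "args": ["-r", "ts-node/register", "${file}"],
      "outputCapture": "std"
    }
  ]
}
```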
Test Environment
With `TestEnvironment` you can create new instances of entities and then run one or more test scenarios against the created instances. After the tests have been executed, the created instances can be deleted all at once using the `cleanUp()` method.
Example:
```typescript
describe('solution:Command1', () => {
  // We create a new instance of TestEnvironment so that it is
  // accessible in the test blocks.
  // Each instance of it can handle its own test data.
  const testEnvironment = new TestEnvironment();
  // We define the created entity so that it is accessible in the test blocks
  let createdEntity;
  before(async () => {
    createdEntity = testEnvironment.factory.entity.cptest.RootEntity1();
    // We set values for each property
    createdEntity.property1 = "value1";
    createdEntity.property2 = "value2";
    // We create the entity in the database
    await createdEntity.persist();
  });
  // This block defines what will happen after all tests are executed.
  after(async () => {
    // Delete everything we've created
    // through the testEnvironment in this test session
    await testEnvironment.cleanUp();
  });
  // Option 1: Create and delete the entity in the actual test.
  // In this case you do not need the before() and after() blocks.
  it('works on a RootEntity2', async () => {
    // Initialize the entity
    const rootEntity2 = testEnvironment.factory.entity.cptest.RootEntity2();
    // Set values for the properties of the entity
    rootEntity2.property1 = "value1";
    // Create the entity in the database
    await rootEntity2.persist();
    const runner = new cptest_Command1Runner();
    // Run the test on the instance we created before
    await runner.run(rootEntity2._id);
    console.warn('No tests available');
    expect(true).to.equal(true);
    // Delete the instance created before
    await rootEntity2.delete();
  });
  // Option 2: Use the find() function to search for a specific entity
  // that was created in the before() block. There is no need to delete
  // it manually; the after() block will do that for you.
  it('works on a rootEntity1', async () => {
    // The before() block will run automatically before this test, provided it was implemented
    // Find an instance that already exists
    const foundEntity = await testEnvironment.repo.cptest.RootEntity1.find(true, 'myFilter');
    const runner = new cptest_Command1Runner();
    // Run the test on the instance that already exists
    await runner.run(foundEntity._id);
    console.warn('No tests available');
    expect(true).to.equal(true);
    // The after() block will run automatically
  });
});
```
Debug Factory Commands
Debug a Factory Command by following the structure below:
```typescript
it('works on an existing rootEntity1 that we find', async () => {
  // The before() block will run automatically before this test, provided it was implemented
  const runner = new cptest_FactoryCommand1Runner();
  // Give input to the factory command
  runner.input = testEnvironment.factory.entity.ns.FactoryCommandIdentifier_Input();
  runner.input.property1 = value1;
  runner.input.property2 = value2;
  // This will return the created instance of the root entity
  const factory_output = await runner.run();
  console.warn('No tests available');
  expect(true).to.equal(true);
});
```
Debug Instance Commands
Debug an Instance Command by following the structure below:
```typescript
it('works on an existing rootEntity1 that we find', async () => {
  const runner = new cptest_Command1Runner();
  // Give input to the instance command
  runner.input = testEnvironment.factory.entity.ns.InstanceCommandIdentifier_Input();
  runner.input.property1 = value1;
  runner.input.property2 = value2;
  // Use the id of the created entity.
  // This will return the modified instance of the root entity
  const instance_output = await runner.run(createdEntity._id);
  console.warn('No tests available');
  expect(true).to.equal(true);
  // Read the properties of the modified instance
  instance_output._id;
  instance_output.prop1;
  instance_output.prop2;
});
```
Debug Services
Debug a Service by following the structure below:
```typescript
it('works on an existing rootEntity1 that we find', async () => {
  // The before() block will run automatically before this test, provided it was implemented
  const runner = new cptest_Service1Runner();
  // Give input to the service
  runner.input = testEnvironment.factory.entity.ns.Service1Identifier_Input();
  runner.input.property1 = value1;
  runner.input.property2 = value2;
  // This returns the output entity
  const service_output = await runner.run();
  console.warn('No tests available');
  expect(true).to.equal(true);
  // Read the output properties of the service
  service_output.prop1;
  service_output.prop2;
});
```
Debug External Entities
Debug an External Entity by following the structure below:
```typescript
describe('ns:ExternalEntityId', () => {
  const testEnvironment = new TestEnvironment();
  before(async () => {
    // This block will run automatically before all tests.
    // Alternatively, use beforeEach() to define what should automatically happen before each test.
    // This is an optional block.
  });
  after(async () => {
    // This block will run automatically after all tests.
    // Alternatively, use afterEach() to define what should automatically happen after each test.
    // This is an optional block.
    // Recommended: remove all instances that were created
    // await testEnvironment.cleanUp();
  });
  describe('create', () => {
    it('works', async () => {
      // const runner = new externalEntityRunners.ns_ExternalEntityIdConstructorRunner();
      // await runner.run();
      console.warn('No tests available');
      expect(true).to.equal(true);
    });
  });
  describe('load', () => {
    it('works', async () => {
      // const runner = new externalEntityRunners.ns_ExternalEntityIdLoaderRunner();
      // await runner.run();
      console.warn('No tests available');
      expect(true).to.equal(true);
    });
  });
  describe('validate', () => {
    it('works', async () => {
      // const runner = new externalEntityRunners.ns_ExternalEntityIdValidatorRunner();
      // await runner.run(false);
      console.warn('No tests available');
      expect(true).to.equal(true);
    });
  });
});
```
Debug Operations
Debug an Operation by following the structure below:
```typescript
it('works on an existing rootEntity1 that we find', async () => {
  // The before() block will run automatically before this test, provided it was implemented
  // Create the runner generated for the operation
  const runner = new apitest_Operation1Runner();
  // Initialize the path parameters of the operation
  runner.request.path.parameter1 = value1;
  // Initialize the query parameters of the operation
  runner.request.query.parameter1 = value2;
  // Initialize the request body and its properties
  // (if the body is a complex type, create it via the schema factory)
  runner.request.body = testEnvironment.factory.schema.nspacrnm.SchemaIdentifier();
  runner.request.body.property1 = value1;
  runner.request.body.property2 = value2;
  await runner.run();
  console.warn('No tests available');
  expect(true).to.equal(true);
  // Get the output of the operation and store it in a local variable
  const operation_response = runner.response;
});
```
Debug Error Middleware
Debug an Error Middleware by following the structure below:
```typescript
describe('apitest_ErrorMiddleware', () => {
  const testEnvironment = new TestEnvironment();
  before(async () => {
    // This block will run automatically before all tests.
    // Alternatively, use beforeEach() to define what should automatically happen before each test.
    // This is an optional block.
  });
  after(async () => {
    // This block will run automatically after all tests.
    // Alternatively, use afterEach() to define what should automatically happen after each test.
    // This is an optional block.
    // Recommended: remove all instances that were created
    // await testEnvironment.cleanUp();
  });
  it('works', async () => {
    // Create the runner instance
    const runner = new errorMiddlewareRunners.apitest_ErrorMiddlewareRunner();
    // Assign an error object that will be passed to the error middleware
    runner.error = new cptest_SomeCustomError();
    // Execute the error middleware
    await runner.run();
    // Have some expectations against the response returned from the error middleware
    expect(runner.response.status).to.equal(500);
    expect(runner.response.body.code).to.equal('E19001');
    expect(runner.response.body.message).to.equal('An error occurred');
  });
});
```
Change Default Log Level
Either adjust the Project Configuration or the Solution-Specific Configuration in the Configuration Management with the following value:

```yaml
configmap:
  extraConfiguration:
    de.knowis.cp.ds.action.loglevel.defaultLevel: INFO
```

The log level can be changed here as needed to INFO, DEBUG, TRACE, ERROR or WARN.
Configure Different Log Levels
Prerequisites
1. Create a JSON file named `log-config.json` in your project's root directory.
2. Add an entry for `log-config.json` to the `.gitignore` file so it is not pushed to your repository.
3. Adjust your VS Code launch configuration to allow output display from `std`: open `.vscode/launch.json` and add `"outputCapture": "std"` to the configurations.
Supported Log Levels
The supported log levels are:
error
warn
info
debug
trace
Configure Log Levels using Module Names
Configure Solution-Framework Log Level
The example below configures the solution-framework to log at the `error` level. This is achieved by placing an entry in the `log-config.json` file with the key `"solution-framework"` and the desired log level, in this example `error`:

```json
{
  "solution-framework": "error"
}
```
Configure Project Implementation Files
The example below configures the log level of all files within the project's `src-impl` folder (including test files) to `debug`. This is achieved by placing an entry in the `log-config.json` file with your solution acronym as the key and the desired log level, in this example `debug`:

```json
{
  "ORDERS": "debug"
}
```
Configure using Specific Paths
In the example below:

- Every file under the path `/src-impl/api/apitest/operations` in your project is configured to log level `debug`.
- Test file `/src-impl/api/apitest/operations/addDate.test` is configured to log level `warn`.
- File `/src-impl/api/apitest/operations/addDate` is configured to log level `trace`.
- All SDK files under `/sdk/v1` are configured to log level `error`.
- All SDK files under `/sdk/v1/handler` are configured to log level `trace`.

```json
{
  "/src-impl/api/apitest/operations/*": "debug",
  "/src-impl/api/apitest/operations/addDate.test": "warn",
  "/src-impl/api/apitest/operations/addDate": "trace",
  "/sdk/v1/*": "error",
  "/sdk/v1/handler/*": "trace"
}
```