
Knox SDK for Model Protection deployment

APIs for deployment

After encrypting the ML model, use the following APIs to deploy and execute it:

S.No API Description
1. KnoxAiManager getInstance(context) The third-party app gets the KnoxAiManager instance.
2. void getKeyProvisioning(KeyProvisioningResultCallback cb) Provisions the Device Encryption Key (DEK) from the server for Knox AI ML Model Protection. KeyProvisioningResultCallback is an abstract class for handling the Knox key provisioning result.
3. KnoxAiSession createKnoxAiSession() The third-party app creates a session to get a secure session handle before any ML-related operations begin.
4. int open(KfaOptions option) Loads the encrypted ML model into the Knox Framework using the given options.
5. int execute(DataBuffer[] inputs, DataBuffer[] outputs) Executes the model using the given inputs and writes the results into the output buffers.
6. int close() Closes and releases the instance.
7. int destroyKnoxAiSession(KnoxAiSession session) Destroys the Knox session handle instance.
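
Taken together, a typical call sequence looks like the following minimal sketch (error handling omitted; the callback, options, and buffer objects are assumed to be prepared as shown in the sections below):

KnoxAiManager manager = KnoxAiManager.getInstance(context);
manager.getKeyProvisioning(callback);     // 1. provision the DEK; wait for the callback to report SUCCESS
KnoxAiSession session = manager.createKnoxAiSession(); // 2. create a secure session
session.open(options);                    // 3. load and decrypt the encrypted model
session.execute(inputs, outputs);         // 4. run inference
session.close();                          // 5. release the loaded model
manager.destroyKnoxAiSession(session);    // 6. destroy the session handle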

Get instance of KnoxAiManager

Call this API first to get an instance of KnoxAiManager, which exposes the getKeyProvisioning(KeyProvisioningResultCallback cb), createKnoxAiSession(), and destroyKnoxAiSession() APIs.


private static KnoxAiManager knoxAIManager = null;
if (knoxAIManager == null) {
    knoxAIManager = KnoxAiManager.getInstance(getApplicationContext());
}

Get provisioning key

Call this API to provision the Device Encryption Key (DEK) from the server for Knox ML Model Protection. Implement the KeyProvisioningResultCallback abstract class to handle the Knox key provisioning result, and pass its object.

KeyProvisioningResultCallback callBack = new KeyProvisioningResultCallback() {
    @Override
    public void onFinished(KnoxAiManager.ErrorCodes errorCodes) {
        if (errorCodes == KnoxAiManager.ErrorCodes.SUCCESS) {
            Log.d(TAG, "Server Provisioning Success !!!");
        } else {
             // error handling code
        }
    }
};

try {
    knoxAIManager.getKeyProvisioning(callBack);
} catch (SecurityException e) {
    Log.d(TAG, "Security Failed !!!");
}

Create KnoxAiSession

Call this API after DEK provisioning succeeds. The session exposes the open(KfaOptions option), execute(DataBuffer[] inputs, DataBuffer[] outputs), and close() APIs.

KnoxAiSession session;
try {
    session = knoxAIManager.createKnoxAiSession();
} catch (SecurityException e) {
    session = null;
}

Load encrypted ML model

The open(KfaOptions options) API is used to load and decrypt the encrypted model. Its input parameter is a KfaOptions object, which must be filled in properly before the call.

Filling KfaOptions object

Following are the fields of the KfaOptions class:

Field Name Use
int execType; Execution type for the model (Float32, Float16, and so on)
int compUnit; Computing unit (CPU, GPU, NPU, DSP, and so on)
int modelInputType; Model input type (FD, Path, Buffer)
int modelType; Model type (Tflite, Tensorflow, ONNX, and so on)
String model_name; Name of the model
String model_file; Model file path
String weights_file; Weights file path
FileDescriptor fd; FileDescriptor of the model data shared memory [when modelInputType is FD]
long fd_StartOffSet; Offset of the FD [when modelInputType is FD]
ArrayList<String> outputNames; Output layer names
ArrayList<String> inputNames; Input layer names
byte[] model_package_buffer_ptr; Pointer to the model package data [when modelInputType is Buffer]
int model_package_buffer_len; Length of the model package data [when modelInputType is Buffer]
byte[] model_buffer_ptr; Pointer to the model data buffer
int model_buffer_len; Length of the model data

Supported model types

Following are the supported model types:

Model Type Use
Tflite Can be used independently and with ONNX conversion as well
Tensorflow Can be used independently and with ONNX conversion as well
Caffe Can be used independently and with ONNX conversion as well
Keras Can be used with ONNX conversion
Pytorch Can be used with ONNX conversion
CoreML Can be used with ONNX conversion
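
For example, the model type is selected on KfaOptions through its setter; the enum values below are the ones used elsewhere in this guide:

// Native TFLite model
options.setmType(KnoxAiSession.ModelType.TENSORFLOWLITE.getValue());

// Model converted to ONNX (for example, from Keras, PyTorch, or CoreML)
options.setmType(KnoxAiSession.ModelType.ONNX.getValue());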

KfaOptions example

KfaOptions options = new KfaOptions();
options.setExecType(KnoxAiSession.ExecType.FLOAT32.getValue());
options.setCompUnit(KnoxAiSession.CompUnit.CPU.getValue());
options.setmType(KnoxAiSession.ModelType.TENSORFLOWLITE.getValue());
options.setModelInputType(KnoxAiSession.ModelInputType.FD);

// Names of the model's input and output layers
ArrayList<String> ipNames = new ArrayList<>();
ipNames.add("conv2d_6_input");
options.setInputNames(ipNames);
ArrayList<String> opNames = new ArrayList<>();
opNames.add("Identity");
options.setOutputNames(opNames);

// Open the packaged model from the app's assets
AssetManager assetManager = mContext.getAssets();
AssetFileDescriptor aFD = null;
try {
    aFD = assetManager.openFd("ml_unencrypted_model.tflite.kaipkg");
} catch (IOException e) {
    e.printStackTrace();
}
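
The example above opens the packaged model as an AssetFileDescriptor but does not show attaching it to the options. Assuming KfaOptions exposes setters matching its fd and fd_StartOffSet fields (hypothetical names, not confirmed by this guide), the wiring might look like:

// Hypothetical setters inferred from the KfaOptions field list above
options.setFd(aFD.getFileDescriptor());          // FileDescriptor of the model data
options.setFd_StartOffSet(aFD.getStartOffset()); // offset of the FD within the asset file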

Passing converted ONNX models

KfaOptions options = new KfaOptions();
options.setExecType(KnoxAiSession.ExecType.FLOAT32.getValue());
options.setCompUnit(KnoxAiSession.CompUnit.CPU.getValue());
options.setmType(KnoxAiSession.ModelType.ONNX.getValue());

Calling open() API

int status = -1;
status = session.open(options).getValue();

When open(KfaOptions option) is called, it loads the model package buffer, decrypts the encrypted model according to the policies, and makes the model ready for execution.

Execute ML model

Call the execute() API to run the model. It takes the input DataBuffer array and returns the results in the output DataBuffer array.

Filling DataBuffer inputs for calling execute API

Following are the fields of the DataBuffer class:

Field Name Use
float[] dataOriginal; Input data
int[] shape; Shape of the data (dimensions)
byte dataType; Type of the data in dataOriginal (Float32, Float16, Byte, INT64, String, Sequence_map, INT32)
byte dataFormat; Data format (NCHW, NHWC, and so on)
byte dataSource; Source of the data (Fd or SharedMemory)
SharedMemory dataShared; Shared memory for the output data [when dataSource is SharedMemory]
FileDescriptor filedesc; FileDescriptor for the output data [when dataSource is Fd]

Different Input Data Types Supported

The following table represents the different input data types supported. To set a particular data type, the user can use an enumerated value.

Data Type Enumerated Value Use Supported for
Float32 FLOAT32(0) When input data type is float All model types
Float16 FLOAT16(1) When input data type is float16 ONNX
String STRING(4) When input data type is string ONNX
Int32 INT32(6) When input data type is int ONNX
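
For example, to declare float input data on a DataBuffer, set the enumerated value from the table; this matches the byte cast used in the execute() example later in this section:

DataBuffer inputBuffer = new DataBuffer();
inputBuffer.setDataType((byte) 0); // FLOAT32(0): input data is float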

Different Output Data Types Supported

The following table represents the different output data types supported. To set a particular data type, the user can use an enumerated value.

Data Type Enumerated Value Use Supported for
Float32 FLOAT32(0) When output data type is float All model types
Int64 INT64(3) When output data type is long ONNX
String STRING(4) When output data type is string ONNX
Sequence map SEQUENCE_MAP(5) When output data type is a sequence of maps ONNX
Int32 INT32(6) When output data type is int ONNX

Filling DataBuffer and Calling execute() API Example

Models with single output layer

DataBuffer[] input = new DataBuffer[1];
DataBuffer[] output = new DataBuffer[1];
/* Set the Input DataBuffer options */
status = session.execute(input, output).getValue();

Models with multiple output layers

DataBuffer[] input = new DataBuffer[1];
DataBuffer[] output = new DataBuffer[output_layers.length];
/* Set the Input DataBuffer options */
status = session.execute(input, output).getValue();

The following complete example fills the input DataBuffer through shared memory and then calls execute():

DataBuffer[] input = new DataBuffer[1];
DataBuffer[] output = new DataBuffer[1];
DataBuffer dB = new DataBuffer();
dB.setDataType((byte) 0);   // FLOAT32(0): input data is float
dB.setDataFormat((byte) 1); // data format enumerated value (the shape below is in NHWC order)
int[] shape = new int[]{1, 64, 64, 3};
dB.setShape(shape);
dB.setDataSource((byte) 2); // use shared memory as the data source
try {
    // indata is the float[] input tensor; each float occupies 4 bytes
    SharedMemory sharedMemory = SharedMemory.create("data", indata.length * 4);
    ByteBuffer bBuffer = sharedMemory.mapReadWrite();
    byte[] bytes = DataBuffer.readFloatToBytes(indata);
    bBuffer.put(bytes);
    dB.setDataShared(sharedMemory);
    input[0] = dB;
    int status = -1;
    status = session.execute(input, output).getValue();
    sharedMemory.close();
} catch (ErrnoException e) {
    Log.e(TAG, "Failed creating Shared Memory : " + e);
    e.printStackTrace();
}

Inferencing Output from outputs DataBuffer

The output is received in the output DataBuffer array, from which the desired result is extracted.

if (output != null && output.length > 0) {
    if (output[0].getDataSource() == 0) { // Output data source is a float buffer
        float[] outputData = output[0].getDataOriginal();
        for (int i = 0; i < outputData.length; i++) {
            prediction[i] = outputData[i];
        }
    } else if (output[0].getDataSource() == 1) { // Output data source is an FD

        /* Read the output data from the file descriptor into bytes */
        byte[] bytes = new byte[size * 4];
        int readDataSize = fileStream.read(bytes);
        if (readDataSize != 0) {
            for (int iter = 0; iter < size; iter++) {
                prediction[iter] = DataBuffer.readFloatFromBytes(bytes, (iter + 1) * 4);
            }
        }
    }
}

Getting the prediction result

float confidence = -1f;
int label = 0;
for (int i = 0; i < prediction.length; i++) {
    if (prediction[i] > confidence) {
        confidence = prediction[i];
        label = i;
    }
}
return labelList.get(label) + "\n" + "confidence: " + Float.toString(prediction[label]);

The following complete example reads the output for both data sources and then extracts the prediction:

if (output != null && output.length > 0) {
    if (output[0].getDataSource() == 0) {
        float[] outputData = output[0].getDataOriginal();
        for (int i = 0; i < outputData.length; i++) {
            Log.i(TAG, "Output data: " + outputData[i]);
        }
    } else if (output[0].getDataSource() == 1) {
        FileDescriptor filedesc = output[0].getFileDesc();
        int[] outputShape = output[0].getShape();
        for (int iter = 0; iter < outputShape.length; iter++) {
            Log.d(TAG, "Shape: " + outputShape[iter]);
        }
        int size = 1;
        for (int i : outputShape) {
            size = size * i;
        }
        float[] prediction = new float[size];
        FileInputStream fileStream = null;
        try {
            fileStream = new FileInputStream(filedesc);
            byte[] bytes = new byte[size * 4];
            int readDataSize = fileStream.read(bytes);
            if (readDataSize != 0) {
                for (int iter = 0; iter < size; iter++) {
                    prediction[iter] = DataBuffer.readFloatFromBytes(bytes, (iter + 1) * 4);
                    Log.d(TAG, "Output data from FD: " + prediction[iter]);
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            if (fileStream != null) {
                try {
                    fileStream.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}

Close ML model

Close the session after execution is complete to ensure the session handle is released. The session must be opened again for the next execution; this avoids any unintended execution. This API is followed by destroyKnoxAiSession().

int status = -1;
Log.i(TAG, "Close session");
status = session.close().getValue();

Destroy KnoxAiSession

Call this API at the end to destroy the session handle; a new session must be created for the next cycle.

if (knoxAIManager != null) {
    knoxAIManager.destroyKnoxAiSession(session);
}

Proguard Rules for Application

The application should include a proguard-rules.pro file that keeps the Knox SDK API classes, ensuring these classes remain available without obfuscation.

Add the following lines to the proguard-rules.pro file so the KnoxV2 package classes are kept:

-keep class com.samsung.android.knox.ex.knoxAI.* { *; }
-keep class com.samsung.android.knox.license.KnoxEnterpriseLicenseManager {*;}

The application should also add a setting to the build.gradle file to ensure Gradle does not compress the models while building the application.

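A minimal sketch, assuming the packaged model keeps the .kaipkg extension used in the example above, is to exclude that extension from AAPT compression:

android {
    aaptOptions {
        noCompress "kaipkg" // do not compress packaged model files
    }
}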
