
Knox SDK for Model Protection deployment

APIs for deployment

After the ML model is encrypted, the following APIs are used to deploy and execute it:

1. KnoxAiManager getInstance(context) - The 3rd party app gets an instance of KnoxAiManager.
2. void getKeyProvisioning(KeyProvisioningResultCallback cb) - Provisions the Device Encryption Key (DEK) from the server for KnoxAI ML Model Protection. KeyProvisioningResultCallback is an abstract class for handling the Knox key provisioning result.
3. KnoxAiSession createKnoxAiSession() - The 3rd party app creates a session to get a secure session handle before any ML-related operations begin.
4. int open(KfaOptions option) - Loads the encrypted ML model into the Knox framework using the given options.
5. int execute(DataBuffer[] inputs, DataBuffer[] outputs) - Executes the model using the given inputs and returns the results in the outputs array.
6. int close() - Closes and releases the instance.
7. int destroyKnoxAiSession(KnoxAiSession session) - Destroys the Knox session handle.
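Taken together, these calls follow a fixed order across one deployment cycle. The sketch below only strings them together; the callBack, options, and the input/output DataBuffer arrays are assumed to be prepared as shown in the sections that follow, and in a real app the session would be created from the callback's onFinished(), since getKeyProvisioning() reports its result asynchronously.

// Sketch of the overall call order; callBack, options, input, and output
// are assumed to be built as shown in the sections below.
KnoxAiManager knoxAIManager = KnoxAiManager.getInstance(getApplicationContext());
knoxAIManager.getKeyProvisioning(callBack);                  // 1. provision the DEK; result arrives in callBack
KnoxAiSession session = knoxAIManager.createKnoxAiSession(); // 2. secure session handle (in practice, create it after the callback reports SUCCESS)
int openStatus  = session.open(options).getValue();          // 3. load and decrypt the encrypted model
int execStatus  = session.execute(input, output).getValue(); // 4. run inference
int closeStatus = session.close().getValue();                // 5. release the loaded model
knoxAIManager.destroyKnoxAiSession(session);                 // 6. destroy the session handle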

Get instance of KnoxAiManager

Call this API first to get an instance of KnoxAiManager, which exposes the getKeyProvisioning(KeyProvisioningResultCallback cb), createKnoxAiSession(), and destroyKnoxAiSession() APIs.

            

private static KnoxAiManager knoxAIManager = null;

if (knoxAIManager == null) {
    knoxAIManager = KnoxAiManager.getInstance(getApplicationContext());
}

Get provisioning key

Call this API to provision the Device Encryption Key (DEK) from the server for Knox ML Model Protection. Implement the KeyProvisioningResultCallback abstract class to handle the Knox key provisioning result and pass its object to the call.

            

KeyProvisioningResultCallback callBack = new KeyProvisioningResultCallback() {
    @Override
    public void onFinished(KnoxAiManager.ErrorCodes errorCodes) {
        if (errorCodes == KnoxAiManager.ErrorCodes.SUCCESS) {
            Log.d(TAG, "Server Provisioning Success !!!");
        } else {
            // error handling code
        }
    }
};

try {
    knoxAIManager.getKeyProvisioning(callBack);
} catch (SecurityException e) {
    Log.d(TAG, "Security Failed !!!");
}

Create KnoxAiSession

Call this API after DEK provisioning is successful. The returned KnoxAiSession exposes the open(KfaOptions option), execute(DataBuffer[] inputs, DataBuffer[] outputs), and close() APIs.

         

KnoxAiSession session;
try {
    session = knoxAIManager.createKnoxAiSession();
} catch (SecurityException e) {
    session = null;
}

Load encrypted ML model

The open(KfaOptions option) API is used to load and decrypt the encrypted model. Its input parameter is a KfaOptions object, which must be filled in properly before the call.

Filling KfaOptions object

Following are the fields of the KfaOptions class:

int execType; - Execution type for the model (Float32, Float16, etc.)
int compUnit; - Computing unit (CPU, GPU, etc.)
int modelInputType; - Model input type (FD, Path, Buffer)
int mType; - Model type (TFLite, SNF, TensorFlow, etc.)
String model_name; - Name of the model
String model_file; - Model file path
String weights_file; - Weights file path
FileDescriptor fd; - FileDescriptor of the model data shared memory (used when modelInputType is FD)
long fd_StartOffSet; - Offset of the FD (used when modelInputType is FD)
ArrayList<String> outputNames; - Output layer names
ArrayList<String> inputNames; - Input layer names
byte[] model_package_buffer_ptr; - Pointer to the model package data (used when modelInputType is Buffer)
int model_package_buffer_len; - Length of the model package data (used when modelInputType is Buffer)
byte[] model_buffer_ptr; - Pointer to the model data buffer
int model_buffer_len; - Length of the model data

KfaOptions example

            

KfaOptions options = new KfaOptions();
options.setExecType(KnoxAiSession.ExecType.FLOAT32.getValue());
options.setCompUnit(KnoxAiSession.CompUnit.CPU.getValue());
options.setmType(KnoxAiSession.ModelType.TENSORFLOWLITE.getValue());
options.setModelInputType(KnoxAiSession.ModelInputType.FD);

// Input and output layer names of the model
ArrayList<String> ipNames = new ArrayList<String>();
ipNames.add("conv2d_6_input");
options.setInputNames(ipNames);

ArrayList<String> opNames = new ArrayList<String>();
opNames.add("Identity");
options.setOutputNames(opNames);

// Open the encrypted model package from assets. Because modelInputType is FD,
// the FileDescriptor, start offset, and length from aFD must also be set on the
// options (the fd and fd_StartOffSet fields listed above) before calling open().
AssetManager assetManager = mContext.getAssets();
AssetFileDescriptor aFD = null;
try {
    aFD = assetManager.openFd("ml_unencrypted_model.tflite.kaipkg");
} catch (IOException e) {
    e.printStackTrace();
}

Calling the open() API

            

int status = -1;
status = session.open(options).getValue();

When open(KfaOptions option) is called, it loads the model package buffer, decrypts the encrypted model according to the policies, and makes the model ready for execution.

Execute ML model

Call the execute(DataBuffer[] inputs, DataBuffer[] outputs) API to run the model; it takes the input DataBuffer array and returns the output in the output DataBuffer array.

Filling DataBuffer inputs for calling the execute() API

Following are the fields of the DataBuffer class:

float[] dataOriginal; - Input data
int[] shape; - Shape of the data (dimensions)
byte dataType; - Type of the data in dataOriginal
byte dataFormat; - Data format (NCHW, NHWC, etc.)
byte dataSource; - Source of the data (FD or SharedMemory)
SharedMemory dataShared; - Shared memory for the output data (used when dataSource is SharedMemory)
FileDescriptor filedesc; - FileDescriptor for the output data (used when dataSource is FD)

Filling DataBuffer and Calling execute() API Example

            

DataBuffer[] input = new DataBuffer[1];
DataBuffer[] output = new DataBuffer[1];

DataBuffer dB = new DataBuffer();
dB.setDataType((byte) 0);
dB.setDataFormat((byte) 1);
int[] shape = new int[]{1, 64, 64, 3};
dB.setShape(shape);
dB.setDataSource((byte) 2);

try {
    // indata is the input float[]; copy it into shared memory for the session
    SharedMemory sharedMemory = SharedMemory.create("data", indata.length * 4);
    ByteBuffer bBuffer = sharedMemory.mapReadWrite();
    byte[] bytes = DataBuffer.readFloatToBytes(indata);
    bBuffer.put(bytes);
    dB.setDataShared(sharedMemory);
    input[0] = dB;

    int status = -1;
    status = session.execute(input, output).getValue();
    sharedMemory.close();
} catch (ErrnoException e) {
    Log.e(TAG, "Failed creating Shared Memory : " + e);
    e.printStackTrace();
}
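The example above passes the input through SharedMemory (dataSource 2). For smaller inputs, the dataOriginal field can carry the data directly. The sketch below is assumption-heavy: it assumes a setDataOriginal(...) setter matching the getDataOriginal() accessor used in the output-handling code below, and reuses the value 0 that the output-handling code associates with in-memory data.

// Sketch only: setDataOriginal(...) and the dataSource value 0 are assumptions
// inferred from the DataBuffer field list and the output-handling code below.
DataBuffer[] input = new DataBuffer[1];
DataBuffer[] output = new DataBuffer[1];

DataBuffer dB = new DataBuffer();
dB.setDataType((byte) 0);                 // same type/format codes as the SharedMemory example
dB.setDataFormat((byte) 1);
dB.setShape(new int[]{1, 64, 64, 3});
dB.setDataSource((byte) 0);               // assumed: 0 = data carried in dataOriginal
dB.setDataOriginal(indata);               // assumed setter; indata is the input float[]
input[0] = dB;

int status = session.execute(input, output).getValue();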

Getting the inference output from the outputs DataBuffer

The output is received in the output DataBuffer array, from which the desired result needs to be extracted.

            

if (output != null && output.length > 0) {
    if (output[0].getDataSource() == 0) {
        // Output returned directly in the dataOriginal field
        float[] outputData = output[0].getDataOriginal();
        for (int i = 0; i < outputData.length; i++) {
            Log.i(TAG, "Output data :" + outputData[i]);
        }
    } else if (output[0].getDataSource() == 1) {
        // Output returned through a FileDescriptor
        FileDescriptor filedesc = output[0].getFileDesc();
        int[] outputShape = output[0].getShape();
        for (int iter = 0; iter < outputShape.length; iter++) {
            Log.d(TAG, "Shape: " + outputShape[iter]);
        }
        int size = 1;
        for (int i : outputShape) {
            size = size * i;
        }
        float[] prediction = new float[size];
        FileInputStream fileStream = null;
        try {
            fileStream = new FileInputStream(filedesc);
            byte bytes[] = new byte[size * 4];
            int readDataSize = fileStream.read(bytes);
            if (readDataSize != 0) {
                for (int iter = 0; iter < size; iter++) {
                    prediction[iter] = DataBuffer.readFloatFromBytes(bytes, (iter + 1) * 4);
                    Log.d(TAG, "Output data from FD: " + prediction[iter]);
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            if (fileStream != null) {
                try {
                    fileStream.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}

Close ML model

Close the session after execution is complete to ensure the session handle is released. The session must be opened again for the next execution cycle, which avoids any unintended execution. This API is followed by destroyKnoxAiSession().

            

int status = -1;
Log.i(TAG, "Close session");
status = session.close().getValue();

Destroy KnoxAiSession

Call this API at the end to destroy the session handle; a new session must be created for the next cycle.

            

if (knoxAIManager != null) {
    knoxAIManager.destroyKnoxAiSession(session);
}
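Since close() and destroyKnoxAiSession() together end one cycle, a common pattern (a sketch, not something the SDK mandates) is to run them in a finally block so the handle is always released, even if open() or execute() throws:

try {
    // open() and execute() calls from the previous sections
} finally {
    if (session != null) {
        session.close();                                 // release the loaded model
        if (knoxAIManager != null) {
            knoxAIManager.destroyKnoxAiSession(session); // destroy the session handle
        }
    }
}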

Proguard Rules for Application

The application should include a proguard-rules.pro file that keeps the Knox SDK API classes, so that these classes remain available without obfuscation.

Add the following line to the proguard-rules.pro file so that the KnoxV2 package classes are kept:

            

-keep class com.samsung.android.knox.ex.knoxAI.* { *; }

In the build.gradle file, reference the proguard-rules.pro file in the release and debug configurations:

            

buildTypes {
    release {
        signingConfig signingConfigs.knoxai
        debuggable false
        minifyEnabled true
        shrinkResources true
        proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
    }
    debug {
        signingConfig signingConfigs.knoxai
        debuggable true
        minifyEnabled false
        shrinkResources false
        proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
    }
}

The application should also configure build.gradle so that Gradle does not compress the model packages while building the application.

            

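A minimal sketch of such a rule, assuming the encrypted model packages keep the .kaipkg extension used in the earlier example (adjust the extension to match your own model files):

android {
    aaptOptions {
        // Keep encrypted model packages byte-for-byte intact in the APK
        noCompress "kaipkg"
    }
}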
