# React Native OCR Component

A React Native component for scanning National Identity cards with the device's camera. The package integrates OCR technology, extracting text data from scanned cards via API calls to a backend service.
## Features

- Opens the camera directly in your React Native application.
- Scans National Identity cards in real time.
- Calls a backend API for OCR processing.
- Returns parsed data for further use.
- Provides base64-encoded images of the front and back of the document, as well as the selfie.
- Supports liveness detection on Android devices.
## Installation

Install the package via npm:

```bash
npm install react-native-ocr-package
```

Additionally, install the required dependency:

```bash
npm install react-native-document-scanner-plugin
```
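Because the component opens the device camera, camera permissions must also be declared in your native projects. The exact setup depends on your app, but a typical configuration looks like the following (the usage-description string is a placeholder you should replace):

```xml
<!-- android/app/src/main/AndroidManifest.xml -->
<uses-permission android:name="android.permission.CAMERA" />

<!-- ios/<YourApp>/Info.plist -->
<key>NSCameraUsageDescription</key>
<string>This app uses the camera to scan identity documents.</string>
```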
## Usage

Here is how to use the component in your React Native application.

### Basic Example
```jsx
import React, { useState } from 'react';
import { View, Image } from 'react-native';
import { CameraComponent } from 'react-native-ocr-package';

const App = () => {
  const [documentImages, setDocumentImages] = useState(null);

  const handleOCRResult = ocrResult => {
    console.log('OCR Result:', ocrResult);
    // Access the base64 images from the response
    if (ocrResult.DocumentImages) {
      setDocumentImages(ocrResult.DocumentImages);
    }
  };

  return (
    <View style={{ flex: 1 }}>
      <CameraComponent
        ocrApiUrl="https://your-api-endpoint.com/"
        onScanComplete={handleOCRResult}
      />
      {/* Display the images if available */}
      {documentImages && (
        <View style={{ flexDirection: 'row' }}>
          <Image
            source={{ uri: `data:image/jpeg;base64,${documentImages.frontImageBase64}` }}
            style={{ width: 100, height: 100 }}
          />
          <Image
            source={{ uri: `data:image/jpeg;base64,${documentImages.backImageBase64}` }}
            style={{ width: 100, height: 100 }}
          />
          <Image
            source={{ uri: `data:image/jpeg;base64,${documentImages.selfieImageBase64}` }}
            style={{ width: 100, height: 100 }}
          />
        </View>
      )}
    </View>
  );
};

export default App;
```
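Beyond rendering, the base64 strings in `DocumentImages` can be decoded for storage or upload. A minimal sketch in plain JavaScript, assuming a `Buffer` implementation is available (Node.js has one built in; in React Native you would typically add the `buffer` npm polyfill):

```javascript
// Strip an optional data-URI prefix and return the raw byte length of a
// base64-encoded image. Useful as a sanity check before uploading or storing.
function base64ImageBytes(base64String) {
  // Remove a "data:image/jpeg;base64," style prefix if present.
  const raw = base64String.replace(/^data:image\/\w+;base64,/, '');
  return Buffer.from(raw, 'base64').length;
}

// Illustrative input; real DocumentImages strings will be much larger.
const sample = Buffer.from('hello').toString('base64'); // "aGVsbG8="
console.log(base64ImageBytes(sample)); // 5
```

The same `Buffer.from(raw, 'base64')` call yields the raw bytes themselves if you need to write the image to disk or attach it to a multipart upload.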
### Integrating Liveness Detection with Document Scanning

To prevent crashes when transitioning from liveness detection to document scanning, use the `VerificationWorkflow` utility:
```jsx
import React, { useState } from 'react';
import { View, Button, Text, Image } from 'react-native';
import { CameraComponent, LivenessDetection, VerificationWorkflow } from 'react-native-ocr-package';

const VerificationScreen = () => {
  const [currentStep, setCurrentStep] = useState('start'); // 'start', 'processing', 'document', 'complete'
  const [livenessResult, setLivenessResult] = useState(null);
  const [finalResult, setFinalResult] = useState(null);
  const [error, setError] = useState(null);

  const startVerification = () => {
    setCurrentStep('processing');
    setError(null);
    // Use the workflow utility to handle the transition
    VerificationWorkflow.startVerification({
      ocrApiUrl: 'https://your-api-endpoint.com/',
      onError: (err) => {
        setError(err.message);
        setCurrentStep('start');
      },
      onReadyForDocumentScan: (result) => {
        // Store the liveness result and move to the document scanning step
        setLivenessResult(result);
        setCurrentStep('document');
      }
    });
  };

  const handleDocumentScanComplete = (ocrResult) => {
    // Combine liveness and OCR results
    const combinedResult = VerificationWorkflow.combineResults(livenessResult, ocrResult);
    setFinalResult(combinedResult);
    setCurrentStep('complete');
  };

  // Render based on the current step
  return (
    <View style={{ flex: 1 }}>
      {currentStep === 'start' && (
        <View style={{ flex: 1, justifyContent: 'center', alignItems: 'center', padding: 20 }}>
          <Text style={{ fontSize: 24, marginBottom: 20 }}>Identity Verification</Text>
          {error && <Text style={{ color: 'red', marginBottom: 20 }}>{error}</Text>}
          <Button title="Start Verification" onPress={startVerification} />
        </View>
      )}
      {currentStep === 'processing' && (
        <View style={{ flex: 1, justifyContent: 'center', alignItems: 'center' }}>
          <Text>Processing liveness detection...</Text>
        </View>
      )}
      {currentStep === 'document' && (
        <CameraComponent
          ocrApiUrl="https://your-api-endpoint.com/"
          onScanComplete={handleDocumentScanComplete}
        />
      )}
      {currentStep === 'complete' && finalResult && (
        <View style={{ flex: 1, padding: 20 }}>
          <Text style={{ fontSize: 20, marginBottom: 20 }}>Verification Complete</Text>
          {/* Display the liveness selfie */}
          {finalResult.liveness?.selfieImageBase64 && (
            <View style={{ alignItems: 'center', marginBottom: 20 }}>
              <Text>Liveness Selfie:</Text>
              <Image
                source={{ uri: `data:image/jpeg;base64,${finalResult.liveness.selfieImageBase64}` }}
                style={{ width: 150, height: 150, marginTop: 10 }}
              />
            </View>
          )}
          {/* Display document images */}
          {finalResult.ocr?.DocumentImages && (
            <View>
              <Text>Document Images:</Text>
              <View style={{ flexDirection: 'row', justifyContent: 'space-between', marginTop: 10 }}>
                <Image
                  source={{ uri: `data:image/jpeg;base64,${finalResult.ocr.DocumentImages.frontImageBase64}` }}
                  style={{ width: 100, height: 100 }}
                />
                <Image
                  source={{ uri: `data:image/jpeg;base64,${finalResult.ocr.DocumentImages.backImageBase64}` }}
                  style={{ width: 100, height: 100 }}
                />
              </View>
            </View>
          )}
          <Button title="Start Over" onPress={() => {
            setCurrentStep('start');
            setLivenessResult(null);
            setFinalResult(null);
          }} />
        </View>
      )}
    </View>
  );
};

export default VerificationScreen;
```
## Props

The `CameraComponent` accepts the following props:

| Prop Name | Type | Required | Description |
| --- | --- | --- | --- |
| `ocrApiUrl` | `string` | Yes | The API endpoint to call for OCR processing of the scanned image. |
| `onScanComplete` | `func` | Yes | Callback function invoked with the scanned OCR result. |
## Sample Output

Here is an example of the JSON output returned by the API when the scan is successful:
```json
{
  "Code": 200,
  "Message": "Success",
  "Data": {
    "OCR": {
      "Code": 100,
      "Message": "Success",
      "CNIC_MATCHED": true,
      "DOI": "10.06.2014",
      "DOB": "12.03.2000",
      "DOE": "10.06.2024",
      "CNum": "12345-1234567-1",
      "Name": "ABC",
      "FName": "XYZ"
    },
    "Image_Comparison": {
      "Code": 100,
      "Message": "Selfie and CNIC Matched",
      "confidence_score": 100.0
    }
  },
  "DocumentImages": {
    "frontImageBase64": "base64_encoded_string_of_front_image...",
    "backImageBase64": "base64_encoded_string_of_back_image...",
    "selfieImageBase64": "base64_encoded_string_of_selfie_image..."
  }
}
```
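When consuming this response in an `onScanComplete` handler, it is worth checking the top-level status code before reading nested fields. A sketch of such a handler, with field names taken from the sample response above (this is not an exhaustive schema):

```javascript
// Extract commonly used fields from a successful OCR response.
// Returns null when the top-level Code is not 200.
function parseOcrResponse(response) {
  if (!response || response.Code !== 200) {
    return null;
  }
  const ocr = (response.Data && response.Data.OCR) || {};
  const images = response.DocumentImages || {};
  return {
    cnic: ocr.CNum,
    name: ocr.Name,
    fatherName: ocr.FName,
    dateOfBirth: ocr.DOB,
    matched: ocr.CNIC_MATCHED === true,
    frontImageBase64: images.frontImageBase64,
  };
}

// Abbreviated version of the sample response shown above.
const sample = {
  Code: 200,
  Data: { OCR: { CNum: '12345-1234567-1', Name: 'ABC', FName: 'XYZ', DOB: '12.03.2000', CNIC_MATCHED: true } },
  DocumentImages: { frontImageBase64: 'abc' },
};
console.log(parseOcrResponse(sample).cnic); // "12345-1234567-1"
```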
The `DocumentImages` field contains base64-encoded strings of the captured images, which can be used directly with React Native's `Image` component or sent to your backend for storage or further processing.
## Combined Verification Result

When using the `VerificationWorkflow` to combine liveness and document scanning, the result will have this structure:
```json
{
  "liveness": {
    "isLive": true,
    "selfieImageBase64": "base64_encoded_string_of_liveness_selfie...",
    "faceDetails": {
      "blinkDetected": true,
      "smileDetected": true,
      "faceAligned": true,
      "headEulerAngleX": 1.2,
      "headEulerAngleY": 0.5,
      "headEulerAngleZ": 0.3
    }
  },
  "ocr": {
    "Code": 200,
    "Message": "Success",
    "Data": {
      "OCR": {
        "Code": 100,
        "Message": "Success",
        "CNIC_MATCHED": true,
        "DOI": "10.06.2014",
        "DOB": "12.03.2000",
        "DOE": "10.06.2024",
        "CNum": "12345-1234567-1",
        "Name": "ABC",
        "FName": "XYZ"
      },
      "Image_Comparison": {
        "Code": 100,
        "Message": "Selfie and CNIC Matched",
        "confidence_score": 100.0
      }
    },
    "DocumentImages": {
      "frontImageBase64": "base64_encoded_string_of_front_image...",
      "backImageBase64": "base64_encoded_string_of_back_image...",
      "selfieImageBase64": "base64_encoded_string_of_selfie_image..."
    }
  },
  "combinedAt": "2023-07-15T12:34:56.789Z"
}
```
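A consumer of the combined result typically gates on both checks passing before treating the user as verified. A minimal sketch against the structure above (the pass/fail criteria shown here are illustrative assumptions, not part of the package):

```javascript
// Decide whether a combined verification result should be accepted.
// Criteria are illustrative: liveness passed, the OCR call succeeded,
// and the selfie/CNIC comparison reported a match.
function isVerificationAccepted(result) {
  const livenessOk = Boolean(result && result.liveness && result.liveness.isLive);
  const ocrOk = Boolean(
    result &&
    result.ocr &&
    result.ocr.Code === 200 &&
    result.ocr.Data &&
    result.ocr.Data.OCR &&
    result.ocr.Data.OCR.CNIC_MATCHED === true
  );
  return livenessOk && ocrOk;
}

// Abbreviated version of the combined result shown above.
const combined = {
  liveness: { isLive: true },
  ocr: { Code: 200, Data: { OCR: { CNIC_MATCHED: true } } },
  combinedAt: new Date().toISOString(),
};
console.log(isVerificationAccepted(combined)); // true
```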