Biometric approach to user identification

Rapid development in IT, DLT, and AI is prompting the biometrics industry to innovate constantly and make the most of market demand. According to the latest reports, the global biometrics market is forecast to reach between $82.8 billion and nearly $100 billion by 2027, growing at a >19.3% Compound Annual Growth Rate (CAGR) from an estimated $24.1 billion in 2020. According to these reports, the multimodal biometric systems segment is projected to increase in revenue at a significant CAGR during the forecast period.

In terms of authentication type, voice recognition is expected to see significant growth due to consumer demand for safer identity mechanisms. Facial recognition is also poised for growth, boosted by the launch of Apple’s Face ID system.

In 2020, the global market for mobile biometrics was estimated at $18 billion, and it is projected to reach a revised size of $79.8 billion by 2027, growing at a CAGR of 23.7% over the analysis period 2020–2027. Growth in the scanner segment is readjusted to a revised 20.1% CAGR for the next seven-year period.

Furthermore, the post-COVID-19 global digital identity verification market is forecast to grow from $7.6 billion in 2020 to $15.8 billion by 2025, at a CAGR of 15.6%.

The ability to privately secure user authentication through biometrics has been the goal of many cryptographic researchers. For the last two decades, cryptographers have concentrated their efforts on solving the problem of biometric protection against malicious activities of the verifier. Solutions like BioHashing, Biometric Cryptosystems, and cancelable biometrics were all researched and proven to be inefficient or insecure for a hypothetical user (G. Davida et al., 1998; N. Ratha, J. Connell & R.M. Bolle, 2001, 2002; A.T.B Jin, D.N.C Ling & A. Goh, 2004; A. Kong, 2006; A.B.J. Teoh, Y.W. Kuan & S. Lee, 2008; C. Rathgeb & A. Uhl, 2011; M.A Syarif, et al., 2014; B.J. Jisha Nair & S. Ranjitha Kumari, 2015).

Until not so long ago, biometric identification methods carried a heavy risk to personal privacy. Biometric data is considered to be very sensitive, as it can be uniquely associated with a human being. Passwords are not considered PII (Personally Identifiable Information), as they can be changed and are not directly associated with any person. The main risk of biometric matching in the past was that it required the biometric data to be exposed at some point during the process.

Humanode bio-authorization overview

The privacy and security of the biometric data have been among the most critical aspects to take into account when deciding on a technology to use in Humanode. Biometric registration and authentication are carried out through a novel method based on cryptographically secure neural networks for the private classification of images of users' faces so that we can:

  • Guarantee the image's privacy, performing all operations without the user's facial biometrics having to leave the device.

  • Obtain a certificate or proof that the operations are carried out correctly, without malicious manipulation.

  • Have resistance to different attacks, such as Sybil attacks and replay attacks.

  • Carry out all registration and authentication operations without the need for a central entity or authority that handles the issuance and registration of users' cryptographic keys.

  • Compare feature vectors in a cryptographically secure way each time the user wants to authenticate.

Let's now break down the different technologies we use to register and authenticate users while guaranteeing privacy in a decentralized environment.

Neural networks are traditionally used to identify images. A neural network is a machine learning technique that consists of a series of so-called nodes structured in layers. These nodes, or neurons, are mathematical functions that perform a specific operation according to the layer they belong to.

For example, the convolutional layer filters the information to determine the similarity between the portion of the original image covered by a filter and the filter itself. The activation layer then determines whether the filter pattern defined in the convolutional layer is present at a particular position in the image. There is also a layer called max-pooling that reduces the data to make it easier to handle.

When the user logs into the system for the first time, the neural network gives us a unique feature vector that identifies the user. Once this vector is registered, we can store it for future comparisons when the user wishes to authenticate.

The main objective of the biometric registration and authentication system is to protect the images of users throughout the whole process and across the different layers of the neural network. The operations must be carried out effectively and efficiently, preventing unauthorized access to the data from the moment it is obtained on the user's device through its processing in the neural network and its registration in the system.

A malicious user gaining access to the neural network should not be able to obtain any sensitive information. This is why Humanode's biometric system architecture is designed to run neural networks locally on the user's device and only send the proof that all the neural network layers were executed. The user will also send the neural network's output in the form of an encrypted feature vector.

Convolutional Neural Network

Often referred to as CNNs or ConvNets, Convolutional Neural Networks specialize in processing data that is grid-like in topology, such as images.

In a digital image, each pixel holds a numeric value that denotes its brightness and color. The image consists of a series of pixels arranged in a grid-like format.

Figure 6. Representation of image as a grid of pixels (Source)

The human brain processes enormous amounts of information as soon as it sees an image. Each neuron works in its own receptive field, interconnected with other neurons so that the entire visual field is covered.

In the same way that each neuron in the biological vision system responds to stimuli only in its receptive field, each neuron in a CNN also processes information only within its receptive field. With a CNN, one can enable computers to sense simpler patterns (lines, curves, etc.) at the beginning and more complex patterns (faces, objects, etc.) as they progress.

There are four main types of layers in a CNN: convolutional layers, pooling layers, fully connected layers, and activation layers.

Figure 7. Architecture of a CNN (Source)

Convolution Layer

The convolution layer carries out the bulk of a CNN's computation.

In this layer, we perform a dot product between two matrices: one contains the set of learnable parameters, known as the kernel, and the other contains the restricted portion of the receptive field.

In the case of an image composed of three (RGB) channels, the kernel height and width will be smaller than the image, but the depth will encompass all three channels.

When the forward pass is made, the kernel slides across the height and width of the image, producing a representation of each receptive region. The result is a two-dimensional activation map that records the kernel's response at each spatial position of the image. The stride is the step size with which the kernel slides. If we have an input of size W x W x D, Dout kernels of size F, a stride S, and padding P, the size of the output volume can be calculated as follows:

W_{out} = \frac{W - F + 2P}{S} + 1

This will yield an output volume of size Wout x Wout x Dout.
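
A quick sketch of this output-size calculation (a hypothetical helper, not taken from any Humanode code):

```python
def conv_output_size(W: int, F: int, S: int, P: int) -> int:
    """Spatial output size for input width W, kernel size F, stride S, padding P."""
    return (W - F + 2 * P) // S + 1

# Example: a 224x224 input with a 7x7 kernel, stride 2, and padding 3 gives 112x112.
assert conv_output_size(224, 7, 2, 3) == 112
```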

Figure 8. Convolution Operation (Source: Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville)

Pooling Layer

In the pooling layer, summary statistics derived from nearby outputs replace certain outputs of the network. As a result, the size of the representation is reduced, which decreases the amount of computation and the number of weights. The pooling operation is applied to every slice of the representation in turn.

Besides the average of a rectangular neighborhood, there are several other pooling functions, such as the L2 norm of the rectangular neighborhood and a weighted average based on the distance to the central pixel. Max pooling, however, is the most commonly used: it reports the maximum output within the neighborhood.

Figure 9. Example of Max-Pooling Operation

The size of the output volume can be determined by the following formula if we have an activation map with dimensions W x W x D, a pooling kernel with dimensions F, and a stride S:

W_{out} = \frac{W - F}{S} + 1

This generates an output volume of Wout x Wout x D.

The translation invariance of pooling makes it possible to recognize objects regardless of where they appear in the frame.
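
A minimal sketch of 2x2 max pooling with stride 2 on a single activation slice, consistent with the output-size formula above (illustrative code, not from the Humanode implementation):

```python
import numpy as np

def max_pool2d(x: np.ndarray, F: int = 2, S: int = 2) -> np.ndarray:
    """Max pooling over FxF windows with stride S on a square 2D slice."""
    W = x.shape[0]
    out = (W - F) // S + 1
    pooled = np.empty((out, out), dtype=x.dtype)
    for i in range(out):
        for j in range(out):
            pooled[i, j] = x[i * S:i * S + F, j * S:j * S + F].max()
    return pooled

x = np.array([[1, 3, 2, 4],
              [5, 6, 1, 0],
              [7, 2, 9, 8],
              [3, 4, 6, 5]])
print(max_pool2d(x))   # [[6 4]
                       #  [7 9]]
```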

Fully Connected Layer

As with regular FCNNs, neurons in this layer are fully connected to the neurons in the preceding and following layers. Its output can therefore be computed as usual by a matrix multiplication followed by a bias offset. This layer maps the representation between the input and the output.

Activation Layers

Non-linear layers are often placed directly after the convolutional layer to introduce non-linearity into the activation map, since convolution is a linear operation while the patterns found in images are non-linear.

1. Sigmoid

The sigmoid nonlinearity has the mathematical form σ(κ) = 1/(1+e^(−κ)). This formula takes a real-valued number and "squashes" it between 0 and 1. However, the gradient of the sigmoid is almost zero when the activation is at either tail. In backpropagation, if the local gradient becomes very small, it effectively "kills" the gradient. Furthermore, since the sigmoid output is always positive, the gradients on the weights will be either all positive or all negative, resulting in a zig-zag trend in gradient updates for the weights.

2. Tanh

Tanh squashes a real-valued number to the range between -1 and 1. Like the sigmoid, the tanh activation saturates, but unlike the sigmoid its output is zero-centered.

3. ReLU

In the last few years, Rectified Linear Units (ReLUs) have become very popular. The ReLU computes the function ƒ(κ) = max(0, κ). In other words, the activation is simply thresholded at zero. With ReLU, convergence has been reported to be about six times faster than with sigmoid and tanh.

The disadvantage of ReLU is that it can be fragile during training. A large gradient update can push a neuron into a state in which it is never updated again. This can be addressed by setting an appropriate learning rate.
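
For reference, a small sketch of the three activations discussed above (plain NumPy, illustrative only):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))     # squashes to (0, 1), saturates at the tails

def tanh(x):
    return np.tanh(x)                   # squashes to (-1, 1), zero-centered

def relu(x):
    return np.maximum(0.0, x)           # thresholds at zero, no saturation for x > 0

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(sigmoid(x), tanh(x), relu(x), sep="\n")
```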

The Humanode facial recognition system uses a modified ResNet architecture for facial feature extraction and cosine similarity for matching.

Cosine Similarity for feature vector matching

Cosine Similarity is a measurement that quantifies the similarity between two or more vectors. It is measured by the cosine of the angle between vectors and determines whether two vectors are pointing in roughly the same direction. The vectors are typically non-zero and are within an inner product space.

Cosine similarity is defined as the dot product of the vectors divided by the product of the Euclidean norms (magnitudes) of each vector.

In general, cosine similarity is constrained to the range between -1 and 1; for vectors with non-negative components it lies between 0 and 1. The similarity measurement is the cosine of the angle between the two non-zero vectors A and B.

Assume the angle between the two vectors is 90 degrees. The cosine similarity will be zero in that case. This indicates that the two vectors are orthogonal or perpendicular to each other. The angle between the two vectors A and B decreases as the cosine similarity measurement approaches 1. The image below illustrates this more clearly.

Figure 10. Two vectors with 96% similarity based on the cosine of the angle between the vectors.

Figure 11. Two vectors with 34% similarity based on the cosine of the angle between the vectors.

Humanode uses cosine similarity in the facial feature vector matching stage.
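
A minimal sketch of this matching step; the vectors and the 0.9 threshold below are purely illustrative assumptions, not Humanode's actual parameters:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

enrolled = np.array([0.12, 0.80, 0.35, 0.44])   # stored feature vector (hypothetical)
probe    = np.array([0.10, 0.78, 0.40, 0.42])   # vector from a new authentication attempt

score = cosine_similarity(enrolled, probe)
print(f"similarity = {score:.3f}, match = {score > 0.9}")   # 0.9 is a hypothetical threshold
```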

Active and Passive Liveness detection

Enterprises use face recognition for onboarding, validating, and approving customers due to its reliability and ease of use. The demand for liveness detection is growing rapidly. Rather than matching facial features, liveness detection identifies presentation attacks such as photo or video spoofing, deepfakes, and 3D masks or models.

This makes it much harder for an adversary to spoof an identity. Facial recognition determines whether the person is unique and the same, whereas liveness detection determines whether the person is a living human being. Liveness detection confirms the presence of a user's identification credentials and that the user is physically present, whether on a mobile phone, a computer, a tablet, or any other camera-enabled device.

There are two methods in facial liveness detection: active and passive.

The active liveness detection method asks the user to do something to confirm that they are a live person. A user is normally asked to change their head position, nod, blink, or follow a mark on the device's screen with their eyes. In spite of this, fraudsters can fool the active method using a so-called presentation attack (the type of attack that PAD, Presentation Attack Detection, aims to catch). Scammers can use various gadgets or "artifacts" to fool the system, some of which are remarkably low-tech.

The Humanode active liveness detection model asks the user to turn their face left or right, blink, or show emotions such as happiness, anger, or surprise, and determines whether the user is real or fake based on the result.

With passive liveness detection the user is not asked to do anything. This provides end users with a modernized and convenient experience. It is an excellent method for determining whether the user is present without any specific movement or gesture. Passive methods use a single image, which is examined for an array of multiple characteristics to determine if a live person is present.

Humanode passive liveness detection model determines if a live person is present based on texture and local shape analysis, distortion analysis and edge analysis:

  • Texture and local shape analysis: analyzes the input image from a texture analysis point of view via image quality assessment, characterization of printing artifacts, and differences in light reflection.

  • Distortion analysis: analyzes the input image using an IDA (image distortion analysis) feature vector that consists of four different features: specular reflection, blurriness, chromatic moment, and color diversity.

  • Edge analysis: analyzes the edges of the input to find out whether the expected edge components are present or not.

Figure 12. Analyses types in liveness detection

While the active liveness detection process is going on, passive liveness detection is performed in the background.

By combining the advantages of active and passive liveness detection approaches, we made our liveness detection system more secure.

Merits and demerits of biometric identification

The use of biometrics, the science of analyzing physical or behavioral characteristics unique to each individual to recognize their identity, has many benefits. However, there are some risks associated with biometric authentication, which are as follows.

Table 2. Merits and demerits of biometric identification

Merits

  • High level of security and accuracy in contrast to passwords as biometric data cannot be forgotten.

  • Simplicity and convenience for the user is a significant factor in the growing popularity of biometric authentication.

  • Higher level of authenticity for users prone to weak passwords that may be common to multiple users or easily shared.

  • Affordability as biometric authentication is now possible in a wide range of common devices.

  • Flexibility as users have their own security credentials with them so they do not need to bother memorizing a complex password.

  • Biometrics is widely trusted: reports from 2021 claim that younger generations trust biometric solutions more than other methods.

  • Biometric solutions are time-saving.

Demerits

  • Sometimes requires integration and/or additional hardware.

  • Delay, as some biometric recognition methods may take more than the accepted time.

  • Physical disability, as some people are not fortunate enough to be able to participate in the enrollment process.

  • The need to trust your biometric provider that data is secure and private.

Cryptobiometric search and matching operations

When the user registers in the system, the private neural network is executed and the feature vector is extracted from the user's face for the first time. It is essential to store this vector safely so it can be evaluated the subsequent times the user wants to authenticate in the system. But this storage must be encrypted, and to compare a new vector with the stored one we cannot decrypt the data. For this, there is a class of encryption schemes called homomorphic encryption.

Homomorphic encryption is simply an encryption algorithm with the additional property that certain operations are preserved by the encryption.

In mathematics, we speak of preserving an operation when we have an operation and a function between two spaces. The function from one space to the other is said to preserve the operation if applying the operation before or after the function yields the same result.

Formally, we say that f from a space A to a space B is homomorphic if, for any two elements $a_{1},a_{2}\in A$, we have:

f(a_{1} + a_{2}) = f(a_{1}) + f(a_{2})

This section will discuss a method used in neural networks to evaluate the similarity between two feature vectors. Then, we will define the homomorphic encryption method that will allow us to store the encrypted feature vector and perform the similarity operation without decrypting the vector.

Cosine similarity encryption

As mentioned above, one of the most efficient and natural ways to find the similarity between two feature vectors in neural networks is cosine similarity. Let $a=(a_{0},\ldots,a_{n})$ and $b=(b_{0},\ldots,b_{n})$ be two vectors in $R^{n}$; the cosine similarity between a and b is defined by the equation

cos(a,b)=\frac{a\cdot b}{\left\|a\right\|\,\left\|b\right\|} \quad (1)

where $\left\|a\right\|$ is the norm of the vector a.

From (1), if we calculate the inner product between two vectors, we can directly determine whether they are similar. In simple terms, the cosine of the angle between two vectors tells us whether they point in the same direction.

If in addition the vectors are normalized, then it is evident that:

cos(a,b)=\frac{a\cdot b}{\left\|a\right\|\,\left\|b\right\|} = a\cdot b=\sum_{i=1}^{n} a_{i}b_{i}

In the cryptobiometric authentication system, we must define an encryption scheme that allows us to calculate the internal product between two vectors, which will give us the similarity between them. This calculation will be carried out on the encrypted vectors without the need to decrypt them.

It is natural to look for a homomorphic encryption scheme where the calculations to determine similarity are performed in the encrypted space.

A traditional encryption scheme, which only protects the data in transit, would require handling the private keys with which the user encrypted the data, decrypting the vectors, and then computing the similarity on cleartext data. From a decentralized perspective, this traditional approach has a flaw: users' private keys end up in an environment where peers are by nature untrusted, and there is no trusted third party to handle the keys securely.

Homomorphic Encryption

There are different proposals for encryption schemes that preserve operations in a homomorphic manner through the encryption function. In particular, one of the most straightforward and most efficient is encryption based on learning with errors (LWE). Let's see in this section the mathematical preliminaries of this cipher and the algorithms that compose it, namely:

  • Key generation

  • Encryption

  • Decryption

  • Homomorphic operations.

  • Lattices

Lattices

In group theory, a lattice in $R^{n}$ is a discrete subgroup of $R^{n}$ that spans the vector space $R^{n}$ and is generated by integer linear combinations of a basis.

Formally, let $n \in N$, let $B \in R^{n\times n}$ be a matrix, and let $b_{i}\in R^{n}$ be the $i$-th row of B with $1\leqslant i \leqslant n$. Then the set of integer linear combinations

L(B)=\left\{\sum_{i=1}^{n}m_{i}b_{i} \;:\; m_{i}\in Z,\ 1\leqslant i \leqslant n\right\}

is a subgroup of $R^{n}$. If the $b_{i}$ are linearly independent, we say that $L(B)$ is a lattice in $R^{n}$ of dimension n.

Lattice-based ciphers are among the leading candidates for post-quantum cryptographic algorithms, i.e., encryption schemes that can resist attacks even if an efficient quantum computer is ever built. In 1994, Shor theoretically demonstrated that a quantum computer could run a protocol that breaks in polynomial time the problems on which most public-key ciphers, such as RSA, Diffie-Hellman, and elliptic-curve cryptosystems, are based.

The computational hardness of the underlying lattice problems is what gives lattice-based cryptosystems their presumed quantum resistance.

Furthermore, the LWE-based cryptosystem can be fully homomorphic: it possesses homomorphism in both addition and multiplication, which is very useful for computing the inner product and, consequently, the cosine similarity.

Construction of ring-LWE scheme

Let’s see in detail how the ring-LWE encryption scheme works and how the homomorphic operations are defined.

Setup parameters

First of all we need to define certain general parameters to be used in the key generation algorithm:

  • Set a degree parameter $n \in N$.

  • Let $q$ be a prime number, defining the ring $R_{q}=R/qR=F_{q}[x]/f_{n}(x)$. This ring is the ciphertext space.

  • Take $t$ as an arbitrary integer with $t < q$, defining the ring $R_{t}=R/tR=F_{t}[x]/f_{n}(x)$. This ring is the plaintext space.

  • Set the standard deviation σ as the parameter for the discrete Gaussian distribution $\chi=D_{Z^{n},\sigma}$.

Key generation

First we sample random elements as follows:

  • Sample $s$ from the Gaussian distribution $\chi$.

  • Take a random $p_{1} \in R_{q}$ and an error $e$ sampled from $\chi$.

Then the public key is defined as $pk=(p_{0},p_{1})$, where $p_{0}=-(p_{1}s+te)$, and the secret key is $sk=s$.

Encryption

After encoding the plaintext m as an element of $R_{t}$ and given the public key $pk=(p_{0},p_{1})$, we sample $u, f, g$ from the distribution $\chi$ and compute

Enc(m,pk)=(c_{0},c_{1})=(p_{0}u+tg+m,\; p_{1}u+tf)

Decryption

If $c=(c_{0},\ldots,c_{r})$ is a ciphertext and $sk=s$ the private key, then the decryption is simply

Dec(c,sk)=\left[\hat{m}\right]_{q} \bmod t \in R_{t}

where

\hat{m}=\sum_{i=0}^{r}c_{i}s^{i}\in R_{q}

If we write the secret key vector $S$ as $S=(1,s,s^{2},\ldots,s^{r})$, then

Dec(c,sk)=\left[\left\langle c,S\right\rangle\right]_{q} \bmod t

Homomorphic Operations

Now, if we have two elements in the encrypted space, $c=(c_{0},\ldots,c_{r})$ and $c'=(c'_{0},\ldots,c'_{t})$, the homomorphic operations are given by

c + c' = (c_{0}+ c'_{0},\ldots,c_{\max(r,t)}+c'_{\max(r,t)})
c \cdot c' = (\hat{c}_{0},\ldots,\hat{c}_{r+t})

where

\sum_{i=0}^{r+t}\hat{c}_{i}z^{i}=\left(\sum_{i=0}^{r}c_{i}z^{i}\right)\cdot\left(\sum_{i=0}^{t}c'_{i}z^{i}\right)
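
The following is a minimal, insecure toy sketch of such a ring-LWE scheme in Python. The parameters n, q, t mirror the setup above but are far too small to be secure, ternary noise stands in for the discrete Gaussian χ, and only the additive homomorphism is exercised; it illustrates the mechanics only and is not Humanode's implementation.

```python
import numpy as np

n, q, t = 16, 65537, 256                 # toy parameters, not secure
rng = np.random.default_rng(0)

def poly_mul(a, b):
    """Multiply polynomials modulo x^n + 1 (negacyclic convolution)."""
    full = np.convolve(a, b)
    res = np.zeros(n, dtype=np.int64)
    for i, coef in enumerate(full):
        if i < n:
            res[i] += coef
        else:
            res[i - n] -= coef           # reduce using x^n = -1
    return res

def small():
    """Small noise polynomial, a crude stand-in for the Gaussian distribution chi."""
    return rng.integers(-1, 2, n)

def keygen():
    s = small()
    p1 = rng.integers(0, q, n)
    p0 = (-(poly_mul(p1, s) + t * small())) % q      # p0 = -(p1*s + t*e)
    return s, (p0, p1)

def encrypt(m, pk):
    p0, p1 = pk
    u, f, g = small(), small(), small()
    return (poly_mul(p0, u) + t * g + m) % q, (poly_mul(p1, u) + t * f) % q

def decrypt(ct, s):
    c0, c1 = ct
    m_hat = (c0 + poly_mul(c1, s)) % q
    centered = np.where(m_hat > q // 2, m_hat - q, m_hat)   # lift to (-q/2, q/2]
    return centered % t

sk, pk = keygen()
m1, m2 = rng.integers(0, t, n), rng.integers(0, t, n)
ct1, ct2 = encrypt(m1, pk), encrypt(m2, pk)
ct_sum = ((ct1[0] + ct2[0]) % q, (ct1[1] + ct2[1]) % q)     # homomorphic addition
assert np.array_equal(decrypt(ct_sum, sk), (m1 + m2) % t)
```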

Extracting inner product from the encrypted value

The cosine similarity operation requires, as we saw, the calculation of the inner product in the encrypted space. If we define an appropriate transformation of the vectors, then, thanks to the homomorphic properties of the encryption scheme, we can extract the inner product from the encrypted result.

Thus, let F and G be transformations onto the ring $R_{q}$ such that

F(P)=\sum_{i=0}^{l-1}p_{i}2^{i}

and

G(Q)=\sum_{j=0}^{l-1}q_{j}2^{n-j}

If we multiply $F(P)\cdot G(Q)$, then

F(P)\cdot G(Q)=\sum_{i=0}^{l-1}p_{i}q_{i}2^{n}+\ldots= \left\langle P,Q \right\rangle 2^{n} +\ldots

Thus, if we encrypt $F(P)$ and $G(Q)$, then, thanks to the homomorphic properties of the encryption scheme, we can extract the inner product from a fixed term of the encrypted result:

Enc(F(P)\cdot G(Q))=\left\langle P,Q \right\rangle 2^{n} + E(\ldots)
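
As a sanity check on the packing idea, here is a small plaintext-only sketch written in the polynomial-coefficient form of the same trick (an assumption on my part: coefficients indexed by powers of an indeterminate x rather than powers of 2). The coefficient at position n in the product of the two packed polynomials equals the inner product; no encryption is involved here.

```python
import numpy as np

n = 8                                   # hypothetical vector length
rng = np.random.default_rng(1)
P = rng.integers(0, 10, n)
Q = rng.integers(0, 10, n)

FP = np.zeros(n + 1, dtype=np.int64)
GQ = np.zeros(n + 1, dtype=np.int64)
FP[:n] = P                              # F(P): coefficient p_i at position i
GQ[n - np.arange(n)] = Q                # G(Q): coefficient q_j at position n - j

product = np.convolve(FP, GQ)           # plain polynomial multiplication
assert product[n] == np.dot(P, Q)       # the coefficient at position n is <P, Q>
```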

ZKP for Verifiable computation

In our setup, a node does not trust any other node in the system. A node may be expected to follow the protocol, but it cannot be trusted with the computation of the feature extraction and liveness detection processes.

During the registration process, a node will extract a feature vector from the face image and then send it to a peer node. The problem is how does the peer node trust the feature vector? A node may or may not have followed the feature extraction process as required. In this situation, zero-knowledge-based verifiable computation comes to the rescue.

Verifiable computation is a technique to prove that a computation was carried out correctly by an untrusted party. Let $y = f(x)$ be the result of the computation on input x. The prover generates a proof of computation $\pi$ along with the result and sends $x, y, \pi$ to the verifier. Using $x$, $y$, and the verification keys, the verifier checks the correctness of the proof $\pi$.

Related Work:

  1. SafetyNet: Specialized interactive proof protocol for verifiable execution of a class of deep neural networks. It supports only quadratic activation functions but in our NN model ReLU is necessary to achieve higher accuracy.

  2. zkDT: Verifiable inference and accuracy schemes on decision trees. Decision trees are simple and quite different from neural network architecture.

  3. vCNN: verifiable inference scheme for neural networks with zero-knowledge. It optimizes only convolution. vCNN uses mixing of QAP (Quadratic arithmetic program), QPP (quadratic polynomial program) and CP-SNARK for making a connection between QAP and QPP. QAP works at the arithmetic circuit level and is costly in terms of computation.

  4. ZEN: R1CS friendly optimized zero-knowledge neural network inference scheme. Proposes R1CS friendly quantization technique. Uses arithmetic level circuit and Groth zero-knowledge proof.

  5. zkCNN: Interactive zero-knowledge proof scheme for convolutional neural networks. Proposes a new sum-check protocol and uses the GKR protocol.

vCNN, ZEN, and zkCNN are the most closely related to our scenario, but all of them reduce the computation program to the arithmetic circuit level and then use the Groth ZKP protocol for verification.

Any verifiable computation scheme utilizes the homomorphic property of the underlying primitive for verification. Therefore, it can support computation that involves addition and multiplication. Since neural network computations are often complex and non-linear, researchers use the idea of converting the program to the arithmetic circuit level, which involves only addition and multiplication at the bit level, and then apply a zkSNARK-type proof. This is a generalized technique that works for any circuit. However, if the circuit involves only addition and multiplication at the integer level, then there is no need to convert it to the arithmetic circuit level.

Our idea is to break down the neural network model of feature extraction into different layers and then prove the computation of individual layers separately. There are four main layers: convolution layer, Batch-normalization layer, ReLU layer, and average pooling layer. Out of these, only the ReLU layer is not in the form of addition and multiplication.

ReLU(x)=\max(x,0)

So, to make it compatible with our idea, we replaced the ReLU function with the bit decomposition of ReLU, which involves bit-level addition and multiplication. After this, we used the idea of Verifiable Private Polynomial Evaluation (PIPE), where an untrusted cloud server proves that a polynomial computation, $y = f(x)$, is correct without revealing the coefficients of the polynomial f. We are aware of other similar schemes like Pinocchio, PolyCommit by Kate et al., and other garbled-circuit-based schemes, but PIPE is best suited for our decentralized, untrusted P2P network scenario.

Our scenario is similar but slightly different. We assume that the neural network parameters are available to each node; that is, the coefficients of the kernel in the convolution layer are known to every node. For input $(x_{1},x_{2},\ldots,x_{n})$ and kernel $(a_{1},a_{2},\ldots,a_{m})$, the output of the convolution can be represented as:

y_{j}=\sum_{i}a_{i}x_{j+i}
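
For intuition, a tiny numeric example of this formula with hypothetical input and kernel values:

```python
import numpy as np

# Each output y_j is a fixed linear combination of inputs: y_j = sum_i a_i * x_{j+i}.
x = np.array([1, 2, 3, 4, 5])        # input
a = np.array([2, 0, -1])             # kernel
y = np.array([np.dot(a, x[j:j + len(a)]) for j in range(len(x) - len(a) + 1)])
print(y)                             # [2*1+0*2-1*3, 2*2+0*3-1*4, 2*3+0*4-1*5] = [-1, 0, 1]
```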

In the PIPE scheme, $a_{i}$ is kept secret from the verifier, whereas in our scenario $x_{i}$ (which represents the input image) is kept secret from the verifier. Moreover, in the PIPE scheme, the input and output are available to the verifier in plain form. However, we cannot reveal the inputs and outputs of the neural network, or of the intermediate layers, due to privacy concerns. That means we had to modify the PIPE scheme in such a way that the verifier can still verify the correctness of the computation using encrypted inputs and outputs.

Finally, here is what we have in a ZKP system for the feature vector extraction process.

Figure 12. ZKP for the feature vector

Humanode approach to ZKP

Generalized problem:

Input: $(x_{1},x_{2},\ldots,x_{n})$

Computation: $y=\sum_{i}a_{i}x_{i}$

The prover picks an input and performs the computation. Since the verifier does not trust the prover, the prover needs to prove that the output y is computed correctly.

Requirement: The coefficients of the computation, $a_{i}$, are public and known to the verifiers. The prover can't disclose $x_{i}$ and y to the verifier due to privacy concerns.

We combined Feldman's Verifiable Secret Sharing, the ElGamal cryptosystem, and a non-interactive zero-knowledge proof.

  • Feldman’s Verifiable Secret Sharing:

It is a secret sharing scheme in which each share is a point (x, y) on a secret polynomial f. In Feldman's VSS, given a share (a, b), anybody can verify the validity of the share using public values corresponding to the secret polynomial f. This means anyone can check whether b = f(a) without knowing the coefficients of the polynomial f.

Let $f(x)=\sum_{i=0}^{k}a_{i}x^{i}$ be a degree-k polynomial with $a_{i}\in Z_{p}^{*}$.

Let G be a multiplicative group of prime order p and g a generator of G. For each $a_{i}$, set $h_{i}=g^{a_{i}}$.

Now make $g$ and the $h_{i}$ public. Given a share $(a,b)$, one can check the validity of the share by verifying the following equation:

g^{b}=\prod_{i=0}^{k}h_{i}^{\,a^{i}}

Note: There are two concerns here. First, the share (a, b) is in plain form, and hence, if we use it as is in our scenario, we have to reveal the input and output to the verifier. The second concern is that $h_{i}$ hides $a_{i}$ under the assumption that it is difficult to recover $a_{i}$ from $h_{i}$ (the Discrete Logarithm assumption). However, if $a_{i}$ is a small value, then it is easy to find $a_{i}$ from $h_{i}$ by exhaustive search. In neural network computation, the values (inputs and weight parameters) are always small, so they cannot be hidden properly this way.
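
Below is a minimal sketch of this share-verification idea. The prime, base, and polynomial are toy choices of my own, and for simplicity the sketch works in the full multiplicative group modulo a large prime (with exponents reduced modulo p-1) rather than a proper prime-order subgroup.

```python
import random

# Feldman-style share verification: given only g and the commitments h_i = g^{a_i},
# anyone can check a share without learning the secret coefficients a_i.
p = 2**127 - 1                      # a large prime (Mersenne prime M127), toy choice
g = 3                               # group element used as the base
k = 3                               # degree of the secret polynomial

coeffs = [random.randrange(1, p - 1) for _ in range(k + 1)]   # secret a_0..a_k
commitments = [pow(g, a_i, p) for a_i in coeffs]              # public h_i = g^{a_i}

def share(x):
    """Dealer side: evaluate the secret polynomial (exponents taken mod p-1)."""
    y = sum(a_i * pow(x, i, p - 1) for i, a_i in enumerate(coeffs)) % (p - 1)
    return x, y

def verify(x, y):
    """Verifier side: check g^y == prod_i h_i^{x^i} using only public values."""
    lhs = pow(g, y, p)
    rhs = 1
    for i, h_i in enumerate(commitments):
        rhs = rhs * pow(h_i, pow(x, i, p - 1), p) % p
    return lhs == rhs

x, y = share(42)
assert verify(x, y)                 # a correct share passes
assert not verify(x, y + 1)         # a tampered share is rejected
```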

  • Feldman’s VSS with encrypted input and output:

Input: $(x_{1},x_{2},\ldots,x_{n})$

Computation: $y=\sum_{i}a_{i}x_{i}$

To hide the input and output, we need to encrypt both in such a way that we can still perform some operations over the encrypted values. That means we have to use a homomorphic encryption scheme.

We use ElGamal encryption mainly because it is homomorphic with respect to plaintext multiplication and scalar multiplication as well, which suits our system perfectly.

ElGamal key pair: $(sk,pk) = (\alpha,\ h=g^{\alpha})$

Encrypt input: $Enc(g^{x_{i}}) = (c_{i},d_{i})=(g^{r_{i}}, h^{r_{i}}g^{x_{i}})$

Encrypt output: $Enc(g^{y}) = (g^{r}, h^{r}g^{y})$

Compute

C = \prod_{i}c_{i}^{a_{i}}=\prod_{i}g^{r_{i}a_{i}}=g^{\sum_{i}a_{i}r_{i}}=g^{r'}

where $r'= \sum_{i}a_{i}r_{i}$, and

D=\prod_{i}d_{i}^{a_{i}}= \prod_{i}(h^{r_{i}}g^{x_{i}})^{a_{i}} = \left(\prod_{i}h^{r_{i}a_{i}}\right)\left(\prod_{i}g^{x_{i}a_{i}}\right)=h^{r'}g^{y}

Finally, we have $(C,D)=(g^{r'},h^{r'}g^{y})$, which is an ElGamal encryption of $g^{y}$. So now, the prover needs to convince the verifier that $(C,D)$, computed from the encrypted inputs, and the ciphertext $(g^{r},h^{r}g^{y})$ are encryptions of the same value $g^{y}$. Here, we use a NIZKP of the equality $\log_{g}(C/g^{r}) = \log_{h}(D/(h^{r}g^{y}))$.
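
A small sketch of this combination step follows. The group parameters, inputs, and coefficients are toy values chosen only for illustration; the point is that, given ElGamal encryptions of $g^{x_i}$ and the public coefficients $a_i$, anyone can compute a ciphertext $(C, D)$ that decrypts to $g^{y}$ without seeing the $x_i$.

```python
import secrets

# Toy subgroup: safe prime P = 2q + 1, with g of prime order q.
P, q, g = 1019, 509, 4
alpha = secrets.randbelow(q - 1) + 1          # sk
h = pow(g, alpha, P)                          # pk

def enc(m_exp):
    """Encrypt g^{m_exp}: returns (c, d) = (g^r, h^r * g^{m_exp})."""
    r = secrets.randbelow(q - 1) + 1
    return pow(g, r, P), (pow(h, r, P) * pow(g, m_exp, P)) % P

x = [2, 5, 7]                                 # private inputs (exponents)
a = [3, 1, 4]                                 # public coefficients
cts = [enc(xi) for xi in x]

# Homomorphic combination: C = prod c_i^{a_i}, D = prod d_i^{a_i}
C = D = 1
for (c, d), ai in zip(cts, a):
    C = C * pow(c, ai, P) % P
    D = D * pow(d, ai, P) % P

y = sum(ai * xi for ai, xi in zip(a, x)) % q
# (C, D) is a valid ElGamal ciphertext of g^y: decrypting with alpha confirms it.
assert (D * pow(C, q - alpha, P)) % P == pow(g, y, P)
```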

  • Non-Interactive Zero-Knowledge Proof:

If we generalize the above equation, we need to prove that $\log_{g_{1}}h_{1} =\log_{g_{2}}h_{2}$ for some $g_{1},h_{1},g_{2},h_{2}\in G$. In 1993, David Chaum and T. P. Pedersen proposed a NIZKP to prove exactly this.

NIZKP LogEq: Let G be a multiplicative group of prime order p and $H$ a hash function. Let the language $L$ be the set of all $(g_{1},h_{1},g_{2},h_{2})\in G^{4}$ where $\log_{g_{1}}h_{1} =\log_{g_{2}}h_{2}$. The NIZKP LogEq = (Prove, Verify) is as follows:

Prove$((g_{1},h_{1},g_{2},h_{2}),w)$: Using the witness $w=\log_{g_{1}}h_{1}=\log_{g_{2}}h_{2}$, it picks a random $r$ from $Z_{p}^{*}$ and computes $A = g_{1}^{r}$, $B = g_{2}^{r}$, $z = H(A,B)$, and $t=r+w\cdot z$. It outputs the proof $\pi = (A,B,t)$.

Verify$((g_{1},h_{1},g_{2},h_{2}),\pi)$: Using $\pi = (A,B,t)$, it computes $z = H(A,B)$. If

g_{1}^{t} = A\cdot h_{1}^{z}

and

g_{2}^{t} = B\cdot h_{2}^{z}

then it outputs 1, else it outputs 0. We obtain the ZKP system for an individual layer of our NN model by properly combining Feldman's VSS, the ElGamal cryptosystem, and NIZKP LogEq. Our ZKP system is unconditionally ZK-secure and UNF-secure under the Random Oracle Model. It is also privacy-preserving under the DDH assumption in the Random Oracle Model.
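
A compact sketch of this LogEq proof, using a toy subgroup and a SHA-256 based challenge of my own choosing (a real deployment uses a large prime-order group):

```python
import hashlib
import secrets

# Toy parameters: safe prime P = 2q + 1; g1, g2 are squares mod P, hence of order q.
P, q = 1019, 509
g1, g2 = 4, 9

def H(*elements):
    """Fiat-Shamir style challenge derived from the commitments, reduced mod q."""
    data = b"|".join(str(e).encode() for e in elements)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(w):
    """Prover knows w with h1 = g1^w and h2 = g2^w; outputs (h1, h2, proof)."""
    h1, h2 = pow(g1, w, P), pow(g2, w, P)
    r = secrets.randbelow(q)
    A, B = pow(g1, r, P), pow(g2, r, P)
    z = H(A, B)
    t = (r + w * z) % q
    return (h1, h2), (A, B, t)

def verify(h1, h2, proof):
    """Verifier checks g1^t == A*h1^z and g2^t == B*h2^z without learning w."""
    A, B, t = proof
    z = H(A, B)
    return (pow(g1, t, P) == A * pow(h1, z, P) % P and
            pow(g2, t, P) == B * pow(h2, z, P) % P)

(h1, h2), proof = prove(w=123)
assert verify(h1, h2, proof)        # equal discrete logs are accepted
```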

ElGamal Cryptosystem

We generalize the input image as a higher-dimensional vector $(x_{1},x_{2},x_{3},\ldots,x_{n})$. Similarly, we assume the output of each layer is again a higher-dimensional vector $(y_{1},y_{2},\ldots,y_{m})$. For each layer, we encrypt its input and output using ElGamal encryption.

The ElGamal Public Key Encryption scheme is defined as follows:

  • Gen$(\lambda)$: It returns $pk=(G,p,g,h)$ and $sk=\alpha$, where $G$ is a multiplicative group of prime order p, $g\in G$, and $h=g^{\alpha}$.

  • $Enc_{pk}(m)$: It returns $(c,d)=(g^{r},h^{r}\cdot m)$, where r is a randomly chosen integer between 1 and p-1.

  • $Dec_{sk}((c,d))$: It returns $m =\frac{d}{c^{sk}}$.

In our scheme, we use a 1024-bit prime p to achieve the recommended level of security. Note that ElGamal encryption is randomized rather than deterministic: if the same message is encrypted twice, the two ciphertexts will be different. Thus each transaction is indistinguishable, preserving the privacy of the user. Moreover, ElGamal encryption is homomorphic with respect to plaintext multiplication and scalar multiplication.

Enc(m_{1})\cdot Enc(m_{2})=Enc(m_{1}m_{2})
Enc(m)^{a}=Enc(m^{a})
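
For concreteness, a toy sketch of ElGamal and the two homomorphic properties above. The small parameters are purely illustrative; as noted, a 1024-bit prime is used in practice.

```python
import secrets

# Toy subgroup: safe prime P = 2q + 1, with g of prime order q.
P, q = 1019, 509
g = 4

def keygen():
    sk = secrets.randbelow(q - 1) + 1
    return sk, pow(g, sk, P)                          # (sk, pk = g^sk)

def encrypt(pk, m):
    r = secrets.randbelow(q - 1) + 1
    return pow(g, r, P), (pow(pk, r, P) * m) % P      # (c, d) = (g^r, pk^r * m)

def decrypt(sk, ct):
    c, d = ct
    return (d * pow(c, q - sk, P)) % P                # d / c^sk, using c^q = 1

sk, pk = keygen()
m1, m2 = pow(g, 5, P), pow(g, 7, P)                   # messages encoded in the subgroup
ct1, ct2 = encrypt(pk, m1), encrypt(pk, m2)

ct_prod = (ct1[0] * ct2[0] % P, ct1[1] * ct2[1] % P)  # component-wise multiplication
assert decrypt(sk, ct_prod) == (m1 * m2) % P          # Enc(m1)*Enc(m2) decrypts to m1*m2

a = 3                                                 # scalar homomorphism: Enc(m)^a = Enc(m^a)
ct_pow = (pow(ct1[0], a, P), pow(ct1[1], a, P))
assert decrypt(sk, ct_pow) == pow(m1, a, P)
```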

Zero Knowledge Proof System for Liveness Detection

The result of liveness detection is proved by sending the output of the detection algorithm. This output comes in the form of a yes or no, that is, a Boolean result.

In a centralized system the algorithm runs in a controlled environment where the central authority manages the input and output.

When the user is given the ability to run the liveness detection algorithm on their own, there is the risk of a malicious user tampering with the result of the algorithm. Errors can also occur in the transmission of data, as can local failures in executing the algorithm and obtaining the results.

The system's decentralization implies the need to prove that the result was obtained through a correct execution of the algorithm. That is why, in Humanode, we have an algorithm to generate proof of the correctness of each function of the liveness detection process. In addition, there is a verification algorithm for said proof, giving us a Zero-Knowledge Proof system suitable for decentralized verification of the correct execution of liveness detection.

Collective Authority

One of the most critical problems to solve when defining encryption schemes in decentralized environments is the handling of cryptographic keys, especially when the calculations are performed and verified by peers through multi-party computation.

In this sense, we will consider a subgroup of the Humanode network, which we will call the Collective Authority, whose objective is to generate the collective keys for homomorphic encryption and to verify the calculations performed by each peer.

In simple terms, the collective authority works as a trusted third party for key generation and verification but is also composed of several peers within the network.

During the Setup process, the collective authority is the one who defines the generic parameters for the establishment of the cryptographic protocols. The security that this collective authority provides us is that each peer takes these generic parameters and locally generates its public and private keys, as we saw in section 2.2.2.

Each user keeps his private key secured locally but sends the public key to the collective authority. After collecting the public keys from each user, the collective authority constructs a collective public key and distributes it back to all users. This collective public key is the one used to encrypt the feature vectors.

In a traditional cryptosystem, if a malicious user intercepts the public key, obtaining the private key is computationally challenging. In our case, if the collective public key is intercepted, the perpetrator cannot recover the private keys, as they would also need to know which partial element belongs to which peer. Thus we gain an additional layer of security on top of the public-key cryptosystem, in what we can call a lattice-based decentralized public-key cryptosystem.

Humanode’s multimodal biometric approach

The Biometric Identification Matrix was created by the Humanode core to understand which of the existing biometric modalities are the most suitable and superior and, therefore, to choose the proper ones for Humanode biometric processing methods.

According to recent studies, there are three types of biometric measurements (G. Kaur et al., 2014):

  • Physiological measurement includes face recognition, finger or palm prints, hand geometry, vein pattern, eye (iris and retina), ear shape, DNA, etc.

  • Behavioral measurement relating to human behavior that can vary over time and includes keystroke pattern, signature, and gait (S. Jaiswal et al., 2011).

  • There are also some biometric traits that act as both physiological and behavioral characteristics (e.g., brain waves or electroencephalography (EEG)). EEG depends on the head or skull shape and size, but it changes from time to time depending on circumstances and varies according to age.

In light of the latest developments, we propose a fourth measurement—neurological—as a part of both physiological (internal) and behavioral measurements. We believe that neurosignature, the technology of reading a human's state of mind, i.e., signals that trigger a unique and distinct pattern of nerve cell firing and chemical release that can be activated by appropriate stimuli, should be developed and implemented in the Humanode as the most reliable and secure way of biometric processing.

Until then, Humanode implements a multimodal biometric system of several biometric modalities. Each biometric modality has its own merits and demerits. It is laborious to make a direct comparison. Since the end of the 1990s, when A. K. Jain, R. M. Bolle, and S. Pankanti conducted their comprehensive research on all existing biometrics (Jain et al., 1999), seven significant factors were identified to study and compare the biometric types: acceptability, universality, uniqueness (distinctiveness), permanence, collectability, performance, and resistance to circumvention—which are also known as ‘the seven pillars of biometrics’ (K. A. Jain, A. Ross & S. Prabhakar 2004).

Based on Jain et al.’s classification and recent all-encompassing surveys on various biometric systems (A. C. Weaver 2006; T. Sabhanayagam, V. Prasanna Venkatesan & K. Senthamaraikannan, 2018), cancelable systems (B. Choudhury, P. Then, B. Issac & V. Raman, 2018), and unimodal, multimodal biometrics and fusion techniques (A.S. Raju & V. Udayashankara, 2018), we provide a comparison study of different biometric modalities, and propose a ‘Biometric Identification Matrix’, by studying and combining characteristics revealed in the aforementioned works and by adding factors we found necessary to examine. Thus, we divided the ‘Performance’ category proposed by Jain et al., which relates to the accuracy, speed, and robustness of technology used, into two sub-categories (‘Accuracy’ and ‘Processing Speed’) to study the space in more detail. To grasp how easy it is to collect biometric data on a person, we decided to add the ‘Security’ category which refers to vulnerability to attack vectors, as paths or means by which attackers can gain access to biometric data to deliver malicious actions. The category ‘Hardware’ which relates to the type of hardware, its prevalence, and cost, was added to understand which devices are required to be used nowadays and which are best to use in the network.

  • Acceptability

‘Acceptability’ relates to the relevant population’s willingness to use a certain modality of biometrics, their acceptance of the technology, and their readiness to have their biometrics trait captured and assessed.

Complex and intrusive technologies have low levels of public acceptance. Retina recognition is not socially acceptable, as it is not a very user-friendly method because of the highly intrusive authentication process using retina scanning (J. Mazumdar, 2018). Electrophysiological methods (EEG, ECG) and neurosignatures are not highly accepted nowadays, as they are intricate and not yet well-known or fully developed.

An active liveness detection technology may be uncomfortable for the average user if the trait acquisition method tends to be demanding or time-consuming. Even in the absence of physical contact with sensors, many users still develop a natural apathy toward the entire liveness detection process, describing it as overly intrusive (K. Okereafor & Clement E. Onime, 2016).

  • Collectability

‘Collectability’ refers to the ease of data capturing, measuring, and processing, reflecting how easy this biometric modality is for both the user and the personnel involved.

Fingerprint and hand geometry recognition techniques are very easy to use. Their template sizes are small and so matching is fast (S. Jaiswal et al., 2011). Similarly, the advantage of face biometrics is that it is contactless and the acquisition process is simple. The advantage of all behavioral recognition methods is the ease of acquisition as well.

  • Permanence

‘Permanence’ relates to long-term stability—how a modality varies over time. More specifically, a modality with 'high' permanence will be invariant over time with respect to the specific matching algorithm.

Physiological measurements tend to be permanent, while behavioral measurements are usually not long-term stable. Such modalities have a low or medium level of permanence.

The same person can sign in different ways, as a signature is affected by physical condition and feelings. Voice is not constant either, as it may change based on an individual's emotion, sickness, or age (L. Rabiner & B.-H. Juang, 1993).

Facial traits are persistent but may change and vary over time, although the heat generated by facial tissues has a measurable, repeatable pattern that can be more stable than the facial structure itself (Hanmandlu et al. 2012). Finger and palm prints and vein patterns tend to remain constant. Hand geometry is more likely to be affected by disease, weight loss or gain, and injury; however, the results of hand geometry recognition are not much affected by skin moisture or by texture changes that come with age. Ear size changes over time (S. Jaiswal et al., 2011; Abaza et al. 2013). DNA is highly permanent. The iris remains the same throughout life (G. Kaur et al., 2014; Bowyer et al. 2008), although diabetes and some other serious diseases cause alterations in it. Likewise, the otherwise stable retina pattern changes during medical conditions like pregnancy, high blood pressure, and other ailments (G. Kaur et al., 2014).

  • Universality

‘Universality’ means that every person using a system may have the modality.

Different biometric systems have their own limitations, and so do the modalities. For example, some people have damaged or missing fingerprints, hand geometry is efficient only for adults, etc. The biological/chemical, electrophysiological, and neurological (in theory) measurement categories should have the highest level of universality.

  • Uniqueness

‘Uniqueness’ relates to characteristics that should be sufficiently different for individuals such that they can be distinguished from one another.

Every person has a unique walking style as well as a unique writing style, and hence each person has their own gait and signature. Voice recognition technology identifies the distinct vocal characteristics of the individual. Even so, human behavior is not as unique as physiological patterns.

Finger and palm prints are extremely distinctive. The blood vessels underneath the skin are also unique from person to person. The iris is highly unique and rich in texture. Moreover, the texture of both eyes are different from each other. Each person has a unique body odor and such chemical agents of human body odor can be extracted from the pores to recognize a person (M. Shu et al. 2014). People display a distinct ‘brain signature’ when they are processing information, similar to fingerprints. At one time, neuroscientists thought brain activity was pretty much the same from one person to another (E. Finn et al., 2015, 2019; A. Demertzi et al., 2019).

Nevertheless, even physical modalities have limitations. Faces seem to be unique; however, in the case of twins, distinctiveness is not guaranteed. DNA is unique for each individual except identical twins, and therefore achieves high accuracy. Retina recognition, in contrast, is highly reliable, since no two people have the same retinal pattern, and even identical twins have distinct patterns. We assume that neurosignature will be one of the premier biometric technologies on the grounds of the unique nature of human thoughts, memories, and other mental states.

  • Accuracy

‘Accuracy’ is a part of the ‘Performance’ category. It describes how well a biometric modality can tell individuals apart. This is partially determined by the amount of information gathered as well as the quality of the neural network, resulting in higher or lower false acceptance and false rejection rates.

2D facial recognition may give inaccurate results, as facial features tend to change over time due to expression and other external factors; it is also highly dependent on lighting for correct input. Thermograms, which are easy to obtain and process, are invariant to illumination and work accurately even in dim light, making them far better in this respect.

3D face recognition has the potential to achieve greater accuracy than its 2D counterpart by measuring the geometry of facial features. It avoids such pitfalls of 2D face recognition as lighting, makeup, etc. It is worth noting that 3D face recognition with liveness detection is considered the most accurate.

Palm prints show a higher level of accuracy than fingerprints. Considering the number of minutiae points of all five fingers, the palm print has more minutiae points to help make comparisons during the matching process compared to fingerprints alone (A. Kong et al. 2009).

The iris provides a high degree of accuracy (iris patterns match for 1 in 10 billion people; J. Daugman, 2004), but still can be affected by wearing glasses or contact lenses. Similarly, retina recognition is a highly accurate technology, however, diseases such as cataracts, glaucoma, diabetes, etc. may affect the results.

  • Security

‘Security’ refers to vulnerability to attack vectors, as paths or means by which attackers can gain access to users’ biometric data to deliver malicious actions.

Vascular biometrics ranks first as the safest because of the many benefits it inherently offers: it is simple and contact-free as well as resilient to presentation attacks. This applies to both hand and eye vein recognition. The vein pattern is not visible and cannot be easily collected like facial features, fingerprints, voice, or DNA, which stay exposed and can be collected without a person's consent.

However, face recognition offers appropriate security if the biometric system employs anti-spoofing and liveness detection so that an imposter may not gain access with presentation attacks. 3D templates and the requirement of blinking eyes or smiling for a successful face scan are some of the techniques that improve the security of face recognition.

  • Processing Speed

‘Processing Speed’ is a part of the ‘Performance’ category. It is related to the time it takes a biometric technology to identify an individual.

As different modalities have different computation requirements, the processing power of the systems used varies. Fingerprint and face recognition are still the fastest in the identification process. Vein recognition systems are also impressively fast and reliable when comparing the recorded database against current data; the time taken to verify each individual is currently shorter than for other methods (on average half a second; P. O'Neill, 2011). Iris and retina recognition have small template sizes and hence promising processing speeds (2 to 5 seconds). Ear shape recognition techniques demonstrate faster identification results thanks to reduced processing time. The more complicated the procedure, the longer it takes. Behavioral modality identification is fast in processing: signature, voice, and lip motion recognition take a few seconds. The EEG and ECG processes differ. Acquisition of a DNA sample requires a long procedure to return results (S. Bhable et al., 2015).

  • Circumvention

‘Circumvention’ relates to an act of cheating; thus, the identifying characteristic used must be hard to deceive and imitate using an artifact or a substitute.

Nearly every modality may become an easy subject for forgers. Signatures can be effortlessly mimicked by professional attackers; voices can be simply spoofed. Fingerprints are easily deceived through artificial fingers made of wax, gelatin, or clay. Iris-based systems can be attacked with fake irises printed on paper or wearable plastic lenses, while face-based systems without 5 levels of liveness detection can be fooled with sophisticated 3D masks (A. Babu & V. Paul, 2016). Even vein patterns can be imitated by developing a hand substitute.

That said, although our DNA is left everywhere and has no inherent liveness, it is believed to be the most difficult characteristic to dupe, as the DNA of each person is unique (Maestre, 2009). Brain activity and heartbeat patterns are also hard to emulate.

  • Hardware

The ‘Hardware’ category refers to the type and cost of the hardware required to use each type of biometric.

Nowadays, there is no need for extra devices for biometric recognition if you have a smartphone. Facial recognition and fingerprint scanning are common features of smartphones. For lip motion recognition, existing image capturing devices, i.e., cameras, can be used. Thermograms need specialized sensor cameras. Voice recognition is also easy to implement on smartphones or any audio device. Hand vein recognition has a low cost in terms of installation and equipment, and mobile apps for vascular biometric recognition are already integrated using the palm vein modality (R. Garcia-Martin & R. Sanchez-Reillo, 2020). For eye vein identification, smartphone solutions are currently in development, while retina recognition is still an expensive technology with a high equipment cost. Keystroke recognition needs no special hardware or new sensors, and this low-cost identification is fast and secure. Image-based smartphone application prototypes for ear biometrics are in development (S. Bargal & A. Welles, 2015; A. F. Abate, M. Nappi & S. Ricciardi, 2016), as are mobile apps with digital signatures (E. Rahmawati, M. Listyasari, A. S. Aziz & S. Sukaridhoto, 2017).

In the meantime, electroencephalograms are needed for EEG, and electrocardiograms for ECG. Brain-computer interfaces (BCI) are needed for neurosignature. Special expensive equipment and hardware are needed for DNA matching procedures.

Neurosignature and other emerging biometric modalities

We assume that a combination of the aforementioned biometric methods (and even multimodal biometrics) is not one hundred percent safe or secure. In the future, we plan to expand the system with this multimodal scheme, making neurosignature one of the main methods of Humanode user identification/verification. Other emerging modalities to research and possibly implement in Humanode's verification system are as follows (Goudelis et al. 2009): smile recognition, thermal palm recognition, hand/finger knuckle, magnetic fingerprints/smart magnet, nail ID, eye movement, skin spectroscopy, body salinity, otoacoustic emission recognition (OAE), mouse dynamics, palate, dental biometrics, and cognitive biometrics.

Biometric Identification Matrix

Table 3. ‘Biometric Identification Matrix’: Biometrics Techniques Comparison

The different biometric techniques are discussed above; the advantages and disadvantages associated with each of them are listed in Table 4.

Table 4. ‘Biometric Identification Matrix’: Biometrics Techniques Pros and Cons

Humanode Biometric Modalities Score

We assigned each factor its own value point depending on its effectiveness for the enrollment of new human nodes to the network:

  • Acceptability (6)

  • Collectability (6)

  • Permanence (5)

  • Universality (5)

  • Uniqueness (10)

  • Accuracy (8)

  • Security (10)

  • Processing Speed (3)

  • Circumvention (10)

  • Hardware (8)

Thus, we assume that the most significant factors for the network are the ‘Uniqueness’ and ‘Security’ of the biometric modality, the ‘Accuracy’ of the biometric method, a low level of ‘Circumvention,’ and the ‘Hardware’ type used.

To evaluate every aforementioned biometrics modality technique, we proposed the ‘Humanode Biometric Modalities Score,’ based on the ‘Biometric Identification Matrix’ analyzed.

The study revealed that the 3D facial recognition technique has the highest score (198), with facial thermography recognition (192) and iris recognition (190) not far behind. Retina recognition (176) and eye vein recognition (178) also got quite high scores, as did neurosignature (173), which is not scored as highly because it is not yet fully developed and massively adopted.

Table 5. ‘Biometric Identification Matrix’: Modalities Scores

* When calculating, we inverted the levels (numbers) for the ‘Circumvention’ factor so that it could be correlated with the other factors, since a ‘High’ level of circumvention means it is easy to imitate the body part (the modality) using an artifact or substitute, while a ‘Low’ level means this is practically impossible to do. In our model, ‘Low’ gets 3 while ‘High’ gets 1.

Diagram 1. ‘Biometric Identification Matrix’: Modalities Scores

To create a human node, only those modalities will be used that have score points above the median value (>147), i.e., 2D facial recognition, 3D facial recognition, facial thermography recognition, iris, retina, finger/hand vein recognition, eye vein recognition, ECG, DNA matching, and neurosignature (in future).

Due to the possible development of cheap methods of attack on the current biometric security set-up, the Humanode network will in the future require human nodes to provide additional biometric data during network upgrades. For instance, once iris verification is proven to be secure on smartphone devices, it will be added as an additional minimum requirement to deploy a node. While Samsung has already made attempts to deploy consumer-scale iris recognition in its smartphones, its quality and security levels are quite low compared to specialized hardware.

On top of this, in order to increase the cost of possible attacks on biometrics, the Humanode network sets high standards for the multimodal biometric system used to grant permission to launch a human node. Starting with only 3D facial recognition and liveness detection, users will later have to go through multimodal biometric processing.

Also, the ability to create several wallets and to choose their types in the system will be correlated with the biometric modalities selected. For example, to create a high-value wallet, a more secure and complex verification technique should be chosen, and vice versa.

Types of attacks on biometric systems and their solutions

Currently, there are eight possible attacks against biometric systems.

Figure 13. Possible attacks on biometric verification systems

Attack on the sensor

Attackers can present fake biometrics to the sensors (Jain et al. 2008). For example, someone can make a fake hand with fake vein patterns or a finger with a fake wax fingerprint, wear specially made lenses to bypass an iris scanner, or create images of a legitimate user to bypass a face recognition system. The possible solutions for this type of attack are multimodal biometrics, liveness detection, and soft biometrics (Kamaldeep, 2011).

Multimodal biometrics is the main way to prevent attacks and make the biometric system more secure. Multimodal biometrics refers to methods in which several biometric features are considered for enrollment and authentication. When multiple biometric characteristics are used, it becomes difficult for an attacker to gain access to all of them.

Humanode utilizes multimodal biometrics. The network has three tiers with combined biometric modalities that are required to set up a human node (read more in the ‘Humanode Biometric Modalities Score’ section).

Liveness detection uses different physiological properties to differentiate between a real person and a fake representation. It is an AI computer system’s ability to determine that it is interfacing with a physically present human being and not an inanimate spoof artifact.

A non-living object that exhibits human traits is called an ‘artifact.’ The goal of an artifact is to fool biometric sensors into believing that they are interacting with a real human being instead of an artificial copycat. When an artifact tries to fool a biometric sensor, it is called a ‘spoof.’ Artifacts include photos, videos, masks, deepfakes, and many other sophisticated means of fooling the AI. Another method of circumventing the sensors is to insert already captured data into the system directly, without camera interaction; this is referred to as a ‘bypass.’

In the biometric authentication process, liveness data should be valid only for a set period of time (up to several minutes) and then be deleted. As this data is not stored, it cannot be used to create corresponding artifacts to spoof liveness detection and bypass the system.
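As an illustration of this time-bound, single-use handling of liveness evidence, the sketch below keeps a liveness result only for a short validity window and deletes it when consumed; the TTL value and function names are assumptions, not Humanode’s actual implementation.

```python
import time

LIVENESS_TTL_SECONDS = 120  # assumed validity window; the real period is policy-defined

_sessions: dict[str, tuple[bool, float]] = {}  # session_id -> (liveness_passed, captured_at)

def record_liveness(session_id: str, passed: bool) -> None:
    _sessions[session_id] = (passed, time.time())

def consume_liveness(session_id: str) -> bool:
    """Liveness evidence is single-use and expires after the TTL, so it cannot be reused."""
    passed, captured_at = _sessions.pop(session_id, (False, 0.0))
    return passed and (time.time() - captured_at) <= LIVENESS_TTL_SECONDS
```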

The security of liveness detection depends heavily on the amount of data the sensor is able to capture, which is why low-resolution cameras might never be totally secure. For example, if a 4K monitor is placed in front of a low-resolution camera, weak liveness detection methods such as turning your head, blinking, smiling, or speaking random words can be easily emulated to fool the system.

In 2017, the International Organization for Standardization (ISO) published the ISO/IEC 30107-3:2017 standard for presentation attacks, which covered ways to stop artifacts such as high-resolution photos, commercially available lifelike dolls, and silicone 3D masks from spoofing fake identities. Since then, sanctioned PAD (Presentation Attack Detection) tests for biometric authentication solutions have been created so that any new solution meets the specified requirements before hitting the market. The most famous of them is the iBeta PAD Test, a strict and thorough evaluation of biometric processing solutions that determines whether they can withstand the most intense presentation attacks. Four years have passed since then; many specialists in the field now consider the standard outdated, and iBeta PAD tests have gradually become easy to pass with modern sophisticated spoofing methods.

FaceTec, one of the leading companies in liveness detection, divides attacks into five categories that go far beyond those stated in the 30107-3:2017 standard and represent real-world threats much more precisely.

Depending on the artifact type, there are three levels of PAD attacks:

  • Level 1: Hi-Res digital photos, HD videos, and paper masks.

  • Level 2: Commercially available lifelike dolls, latex & silicone 3D masks.

  • Level 3: Ultra-realistic artifacts such as 3D masks and wax heads.

Furthermore, depending on the bypass type, FaceTec researchers identify Levels 4 and 5: biometric template tampering and virtual-camera & video injection attacks:

  • Level 4: Decrypt and edit the contents of a 3D FaceMap™ so that it contains synthetic data not collected from the session, and have the server process it and respond with ‘Liveness Success.’

  • Level 5: Take over the camera feed & inject previously captured video frames or a deepfake puppet that results in the FaceTec AI responding with ‘Liveness Success’.

Figure 14: 5 levels of liveness:

Almost all liveness detection methods, including those described above in the Humanode approach to user identification, are software-based and available on any modern smartphone. In hardware-based methods, an additional device is installed on the sensor to detect the properties of a living person: fingerprint sweat, blood pressure, or specific reflection properties of the eye.

With liveness detection, the chances of successful spoofing become low enough that the cost of an attack is an order of magnitude higher than the potential transaction fees an artificially created human node could collect, minus the costs of running the node.
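As a back-of-the-envelope illustration of this economic argument (all figures are hypothetical):

```python
# Hypothetical figures for illustration only: an attack is irrational when its
# cost exceeds what a fake human node could ever earn.
spoof_attack_cost = 50_000   # cost of producing an artifact that reliably passes liveness
expected_fees     = 6_000    # transaction fees a node might collect over the period
node_running_cost = 1_500    # cost of running the node over the same period

attack_profit = expected_fees - node_running_cost - spoof_attack_cost
print("attack is profitable" if attack_profit > 0 else "attack is economically irrational")
```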

The Humanode network implements 3D facial liveness detection starting from the testnet.

Replay attack

A replay attack is an attack on the communication channel between the sensors and the feature extractor module. In this attack, an impostor can steal biometric data and later submit the old recorded data to bypass the feature extraction module (Jain et al. 2008).

Traditional solutions to prevent this kind of attack are as follows.

  • Steganography is a way to communicate biometric characteristics securely without giving any clue to intruders. It is mainly used for covert communication, so biometric data can be transmitted between the modules of the biometric system inside an unsuspicious host image.

  • Watermarking is a similar technique in which an identifying pattern is embedded in a signal to prevent forging. It is a way to combat replay attacks, but only if the data has been seen before or the watermark cannot be removed.

  • A challenge-response system, in which a task or a question is given to the person as a challenge and the person responds to the challenge voluntarily or involuntarily (Kamaldeep, 2011); a minimal sketch of this idea follows the list.
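A minimal sketch of the challenge-response idea is shown below: the verifier issues a fresh random challenge that must be bound into the response, so a response recorded in an earlier session no longer verifies. This is a generic nonce-plus-HMAC sketch, not Humanode’s actual protocol; the shared key and function names are assumptions.

```python
import hmac, hashlib, os

# Generic nonce-based challenge-response sketch (not Humanode's actual protocol).
# A captured response cannot be replayed, because it is bound to a single-use challenge.

def issue_challenge() -> bytes:
    return os.urandom(32)  # fresh nonce for every authentication attempt

def respond(shared_key: bytes, challenge: bytes, payload: bytes) -> bytes:
    # The client binds its biometric payload to the verifier's challenge.
    return hmac.new(shared_key, challenge + payload, hashlib.sha256).digest()

def verify(shared_key: bytes, challenge: bytes, payload: bytes, response: bytes) -> bool:
    expected = hmac.new(shared_key, challenge + payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

key = os.urandom(32)
challenge = issue_challenge()
response = respond(key, challenge, b"feature-vector-bytes")
assert verify(key, challenge, b"feature-vector-bytes", response)

# Replaying the same response against a new challenge fails:
assert not verify(key, issue_challenge(), b"feature-vector-bytes", response)
```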

Attack on the channel between the database and the matcher

The attacker intrudes into the channel to modify the existing data or to replay old data. Traditionally, this attack can be prevented by the same solutions as a replay attack: challenge-response systems, watermarking, and steganographic techniques (Bolle et al. 2002).

Attack on the database

The attacker can break into the database where the templates are stored to compromise the biometric characteristics of a user and replace, modify, or delete the existing templates.

There are two common template protection schemes to counter this attack:

  • Cancelable biometrics, in which the intruder cannot get access to the original biometric pattern from the database because a distorted version is stored instead of the original data.

  • Cryptobiometrics, in which all data is encrypted before being sent to the database and the original template is deleted. It is therefore quite difficult for the attacker to steal the original template, as it exists only for a few seconds on the user’s device.

The Humanode network uses the second type.
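The sketch below illustrates the intuition of the second scheme with ordinary symmetric encryption as a stand-in (the `cryptography` package’s Fernet API): the template is encrypted on the user’s device and the plaintext is discarded, so the database only ever stores ciphertext. Humanode’s actual scheme performs matching over encrypted data, which this sketch does not attempt to reproduce.

```python
from cryptography.fernet import Fernet  # symmetric encryption as a stand-in, for illustration

# Encrypt the template on the user's device and discard the plaintext, so the
# database never holds a usable biometric template.
key = Fernet.generate_key()              # in practice, key material never leaves the user
cipher = Fernet(key)

plaintext_template = b"feature-vector-extracted-on-device"
encrypted_template = cipher.encrypt(plaintext_template)
del plaintext_template                   # the original exists only briefly on the device

database = {"user-123": encrypted_template}  # only ciphertext is ever stored
```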

Override the final decision

As the software application may have bugs, an intruder can override the actual decision made by the matcher.

Humanode ensures that nobody but the protocol knows the actual result of the matching decision before that decision is executed. This attack can also be prevented using soft biometrics (Kamaldeep, 2011).

Override feature extractor

This attack involves overriding the feature extractor to produce predetermined feature sets: the feature extractor is substituted and controlled remotely to intercept the biometric system.

In the Humanode system, feature extraction takes place on the client’s device. The human node encrypts the embedded feature vector using the public key and obtains the encrypted feature vector. Furthermore, it provides a zero-knowledge proof (ZKP) that the feature vector was extracted through the system’s feature extraction process only; as a result, the attacker is unable to override it.

Override matcher

Overriding the matcher to output high scores compromises system security. In this way, the intruder can control the matching score and generate a high matching score to authenticate the impostor.

In Humanode, the matching score is computed over an encrypted feature vector. Moreover, the matcher is required to provide a proof of correctness for the matched score. As a result, the attacker cannot override the matcher to generate a high matching score for a target feature vector.
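For intuition, the sketch below shows the plaintext analogue of what the matcher computes: a similarity score between the enrolled and probe feature vectors, followed by a threshold decision. In Humanode this comparison runs over encrypted vectors and is accompanied by a proof of correctness, neither of which the sketch reproduces; the cosine metric and threshold value are assumptions for illustration.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

THRESHOLD = 0.85  # illustrative; real systems tune this against FAR/FRR targets

def match(enrolled: list[float], probe: list[float]) -> bool:
    # Humanode computes this score over *encrypted* vectors and proves the
    # computation was performed correctly; this is only the plaintext analogue.
    return cosine_similarity(enrolled, probe) >= THRESHOLD

print(match([0.1, 0.9, 0.3], [0.12, 0.88, 0.31]))  # True for near-identical vectors
```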

Synthesized feature vector

The route from the feature extractor to the matcher is intercepted to steal the feature vector of an authorized user. Starting from the legitimate feature vector, the attacker then iteratively perturbs synthetic data, retaining only those changes that improve the score, until an acceptable match score is generated and the biometric system accepts the false data. The legitimate feature sets are later replaced with synthetic feature sets to bypass the matcher (Bolle et al. 2002; Jain et al. 2008; Kamaldeep, 2011).
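The hill-climbing loop described above can be sketched as follows (illustrative only): the attacker starts from random data, perturbs it, and keeps only perturbations that raise the matcher’s reported score. Note that the attack presupposes that the attacker can submit plaintext candidates and observe their scores.

```python
import random

def hill_climb(score_fn, dim: int = 128, target: float = 0.85, steps: int = 20_000):
    # Start from random noise and keep any perturbation that raises the reported
    # score, until the acceptance threshold is crossed or the step budget runs out.
    candidate = [random.uniform(-1.0, 1.0) for _ in range(dim)]
    best = score_fn(candidate)
    for _ in range(steps):
        trial = candidate.copy()
        trial[random.randrange(dim)] += random.uniform(-0.05, 0.05)
        trial_score = score_fn(trial)
        if trial_score > best:
            candidate, best = trial, trial_score
        if best >= target:
            break
    return candidate, best

# Usage against the plaintext matcher sketched earlier:
#   forged, score = hill_climb(lambda v: cosine_similarity(enrolled, v))
```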

In the Humanode system, there is a private channel between the feature extractor and the matcher, and the feature vector is always kept in encrypted form, never available to the attacker in plain form. Therefore, these kinds of attacks are not possible.

Reconstruction Attack

Recently, Mai G. et al. (Mai G. et al. 2018) proposed a neighborly de-convolutional neural network (NbNet) to reconstruct face images from their deep templates. In a distributed P2P network, a node can have access to a biometric template database and use NbNet to reconstruct a corresponding 2D or 3D mask that passes verification with very high probability.

Robust liveness detection prevents the use of a reconstructed 2D or 3D mask, but it does not protect the privacy of the corresponding user. To protect privacy, there are several solutions based on user-specific randomness in deep networks and user-specific subject keys. Along with robust liveness detection, Humanode stores all biometric templates in encrypted form, so they are never available to the attacker in plain form.

Table 6. Attacks on biometric systems and their possible solutions:

Neurosignature

With the evolution of neural implants, it became possible to convert the neuroactivity of the brain into electronic signals that can be comprehended by modern computers. Since the 1960s, the neurotech field has moved from simple electroencephalography (EEG) recordings to real brain-computer communication and the creation of sophisticated BCI-controlled applications. Since the late 2010s, large companies have begun to actively pursue brain-computer interface (BCI) development, rapidly approaching its adoption. In 2014, Brainlab developed a prototype that allows a Google Glass user to interface with and give commands to the device using evoked brain responses rather than swipes or voice commands. In 2015, Afergan et al. developed an fNIRS-based BCI using an OST-HMD called Phylter, a control system connected to Google Glass that helped prevent the user from being flooded by notifications. In 2017, Facebook announced its BCI program, outlining its goal to build a non-invasive, wearable device that lets people type by simply imagining themselves talking. In March 2020, the company published the results of a study that set a new benchmark for decoding speech directly from brain activity. Companies like BrainGate and Neuralink[3] have manufactured working prototypes of invasive and noninvasive brain-computer interfaces that build a digital link between brains and computers. Even with the immeasurable complexity of neurons and the entanglement of somas, axons, and dendrites, the above-mentioned projects were able to create devices that not only stimulate and capture the output but also distinguish patterns of signals from one another.

A person will be able to use their own mental state, conscious state, or simply signals from the motor cortex to initiate node deployment and verify transactions without compromising the data itself.

Compared to any other biometric solution, including direct DNA screening and other biochemical solutions, neurosignature biometrics can be considered the most secure form of biometric processing, as it is practically impossible to forge a copycat or to emulate the prover in an attempt to bypass the system.

Figure 15. Block diagram of a BCI system:

Table 7. Summary of signal acquisition method (Mudgal et al., 2020).

While BCI hardware enables the retrieval of brain signals, BCI software is required to analyze these signals, produce output, and provide feedback.

Moreover, neurotech continues to evolve, as demonstrated by hybrid BCIs (hBCIs): combinations of BCIs with a wide range of assistive devices (ADs) (G. Pfurtscheller, 2010; G.R. Müller-Putz et al. 2015; I. Choi et al., 2017; Yang et al., 2020). These hBCI systems are categorized according to the type of signals combined and the combination technique (simultaneous/sequential). Electroencephalography (EEG), due to its ease of use and fast temporal resolution, is most widely combined with other brain and non-brain signal acquisition modalities, such as functional near-infrared spectroscopy (fNIRS), electromyography (EMG), electrooculography (EOG), and eye-tracking technology (K.-S. Hong & M. Jawad Khan, 2017). In general, the essential goal of combining signals is to increase detection accuracy, enhance system speed, improve user experience, and overcome the disadvantages of BCI systems (S. Sadeghi & A. Maleki, 2018). With hBCIs, Humanode can achieve unprecedented multi-modality based on internal biometric processing protocols.

There are many different ways to collect data on brain activity, but more importantly, there have been many software layers already created by different organizations and communities such as OpenBCI, BCI2000, NFBLab, PsychoPy, rtsBCI, OpenVibe, OpenEEG, BF++, etc.

These types of software can be divided into three different groups:

  1. Software that provides a stack of protocols that try to precisely read, analyze, and store brain activity data through different types of signals (EEGs, fMRIs, invasive implants, etc.);

  2. Software that converts brain activity data into commands for different computer languages and systems; and

  3. Supplementary software that converts received brain activity data into different types of variables for research and development purposes.

Figure 16. Factors that influence adoption:

Researchers are making great strides towards resolving all of the above-mentioned challenges. The majority of investigators believe in BCI mass adoption in the coming years. Recent research examines the possibility of using BCI in everyday-life settings in different contexts (Blum et al., 2012; Hu et al., 2019; Park et al., 2019; D. Friedman, 2020; Benitez-Andonegui et al., 2020). There is a relevant body of work addressing not only technology improvements (Liberati et al., 2015) but also the fact that BCI design and development should become more user-friendly to achieve successful mainstream applications (Kübler et al., 2014; Nijboer, 2015).

Humanode’s approach to identity attack prevention

The amount of research exploring the use of distributed ledger technology to launch new types of identity management systems has lately increased (Baars, 2016; Jacobovitz, 2016; Tobin & Reed, 2016; Dunphy & Petitcolas, 2018) along with studies combining these systems with biometrics (Hammudoglu et al., 2017; Garcia, 2018; Othman & Callahan, 2018).

Decentralized Identity Trilemma (Maciek, 2019)

Alongside maintaining self-sovereignty (anybody can create and control an identity without the involvement of a centralized third party) and being privacy-preserving (anybody can acquire and utilize an ID without revealing personally identifiable information (PII)), the system also needs to achieve Sybil resistance, as the majority of large-scale peer-to-peer networks are still vulnerable to Sybil attacks. These occur when a reputation system is subverted by a considerable number of forged IDs in the network (J. R. Douceur, 2002; R. John, J. P. Cherian & J. J. Kizhakkethottam, 2015; A. M. Bhisea & S. D. Kamble, 2016; D. Siddarth, S. Ivliev, S. Siri, & P. Berman, 2020).

None of the existing solutions are privacy-preserving, self-sovereign, and Sybil-resistant at the same time (Maciek, 2019). We at Humanode propose the following solutions to break the trilemma.

Self-sovereignty

The Humanode protocol applies principles of self-sovereign identity (SSI), requiring that users be the rulers of their own ID (C. Allen, 2016). In Humanode, there is no centralized third party to control one’s ID, thus ID holders can create and fully control their identities.

Privacy-preserving

In order to meet the security requirements of protecting highly private biometric information in a truly global decentralized system run on nodes by everyday people, simply encrypting the information (no matter how strong the encryption) is not enough. We also need to consider the integrity of the information, preventing malicious actors from accessing the information and the network as a whole, and preventing Sybil attacks, deepfakes, and an endless number of other possible and potential attacks. This is where the concept of crypto-biometrics comes into play.

In order to safeguard the information while revealing only what is necessary (such as whether this is a registered user and, if so, which account they are tied to), crypto-biometrics is based on a combination of various technologies and exists at the intersection of mathematics, information security, cybersecurity, Sybil resistance, biometric technology, liveness detection, zero-knowledge proof (ZKP) technologies, encryption, and blockchain technology.

Sybil-resistance

A Sybil-proof system was best conceptualized by Vitalik Buterin as a "unique identity system" for generating tokens that prove that an identity is not part of a Sybil attack (V. Buterin, 2014; 2019). In recent years, attempts in this field have been made by blockchain-based initiatives like HumanityDAO, POAP, BrightID, Idena Network, Kleros, Duniter, etc. Nevertheless, there is still no adequate Sybil-resistant identity mechanism. In other words, in today’s digital space it remains possible for users to create multiple accounts in one system under distinct pseudonyms to vote several times, receive multiple rewards, etc.

Table 8. Comparison of Sybil attack types:

Figure 18. Main Sybil attack defense methods:

  • Graph-based methods

Graph-based methods rely on a social network’s information to represent dependencies between objects. These schemes fall into two categories (a minimal sketch of the random-walk idea follows the list):

  1. Sybil detection techniques, based on the concept of graph random walks and mixing time

  2. Sybil tolerance techniques, which limit the effects of Sybil attack edges (M. Al-Qurishi et al., 2017; A. Alharbi et al., 2019).
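For intuition on the random-walk idea behind graph-based detection (in the spirit of SybilRank), the sketch below spreads trust from known-honest seed nodes through a short power iteration over the social graph; Sybil regions attached to the honest region by only a few edges end up with little trust. The toy graph, seed set, and iteration count are illustrative assumptions.

```python
def propagate_trust(graph: dict[str, list[str]], seeds: set[str], iterations: int = 4) -> dict[str, float]:
    """Short power-iteration trust propagation (SybilRank-style intuition)."""
    trust = {node: (1.0 if node in seeds else 0.0) for node in graph}
    for _ in range(iterations):                   # a short walk; roughly O(log n) in practice
        nxt = {node: 0.0 for node in graph}
        for node, neighbors in graph.items():
            if not neighbors:
                continue
            share = trust[node] / len(neighbors)  # split a node's trust evenly along its edges
            for neighbor in neighbors:
                nxt[neighbor] += share
        trust = nxt
    # Degree-normalized trust is the ranking signal: low values suggest Sybil nodes.
    return {node: trust[node] / max(len(graph[node]), 1) for node in graph}

# Toy graph: honest region (a, b, c) and a Sybil cluster (x, y, z) attached by one edge.
graph = {
    "a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "x"],
    "x": ["c", "y", "z"], "y": ["x", "z"], "z": ["x", "y"],
}
print(propagate_trust(graph, seeds={"a"}))
```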

  • Machine-learning methods

These methods fall into the following categories (a toy supervised example follows the list):

  1. Supervised, which use regression models, support vector machine (SVM) (P. Gu et al., ‎2017), and decision tree models

  2. Unsupervised, which use fuzzy logic, Markov models (K. Zhang et al., 2015), and clustering methods

  3. Semi-supervised, which use sets of data to improve the quality of learning.
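As a toy illustration of the supervised branch, the sketch below fits an SVM on hand-crafted per-account features (account age, friend count, posting rate); the features, values, and labels are entirely synthetic and only show the shape of such a pipeline.

```python
from sklearn.svm import SVC

# Synthetic per-account features: [account_age_days, friend_count, posts_per_day]
X = [
    [900, 250, 3.0], [1500, 400, 1.2], [700, 120, 2.1],   # labeled honest (0)
    [3, 900, 40.0], [5, 1200, 55.0], [2, 700, 35.0],      # labeled Sybil (1)
]
y = [0, 0, 0, 1, 1, 1]

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(clf.predict([[4, 1000, 50.0], [1200, 300, 2.5]]))   # expected: [1 0] (Sybil, honest)
```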

  • Manual verification methods

This scheme relies on users themselves to increase security through verification; for example, it may include asking users to report malicious content in the network.

  • Prevention methods

Prevention schemes refer to traditional approaches such as using trusted authorities or resource testing. They may also include the use of crypto puzzles (CAPTCHA) for users to access systems and verifying user IDs by sending a verification SMS message to the user’s phone.

Humanode uses various techniques for preventing Sybil attacks:

Table 9. Main techniques for preventing Sybil attacks

From the very start, Humanode uses the aforementioned prevention methods to successfully counter Sybil attacks. Also, economic costs are imposed as barriers to becoming a human node, making attacks more expensive and less feasible.

In order to create a Sybil-resistant system for human identification, Humanode ensures that every identity is (see the enrollment sketch after this list):

  • Unique (two individuals should not have the same ID)

  • Singular (one individual should not be able to obtain more than one ID; F. Wang & P. De Filippi, 2020)

  • Existing (the person behind the ID is alive and well)
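To illustrate how the ‘Unique’ and ‘Singular’ requirements can be enforced at enrollment, the sketch below performs a 1:N search of the applicant’s feature vector against already-enrolled identities and rejects enrollment on a match. In Humanode this comparison happens over encrypted vectors; the similarity threshold and helper names here are assumptions for illustration.

```python
THRESHOLD = 0.85  # assumed similarity threshold, for illustration

def dot(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

def enroll(registry: dict[str, list[float]], new_id: str, probe: list[float]) -> bool:
    """1:N deduplication: reject if the probe matches any already-enrolled identity."""
    for enrolled_vector in registry.values():
        if dot(enrolled_vector, probe) >= THRESHOLD:  # vectors assumed L2-normalized
            return False                              # same human already enrolled: singularity violated
    registry[new_id] = probe
    return True
```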

To validate users’ identities and create a Sybil-proof system, Humanode introduces a verification mechanism in which the identity is derived from one or more unique features of the human body, implemented with premier biometric solutions such as:

  • Multimodal biometric processing with liveness detection and periodical verification of identity

  • Biochemical biometrics—direct DNA screening, and neurosignature biometrics through BCI

***

Thus, in a nutshell, Humanode’s identity attack prevention scheme solves Maciek’s ‘Decentralized Identity Trilemma,’ as the system applies the self-sovereignty, privacy-preservation, and Sybil-resistance principles, as illustrated below:

Figure 19. Humanode’s approach to identity attack prevention
