Artificial prostheses are important tools that help amputees fully or partially regain the functions of an able-bodied human limb. Compared with traditional prostheses, which are purely cosmetic or offer only a feedforward control channel, perception and feedback functions are an important guarantee of a prosthesis's normal use and the wearer's safety; the relevant perceptual information includes position, force, texture, roughness, temperature, and so on. This paper summarizes recent developments and the current status of perception and feedback technology for artificial prostheses from two aspects: how perception signals are recognized and how they are fed back. For signal recognition, the sensors commonly used to acquire perceptual information and their current applications in prostheses are reviewed. The feedback methods are then summarized and analyzed in terms of force-feedback stimulation, invasive and non-invasive electrical stimulation, and vibration stimulation. Finally, open problems in the perception and feedback technology of artificial prostheses are discussed, and their development trends are projected.
Because they provide a more natural and flexible mode of control, brain-computer interface systems based on motor imagery electroencephalogram (EEG) have been widely used in human-machine interaction. However, owing to the low signal-to-noise ratio and poor spatial resolution of EEG signals, decoding accuracy remains relatively low. To address this problem, a novel convolutional neural network based on temporal-spatial feature learning (TSCNN) was proposed for motor imagery EEG decoding. First, for EEG signals preprocessed by band-pass filtering, a temporal-wise convolution layer and a spatial-wise convolution layer were designed to construct the temporal-spatial features of motor imagery EEG. Then, a two-layer two-dimensional convolutional structure was adopted to learn abstract features from the raw temporal-spatial features. Finally, a fully connected layer followed by a softmax layer performed the decoding task on the extracted abstract features. On an open dataset, the proposed method achieved an average decoding accuracy of 80.09%, approximately 13.75% and 10.99% higher than the state-of-the-art common spatial pattern (CSP) + support vector machine (SVM) and filter bank CSP (FBCSP) + SVM recognition methods, respectively. This demonstrates that the proposed method can significantly improve the reliability of motor imagery EEG decoding.
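The temporal-then-spatial feature construction described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the channel count, time length, kernel sizes, and filter counts are assumed for the example, and the learned weights are replaced by random arrays. A temporal-wise kernel slides along the time axis of each channel; a spatial-wise kernel then spans all channels, collapsing the spatial axis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed shapes for illustration: 22 EEG channels, 500 time samples per trial
n_channels, n_times = 22, 500
x = rng.standard_normal((n_channels, n_times))  # one band-pass filtered trial

# Temporal-wise convolution: F_t one-dimensional filters slide along time,
# applied identically to every channel
F_t, k_t = 8, 25
w_t = rng.standard_normal((F_t, k_t)) * 0.1
temporal = np.stack([
    np.stack([np.convolve(x[c], w_t[f], mode="valid") for c in range(n_channels)])
    for f in range(F_t)
])  # shape: (F_t, n_channels, n_times - k_t + 1)

# Spatial-wise convolution: each of F_s kernels spans all channels (and all
# temporal filters), so the channel axis is collapsed into spatial features
F_s = 16
w_s = rng.standard_normal((F_s, F_t, n_channels)) * 0.1
spatial = np.einsum("sfc,fct->st", w_s, temporal)  # shape: (F_s, n_times - k_t + 1)
```

The resulting temporal-spatial feature map would then be passed to the two-layer 2D convolutional structure and the fully connected/softmax classifier described in the abstract.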
Brain-computer interfaces (BCI) based on motor imagery electroencephalography (EEG) show great potential in neurorehabilitation owing to their non-invasive nature and ease of use. However, motor imagery EEG signals have a low signal-to-noise ratio and limited spatiotemporal resolution, which leads to low decoding recognition rates with traditional neural networks. To address this, this paper proposed a three-dimensional (3D) convolutional neural network (CNN) method that learns spatial-frequency feature maps: the Welch method was used to calculate the power spectrum of EEG frequency bands, converting time-series EEG into brain topographical maps carrying spatial-frequency information, and a 3D network with one-dimensional and two-dimensional convolutional layers was designed to learn these features effectively. Comparative experiments showed that the average decoding recognition rate reached 86.89%, outperforming traditional methods and validating the effectiveness of this approach for motor imagery EEG decoding.
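The first step of the pipeline above, turning a time-series EEG channel into band powers via the Welch method, can be sketched as below. This is a hedged illustration, not the paper's code: the sampling rate, segment length, and band edges are assumed, and a single synthetic channel with a 10 Hz (mu-band) oscillation stands in for real EEG. In the full method, the per-channel band powers would be placed at the corresponding electrode positions to form the spatial-frequency topographical map.

```python
import numpy as np

def welch_psd(x, fs, nperseg=128, noverlap=64):
    """Minimal Welch estimate: average windowed periodograms over overlapping segments."""
    step = nperseg - noverlap
    win = np.hanning(nperseg)
    scale = fs * (win ** 2).sum()
    segs = [x[i:i + nperseg] for i in range(0, len(x) - nperseg + 1, step)]
    psds = [np.abs(np.fft.rfft(win * (s - s.mean()))) ** 2 / scale for s in segs]
    freqs = np.fft.rfftfreq(nperseg, 1 / fs)
    return freqs, np.mean(psds, axis=0)

# Synthetic trial: 10 Hz oscillation plus noise, 4 s at an assumed fs = 250 Hz
fs = 250
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

freqs, psd = welch_psd(x, fs)
# Band powers (band edges assumed): these values would populate the topographic map
mu_power = psd[(freqs >= 8) & (freqs <= 13)].sum()
beta_power = psd[(freqs >= 13) & (freqs <= 30)].sum()
```

With the dominant 10 Hz component, the mu-band power exceeds the beta-band power for this synthetic channel, which is the kind of spatial-frequency contrast the 3D CNN is designed to exploit.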