Chinese Physics Letters, 2020, Vol. 37, No. 7, Article code 078501
Voltage-Driven Adaptive Spintronic Neuron for Energy-Efficient Neuromorphic Computing
Ya-Bo Chen (陈亚博)1, Xiao-Kuo Yang (杨晓阔)1*, Tao Yan (闫涛)2, Bo Wei (危波)1, Huan-Qing Cui (崔焕卿)1, Cheng Li (李成)3, Jia-Hao Liu (刘嘉豪)3, Ming-Xu Song (宋明旭)1, and Li Cai (蔡理)1
Affiliations
1Department of Basic Sciences, Air Force Engineering University, Xi'an 710051, China
2School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150006, China
3College of Computer, National University of Defense Technology, Changsha 410005, China
Received 17 April 2020; accepted 25 April 2020; published online 21 June 2020
Supported by the National Natural Science Foundation of China under Grant Nos. 61804184 and 11975311, the Natural Science Basic Research Plan in Shaanxi Province of China under Grant No. 2020JQ470, and the Foundation of Independent Scientific Research under Grant Nos. YNJC19070501, YNJC19070502, and YNJC19070504.
*Corresponding author. Email: yangxk0123@163.com
Citation Text: Chen Y B, Yang X K, Yan T, Wei B and Cui H Q et al. 2020 Chin. Phys. Lett. 37 078501
Abstract: A spintronic neuron device based on voltage-induced strain is proposed. Stochastic switching behavior, which mimics the firing behavior of biological neurons, is obtained by using two voltage signals to control the in-plane magnetization of the free layer of a magneto-tunneling junction. One voltage signal serves as the input, and the other tunes the (sigmoid-like) activation function of the spin neuron. This voltage-driven tunable spin neuron therefore dispenses with energy-inefficient Oersted fields and spin-polarized currents. Moreover, a voltage-controlled reading operation is presented, which changes the activation function from sigmoid-like to ReLU-like. A three-layer artificial neural network based on the voltage-driven spin neurons is constructed to recognize handwritten digits from the MNIST dataset, achieving 97.75% recognition accuracy. These results indicate that the voltage-driven adaptive spintronic neuron has the potential to realize energy-efficient, well-adapted neuromorphic computing.
DOI:10.1088/0256-307X/37/7/078501 PACS:85.75.-d, 85.80.Jm, 87.18.Sn © 2020 Chinese Physics Society
Article Text: Neuromorphic computing refers to the use of hardware to emulate the functionality of biological neurons and synapses.[1–3] It can overcome the limitation that traditional von Neumann machines cannot efficiently perform complex tasks such as real-time image recognition.[4] An artificial neuron with activation function $y=F(x)$ is at the heart of any artificial neural network (ANN) for neuromorphic computing.[5] However, neuromorphic computing based on CMOS technology remains inefficient in both energy consumption and circuit area.[6] Recently, artificial neurons based on spintronic devices have combined the high working speed, low power consumption, and high integration density of spintronics with the processing capabilities of biological neurons.[7,8] To date, various spin neuron schemes have been proposed and experimentally verified, including spin-torque nano-oscillators,[9,10] magneto-tunneling junctions (MTJs) with stochastic magnetization switching in the free layer,[11–16] spintronic memristors,[17] and spin-torque diodes (STDs).[18] The energy dissipation of a spintronic neuron is dominated by the clock-generating system. The driving methods of the spin neurons mentioned above rely on current-induced spin-orbit torque (SOT), spin-transfer torque (STT), or magnetic fields. Current-controlled magnetization has been shown to be energy-inefficient owing to Joule heating.[19] The spin neuron with a step activation function proposed in Ref. [20] dissipates orders of magnitude less energy than a current-driven spin neuron with a sigmoid activation function. However, the derivative of the step function is zero everywhere except at $x=0$, which hinders parameter updates during training. Therefore, that spin neuron cannot be applied to large-scale neural networks.
A spin neuron driven by a mixed-mode (voltage $+$ STT) clock was proposed in 2019;[21] it works like a current-driven spin neuron but with lower energy dissipation. Although that design greatly reduces energy consumption, it still requires an energy-inefficient spin-polarized current. It is therefore necessary to design a lower-power clocking scheme to fire the stochastic switching of an MTJ and thereby realize low-power spin neurons. In this Letter, we propose an energy-efficient voltage-driven spin neuron with a sigmoid-like (S-type) activation function, implemented with an MTJ whose two discrete resistance values correspond to the two magnetization orientations of the free layer. It is driven by the two-step voltage clocking scheme we designed. In theory, the energy dissipation per neuron in our design is much lower than in previous designs under the same specifications. Moreover, the control voltage of the writing operation can tune the S-type activation function of the spin neuron, and the activation function can be changed to a ReLU-like function by controlling the voltage of the reading operation, which improves the scalability and adaptability of the spin neuron. Furthermore, an ANN based on voltage-driven spin neurons is designed to perform handwritten digit recognition. The voltage-driven stochastic magnetization switching scheme of the free layer at room temperature is shown in Fig. 1(a). In our design, the magnetic layer and two electrode pairs are fabricated on a piezoelectric layer. The Terfenol-D magnetic layer has a damping coefficient $\alpha$ of 0.1 and a saturation magnetization $M_{\rm s}$ of $8\times 10^{5}$ A/m,[22] and its size is 160 nm $\times$ 120 nm $\times$ 12 nm.
A PMN-PT (Pb(Mg$_{1/3}$Nb$_{2/3}$)O$_{3}$-PbTiO$_{3}$) layer serves as the piezoelectric layer (thickness $t_{\rm p} = 400$ nm), with $d_{31}=-3000$ pm/V and $d_{32}=1000$ pm/V.[23] Voltages of $U_{1}=U_{2}=50$ mV generate about 30 MPa of stress according to $\sigma =Y(d_{31}-d_{32})U/[(1+v)t_{\rm p}]$,[11] where $Y=200$ GPa is Young's modulus and $v =0.3$ is Poisson's ratio. The magnetization dynamics of the magnetic layer can be obtained by solving the Landau–Lifshitz–Gilbert equation[24] $$ \frac{{d}\boldsymbol{{M}}}{{d}t}=-\gamma \boldsymbol{{M}}\times \boldsymbol{{H}}_{\rm eff} -\frac{\alpha \gamma }{M_{\rm s} }[\boldsymbol{{M}}\times \left({\boldsymbol{{M}}\times \boldsymbol{{H}}_{\rm eff} } \right)],~~ \tag {1} $$ where $\alpha$ is the damping coefficient, $M_{\rm s}$ is the saturation magnetization, $\gamma$ is the gyromagnetic ratio, and ${\boldsymbol H}_{\rm eff}$ is the effective field:[25] $$ {{H}}_{\rm eff} =-\frac{1}{\mu_{0} V}\frac{{d}E_{{\rm total}} }{{d}{{M}}},~~ \tag {2} $$ where $\mu_{0} = 4\pi \times 10^{-7}$ H/m is the vacuum permeability and $V$ is the volume of a single nanomagnet. The total energy $E_{\rm total}$ includes the shape anisotropy energy, the stress anisotropy energy, and thermal fluctuations. The shape anisotropy energy is[26] $$\begin{alignat}{1} E_{\rm shape~anisotropy}\,& =\frac{\mu_{0} M_{\rm s}^{2}V}{2}[N_{dx} ({\cos}\theta {\sin}\phi)^{2}\\ +&N_{dy} ({\sin}\theta {\sin}\phi)^{2}+N_{dz} ({\cos}\phi)^{2}],~~ \tag {3} \end{alignat} $$ where $\theta$ is the polar angle and $\phi$ is the azimuth angle; $N_{dx}$, $N_{dy}$, and $N_{dz}$ are the demagnetization factors of the elliptical nanomagnet, which can be calculated following Ref. [27]. The stress anisotropy energy is given by[28] $$ E_{\rm stress~anisotropy} =-\frac{3}{2}\lambda_{\rm s} \sigma V\sin^{2}\theta \sin^{2}\phi .~~ \tag {4} $$ Substituting Eq. (4) into Eq.
(2), we can obtain the stress anisotropy fields generated by voltages $V_{1}$ and $V_{2}$ as follows: $$\begin{align} &H_{\rm stress1} (t)=\frac{3\lambda_{\rm s} }{\mu_{0} M_{\rm s} }\sigma (t)\sin \theta \sin (\phi -\beta),~~ \tag {5} \end{align} $$ $$\begin{align} &H_{\rm stress2} (t)=\frac{3\lambda_{\rm s} }{\mu_{0} M_{\rm s} }\sigma (t)\sin \theta \sin (\phi +\beta),~~ \tag {6} \end{align} $$ where $3\lambda_{\rm s}/2=6\times 10^{-4}$ is the saturation magnetostriction, $\sigma$ is the stress applied to the magnetic layer, and $\beta$ is the angle between the line joining the centers of the electrodes in one pair and the easy axis of the nanomagnet in Fig. 1(a). The field $h(t)$ is related to the thermal fluctuation energy $E_{\rm thermal~fluctuations}$ and can be expressed as[29] $$ h(t)=\sqrt {\frac{2\alpha kTf}{\gamma \mu_{0} M_{\rm s} V}} G_{(0,1)} (t),~~ \tag {7} $$ where $k=1.38\times 10^{-23}$ J/K is the Boltzmann constant, $T=300$ K is room temperature, $f$ is the frequency of thermal noise oscillations at room temperature, and $G_{(0,1)}(t)$ is the standard Gaussian distribution. The two-step switching clocking scheme is shown in Fig. 1(b). A voltage applied across the piezoelectric layer generates a mechanical strain, which is mostly transferred to the magnetostrictive layer by elastic coupling and produces a stress. This stress causes the magnetization of the magnetostrictive layer (nanomagnet) to rotate, a mechanism whose feasibility has been demonstrated experimentally.[22,30] The physical essence of this clock is the inverse piezoelectric effect of the piezoelectric material combined with the inverse magnetostrictive effect of the magnetostrictive material. In the first step, a voltage $V_{1}$ is applied to one electrode pair (blue), which rotates the magnetization away from the long axis, as shown in Figs. 1(c) and 1(d).
In the second step, $V_{1}$ is withdrawn and a voltage $V_{2}$ is simultaneously applied to the other electrode pair (yellow). The magnetization then stochastically aligns along either direction of the easy axis, as shown in Figs. 1(c) and 1(d). Note that the magnetization does not switch stochastically when only one pair of electrodes is activated; in that case it returns to the initial state after the voltage is withdrawn.
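As a rough numerical sketch of the quantities above, the script below plugs the quoted material constants into the stress expression $\sigma=Y(d_{31}-d_{32})U/[(1+v)t_{\rm p}]$, converts a 30 MPa stress into the anisotropy-field scale of Eqs. (5) and (6), and relaxes a single macrospin under Eq. (1) with a plain Euler integrator. This is an illustration only: the shape-anisotropy and thermal terms of Eqs. (3) and (7) are omitted, the field direction is a stand-in, and the time step and duration are our own choices, not the paper's simulation settings.

```python
import numpy as np

# Material constants quoted in the text
Y = 200e9                       # Young's modulus (Pa)
d31, d32 = -3000e-12, 1000e-12  # piezoelectric coefficients (m/V)
v, t_p = 0.3, 400e-9            # Poisson's ratio, PMN-PT thickness (m)
MS = 8e5                        # saturation magnetization of Terfenol-D (A/m)
LAM3 = 2 * 6e-4                 # 3*lambda_s, from (3/2)*lambda_s = 6e-4
MU0 = 4e-7 * np.pi              # vacuum permeability (H/m)
GAMMA = 2.21e5                  # gyromagnetic ratio (m A^-1 s^-1)
ALPHA = 0.1                     # Gilbert damping of Terfenol-D

def stress(U):
    """Stress from a gate voltage U: sigma = Y(d31-d32)U/[(1+v)t_p]."""
    return Y * (d31 - d32) * U / ((1 + v) * t_p)

def h_stress(sigma):
    """Magnitude scale of the stress anisotropy field, Eqs. (5)-(6)."""
    return LAM3 * abs(sigma) / (MU0 * MS)

def llg_step(m, h, dt):
    """One Euler step of the normalized LLG equation, Eq. (1)."""
    dm = -GAMMA * np.cross(m, h) - ALPHA * GAMMA * np.cross(m, np.cross(m, h))
    m = m + dm * dt
    return m / np.linalg.norm(m)    # keep |m| = 1

sigma = stress(50e-3)               # tens of MPa for U = 50 mV
h_field = np.array([h_stress(30e6), 0.0, 0.0])  # ~3.6e4 A/m for 30 MPa, along +x
m = np.array([0.0, 1.0, 0.0])       # start along the in-plane hard axis
for _ in range(10000):              # 10 ns of damped precession, dt = 1 ps
    m = llg_step(m, h_field, 1e-12)
# the damping term has relaxed m toward the field direction (+x)
```

The precession term alone would only circle the magnetization about the field; it is the $\alpha$-proportional damping term that pulls it into alignment, which is why the switching time scales with $\alpha\gamma H$.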
Fig. 1. (a) Voltage-driven stochastic magnetization switching scheme, where $\theta$ is the polar angle, $\phi$ is the azimuth angle, and $\beta$ is the angle between the line joining the centers of the electrodes in one pair and the easy axis of the nanomagnet. (b) Two-step switching clock consisting of two voltage signals: $V_{1}$ is the control voltage and $V_{2}$ is the input voltage $V_{\rm in}$. After applying the same voltage clocking, the free layer stochastically exhibits either (c) 180$^{\circ}$ magnetization switching, or (d) magnetization remaining in the initial state.
Since the free layer has in-plane magnetization and voltage-driven 180$^{\circ}$ in-plane magnetization switching changes the resistance of the MTJ, we study the switching probability (i.e., the probability of a resistance change) under various input voltages. To obtain the 180$^{\circ}$ magnetization switching probability at a given input voltage $V_{2}$, 1000 simulations were carried out in the presence of thermal noise. The 180$^{\circ}$ switching probability is $P =N_{\rm 180^{\circ}\,switching}/1000$, where $N_{\rm 180^{\circ}\,switching}$ is the number of simulations achieving 180$^{\circ}$ magnetization switching. By sweeping the input voltage $V_{2}$ and repeating the measurement, we obtain the relationship between $V_{2}$ and the 180$^{\circ}$ switching probability, which constitutes the activation characteristic of the free layer, as shown in Fig. 2(a). The switching probability increases with increasing $V_{2}$ and finally saturates. The input-output relationship (activation function) of the device is fitted with a shifted sigmoid-like function $F(x)=1/\{1+ \exp[-a(x-b)]\}$. With slope $a=1.33$ and offset $b=12.71$, the fitting function (blue line) matches the probability points (orange dots), as shown in Fig. 2(a). The fitted function is therefore taken as the activation function of the device. To assess the accuracy of the data, we repeated the above simulation for all probability points in Fig. 2(a) to obtain their standard deviations, as shown in Fig. 2(b). We then randomly selected 10 groups of data, and the fitting functions of all 10 groups are consistent with the one obtained in Fig. 2(a). These results indicate that the extracted relationship between switching probability and input voltage $V_{2}$ is reliable.
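The statistics of this extraction can be mimicked in a few lines. Below, each 1000-run probability estimate is replaced by Bernoulli sampling from the fitted $F(x)$ itself (a stand-in for the actual LLG simulations; the voltage range and seed are our own illustrative choices), showing that 1000 trials per voltage pin the curve down to within binomial sampling error of roughly 1.6% at $P=0.5$.

```python
import numpy as np

rng = np.random.default_rng(0)

def activation(v2, a=1.33, b=12.71):
    """Fitted sigmoid-like activation F(x) = 1/{1 + exp[-a(x - b)]}."""
    return 1.0 / (1.0 + np.exp(-a * (v2 - b)))

voltages = np.linspace(5.0, 20.0, 16)  # sweep of V2 (mV), illustrative range
trials = 1000                          # 1000 noisy runs per voltage, as in the text
estimates = np.array([rng.binomial(trials, activation(v)) / trials
                      for v in voltages])
# estimates track the sigmoid to within sqrt(p(1-p)/1000) sampling error
```

The half-switching point, where $F(b)=0.5$, sits exactly at $V_{2}=b=12.71$ mV, which is why $b$ is read off as the offset of the fitted curve.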
Fig. 2. (a) Switching probability under various input voltages ($V_{2}$) at $V_{1} = 48.5$ mV and $\beta =10^{\circ}$ (orange dots). The data are fitted by a sigmoid-like function (blue line). (b) Standard deviations of the switching probability points.
Fig. 3. Switching probability (dots) versus input voltage $V_{2}$ (a) at different control voltages $V_{1}$ and (b) at different tilt angles $\beta$, fitted by sigmoid-like functions (lines). (c) Slope $a$ and offset $b$ of the sigmoid-like function versus control voltage $V_{1}$. (d) Slope $a$ and offset $b$ of the sigmoid-like functions versus tilt angle $\beta$.
We found that the control voltage $V_{1}$ and the tilt angle $\beta$ have significant impacts on the 180$^{\circ}$ switching probability, so we further studied their effect on the activation function. In Fig. 3(a), the control voltage is swept from 32.5 mV to 64.5 mV at $\beta =10^{\circ}$. Figure 3(c) shows the specific effect of the control voltage on the activation function: at $\beta =10^{\circ}$, the offset $b$ decreases with increasing $V_{1}$, while the slope $a$ is insensitive to changes in $V_{1}$. In Fig. 3(b), the angle $\beta$ is swept from 5$^{\circ}$ to 25$^{\circ}$ at $V_{1}=48.5$ mV; the offset $b$ increases with the tilt angle $\beta$, as shown in Fig. 3(d). Biological neurons possess a mechanism for tuning their (S-type) input-output functions,[31] which allows them to implement distinct computational functions and accommodate new information as the environment changes. The voltage $V_{1}$ and the tilt angle $\beta$ can therefore be used to mimic this tuning mechanism, improving the adaptability of the voltage-driven spintronic neuron. These results indicate that voltage-driven spintronic devices can serve as adaptive neurons, which can further improve neural networks.
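In functional terms, the tuning reported in Figs. 3(c) and 3(d) amounts to moving the two parameters of $F(x)$: the offset $b$ sets the input at which the neuron fires with probability 1/2, and the slope $a$ sets how sharp that threshold is. A minimal sketch (the offset and slope values other than the fitted $a=1.33$, $b=12.71$ are illustrative, not fitted data):

```python
import math

def activation(v2, a, b):
    """Sigmoid-like activation F(x) = 1/{1 + exp[-a(x - b)]}."""
    return 1.0 / (1.0 + math.exp(-a * (v2 - b)))

# Raising V1 lowers the offset b (Fig. 3(c)); the firing threshold,
# i.e. the input giving P = 0.5, always sits exactly at v2 = b.
for b in (15.0, 12.71, 10.0):          # offsets as V1 increases (illustrative)
    assert abs(activation(b, 1.33, b) - 0.5) < 1e-12

# A larger slope a sharpens the transition around the threshold:
soft = activation(13.71, 0.5, 12.71)   # shallow curve, 1 mV above threshold
sharp = activation(13.71, 4.0, 12.71)  # steep curve, same input
assert sharp > soft
```

So a single device exposes two independent knobs, voltage ($b$) and geometry ($a$, $\beta$), which is what lets the same hardware mimic differently tuned biological neurons.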
Fig. 4. (a) Schematic of the voltage-driven tunable spin neuron. The voltage-driven magnetization switching scheme of Fig. 1(a) is applied to the spin neuron; $g_{\rm m}$ is the transconductance of the voltage-controlled current source. (b) Two activation function curves of voltage-driven spin neurons based on the hardware characteristics. (c) Schematic illustration of the ANN constructed to recognize handwritten digits in the MNIST database. (d) Recognition rates obtained from the neural network constructed with the voltage-driven spin neurons, compared with conventional neural networks using ReLU, Sigmoid, and Softplus activation functions.
By rotating the magnetization of the free layer from the initial state antiparallel to that of the reference layer, the MTJ resistance reaches its maximum value, so that the change of magnetic moment is mapped to a change of resistance. To further improve the adaptability of spin neurons, we propose a reading circuit that can tune the activation function type by changing the reading voltage, as shown in Fig. 4(a). The structure consists of a voltage-controlled current source (VCCS) and a voltage-induced-strain-manipulated MTJ, and can be fabricated experimentally.[32] When the control voltage $V_{\rm C}$ of the VCCS is constant, the controlled current source acts as a constant current source $I=g_{\rm m}V_{\rm C}$, and the maximum voltage across the MTJ is $V_{\rm MTJmax}=I R_{\rm MTJmax}$, where $R_{\rm MTJmax}$ is the maximum MTJ resistance. In this regime, the structure (equivalent to a DC source $+$ MTJ) has the same function as the reading circuit (DC source $+$ MTJ) of the traditional current-driven spin neuron proposed in Refs. [6,20]. To propagate information between neurons, the MTJ converts the resistance change into a voltage change. The relationship between the output voltage and the input voltage $V_{2}$ therefore satisfies the S-type activation functions in Figs. 3(a) and 3(b). Assuming $g_{\rm m}R_{\rm MTJmax}=1$ at room temperature, we have $V_{\rm MTJmax}=V_{\rm C}$. When $V_{\rm MTJmax}=V_{\rm C}=20$ mV, the activation function is $V_{\rm out}=20/\{1+ \exp[-1.33(V_{2} -12.71)]\}$, as shown by the blue dotted line in Fig. 4(b). The S-type activation function is commonly used in artificial neural networks, but it suffers from the vanishing-gradient problem. The ReLU function and the Softplus function (a smooth version of ReLU), which perform better, were therefore introduced as activation functions of artificial neurons.[33] We next discuss how the type of activation function of this neuron can be controlled.
When $V_{\rm MTJmax}=V_{\rm C}=V_{2}$, the relationship between $V_{\rm out}$ and $V_{2}$ becomes $V_{\rm out}=V_{2}/\{1+ \exp[-1.33(V_{2}-12.71)]\}$, as shown by the green line in Fig. 4(b). This has the same form as the ReLU-like (Swish) function proposed by Google in 2017, which tends to work better than the ReLU function in neural networks.[34] To verify the performance of the designed spin neuron, we construct an ANN based on spin neurons to recognize the handwritten digits in the MNIST database, as shown in Fig. 4(c). The artificial synapse structure that stores weights and biases can be implemented with memristors or spin synapses as in Refs. [35,36]. In our simulation, a three-layer neural network model was built using TensorFlow in Python. All neurons in the network use the same activation function. We did not simulate the dynamics of every spin neuron; instead, we restricted the voltage range so that the activation behavior of all spin neurons is consistent with the fitted function in Fig. 4(b). Since the traditional neurons are replaced by our spin neurons, the device input voltages serve as the network inputs and the activation function of the spin neuron serves as the activation function of the network. The output voltage of one device can be applied to the electrode pair (yellow) of the next neuron, achieving signal transmission. All images in the dataset have $28 \times 28$ pixels, so the input layer has 784 neurons, one per pixel. In practice, for a neural network based on stochastically switching MTJs (spin neurons), the 784 pixel values are gray levels in the range 0–255, which must be mapped linearly to input voltages. We further examine whether spin neurons can work as well as three kinds of traditional neurons, whose standard activation functions are the Sigmoid, Softplus, and ReLU functions.
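The two reading modes can be summarized compactly: with a fixed $V_{\rm C}$ the output saturates (S-type), while tying $V_{\rm C}$ to the input turns the same fitted curve into a self-gated $x\cdot{\rm sigmoid}$ (Swish-like) form. A sketch using the fitted constants $a=1.33$ and $b=12.71$ (function names are ours; voltages in mV):

```python
import numpy as np

A, B = 1.33, 12.71   # fitted slope and offset from Fig. 2(a)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def v_out_s_type(v2, v_c=20.0):
    """Reading with constant V_C: S-type output that saturates at V_C (mV)."""
    return v_c * sigmoid(A * (v2 - B))

def v_out_relu_like(v2):
    """Reading with V_C = V2: Swish-like x*sigmoid output (mV)."""
    return v2 * sigmoid(A * (v2 - B))

# Small inputs: both outputs are ~0.  Large inputs: the S-type output
# saturates at 20 mV, while the ReLU-like output grows linearly with V2.
```

The switch between the two modes requires no change to the MTJ itself, only to how $V_{\rm C}$ is driven, which is why the same device can stand in for either a sigmoid or a Swish neuron.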
Since changing the neurons changes the activation function of the neural network, we fed the five activation functions into the same simulation framework (a three-layer neural network). A cross-entropy loss function and standard gradient-descent optimization were used, and the back-propagation algorithm trained the network for 100 epochs at a learning rate of 0.01. The resulting recognition-rate curves of the five kinds of neurons are shown in Fig. 4(d): ReLU neuron (orange line), Softplus neuron (blue line), Sigmoid neuron (green line), spin neuron with ReLU-like activation (red line), and spin neuron with sigmoid-like activation (purple line). The recognition accuracy of the ANN based on spin neurons with the sigmoid-like activation function is 90.45%, matching that of the artificial neuron with the sigmoid function (green line). The recognition accuracy of the spintronic ANN with the ReLU-like activation function is 97.75%, consistent with those of the ReLU and Softplus functions, as shown in Fig. 4(d). These results indicate that, in theory, our energy-efficient spin neurons can replace traditional neurons. Note that the input voltage range must lie within the hardware limits and, at the same time, within the nonlinear region of the activation function. Moreover, the learning rate of the neural network affects the final recognition accuracy, so our results are obtained at a specific learning rate (0.01). In summary, we have presented a voltage-driven spin neuron model that can be used for energy-efficient neuromorphic computing. The S-type activation function of this spin neuron can be tuned by a control voltage and a tilt angle, mimicking the tuning mechanism of biological neurons.
The activation function type can be changed from sigmoid-like to ReLU-like by controlling the voltage of the reading circuit, which extends the application range of these adaptive spin neurons. An ANN constructed with voltage-driven spin neurons recognizes the handwritten digits in the MNIST database with accuracies of 90.45% (sigmoid-like function) and 97.75% (ReLU-like function) at a learning rate of 0.01. This work provides an effective route toward low-energy, well-adapted spintronic neurons for neuromorphic computing.
References
[1] Zidan M A, Strachan J P, Lu W D 2018 Nat. Electron. 1 22
[2] Lee J, Lu W D 2018 Adv. Mater. 30 1702770
[3] Hassabis D, Kumaran D, Summerfield C, Botvinick M 2017 Neuron 95 245
[4] Kurenkov A, Duttagupta S, Zhang C et al. 2019 Adv. Mater. 31 1900636
[5] Camsari K Y, Sutton B M, Datta S 2019 Appl. Phys. Rev. 6 011305
[6] Sengupta A, Roy K 2017 Appl. Phys. Rev. 4 041105
[7] Sharad M, Fan D, Roy K 2013 J. Appl. Phys. 114 234906
[8] Liu J H, Yang X K, Zhang M L et al. 2019 IEEE Electron Device Lett. 40 220
[9] Torrejon J, Riou M, Araujo F A et al. 2017 Nature 547 428
[10] Arai H, Imamura H 2017 IEEE Trans. Magn. 53 1
[11] Cai J L, Fang B, Zhang L K et al. 2019 Phys. Rev. Appl. 11 034015
[12] Kei K, Young C, Jong-Ung B et al. 2018 J. Phys. D 51 504002
[13] Mizrahi A, Hirtzlin T, Fukushima A et al. 2018 Nat. Commun. 9 1533
[14] Sengupta A, Parsa M, Han B, Roy K 2016 IEEE Trans. Electron Devices 63 2963
[15] Agudov N V, Krichigin A V, Valenti D et al. 2010 Phys. Rev. E 81 051123
[16] Suh D I, Bae G Y, Oh H S et al. 2015 J. Appl. Phys. 117 17D714
[17] Cai J L, Fang B, Wang C et al. 2017 Appl. Phys. Lett. 111 182410
[18] Cai J L, Zhang L K, Fang B et al. 2019 Appl. Phys. Lett. 114 192402
[19] Vacca M, Graziano M, Crescenzo D L et al. 2014 IEEE Trans. Nanotechnol. 13 963
[20] Biswas A K, Atulasimha J, Bandyopadhyay S 2015 Nanotechnology 26 285201
[21] Nasrin S, Drobitch J L, Bandyopadhyay S et al. 2019 IEEE Electron Device Lett. 40 345
[22] Biswas A K, Ahmad H, Atulasimha J et al. 2017 Nano Lett. 17 3478
[23] Jin T L, Hao L, Cao J W et al. 2014 Appl. Phys. Express 7 043002
[24] Fidler J and Schrefl T 2000 J. Phys. D 33 R135
[25] Liu J H, Yang X K, Cui H Q et al. 2019 J. Magn. Magn. Mater. 474 161
[26] Imre A, Csaba G, Ji L, Orlov A et al. 2006 Science 311 205
[27] Beleggia M, Graef M D, Millev Y T et al. 2005 J. Phys. D 38 3333
[28] Fashami M S, Roy K, Atulasimha J et al. 2011 Nanotechnology 22 155201
[29] Fashami M S, Al-Rashid M, Sun W Y et al. 2016 Nanotechnology 27 43LT01
[30] D'Souza N, Fashami M S, Bandyopadhyay S, Atulasimha J 2016 Nano Lett. 16 1069
[31] Lovett-Barron M, Turi G F, Kaifosh P et al. 2012 Nat. Neurosci. 15 423
[32] Chen A T, Zhao Y L, Wen Y et al. 2019 Sci. Adv. 5 eaay5141
[33] Nair V and Hinton G E 2010 Proceedings of the 27th International Conference on Machine Learning (ICML-10) (21–24 June 2010, Haifa, Israel) p 807
[34] Ramachandran P, Zoph B and Le Q V 2018 arXiv:1710.05941v2 [cs.NE]
[35] Locatelli N, Cros V, Grollier J 2014 Nat. Mater. 13 11
[36] Borders W A, Akima H, Fukami S et al. 2017 Appl. Phys. Express 10 013007