Abstract

Despite numerous research efforts worldwide, the feedback problem in hearing aids remains a challenge requiring further improvement. Existing methods employed to reduce feedback can be limited in effectiveness and can give rise to undesired side effects. Consequently, there is a clear demand for more efficient and effective solutions to feedback problems in hearing aids. This research was therefore centred on developing a functional signal processing algorithm using the Spectral Subtraction Technique (SST). Noise samples were collected from four different sources, including a hospital in South-Western Nigeria, and simulations and analyses were conducted to evaluate the performance of the SST across selected scenarios of noise types and audio recordings. The simulations were implemented in Python, drawing on the power of digital signal processing algorithms. The simulation results revealed the effectiveness of the SST in background noise reduction, with improved signal-to-noise ratio (SNR) in the different scenarios, including speech recordings with background chatter, calm pop songs with street traffic noise and public speeches with air-conditioning noise. In conclusion, the SST offers a practical approach to noise reduction in audio signals. The code offers users an effective tool for reducing noise in audio recordings and enhancing audio quality, and its simplicity and clarity make it accessible to users with varying levels of expertise in audio signal processing.

Introduction

Hearing loss is a common condition across the globe, arising primarily from exposure to excessive noise [1]. As a result, a large number of people use hearing aids. A common issue with these devices, however, is feedback, which occurs when the receiver and microphone are too close together, making the resulting noise difficult to suppress [2]. Noise reduction has become a critical issue in audio signal processing. Noise is hazardous to human health, contributing to increased blood pressure, sleep deprivation, and other physiological and psychological effects [1]. Noise in hearing aids (feedback) is even more hazardous, as hearing aids have a direct path to the ear canal. In hearing aids, feedback refers to the high-pitched squealing or whistling sounds produced when the microphone inadvertently picks up and amplifies the hearing aid's own output. Feedback can cause discomfort, annoyance, and difficulty understanding speech clearly; reducing it is therefore crucial to improving the performance and comfort of hearing aids. The impact on individuals with hearing loss must be thoroughly recognised in order to properly appreciate the importance of feedback reduction. Feedback defeats the very purpose of using hearing aids, which is to amplify sound and deliver clear information to the auditory nerves. The undesired acoustic feedback loop interferes with sound amplification, introducing additional noise and distortion. This jeopardises the wearer's ability to process speech and other sounds effectively and degrades their quality of life and communication skills [3].

Dealing with feedback in hearing aids requires a comprehensive understanding of its causes and characteristics. Several factors contribute to feedback, including the proximity of the microphone and receiver, the acoustic properties of the ear canal, and the specific design of the hearing aid. Researchers have presented an approach that incorporated adaptive filtering and approximate processing techniques, providing a novel solution in the field [4]. Subsequently, the efficacy of a digital hearing aid was evaluated to gather empirical evidence on its performance in real-world settings [5]. Spectral subtraction, the approach of focus in this research, removes the feedback signal from the input signal by estimating its spectral content. The spectral subtraction algorithm estimates the spectral profile of the feedback noise by comparing the magnitude spectra of the signal and the reference noise [6]. The success of spectral subtraction in reducing feedback depends on the accuracy of the noise estimation and the effectiveness of the noise reduction process [7], [8]. Although spectral subtraction is computationally efficient, easy to implement, and capable of enhancing sound quality and speech intelligibility, it may introduce residual artefacts or distortions when noise characteristics vary or when the noise power estimate is inaccurate [9]. Excessively aggressive noise reduction can distort the enhanced audio, so it is important to strike a balance between noise reduction and preservation of the desired signal [7], [8]. Determining appropriate parameters, such as the subtraction factor, is crucial in achieving a satisfactory trade-off between noise reduction and audio quality [Audacity]. The success of spectral subtraction in feedback reduction therefore rests on a number of factors [8].
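
As a point of reference, a common magnitude-domain formulation of spectral subtraction with over-subtraction and a spectral floor is sketched below, where |Y(k, m)| is the magnitude spectrum of the noisy input at frequency bin k and frame m, |N(k)| is the estimated noise magnitude, alpha is the subtraction factor and beta sets the minimum noise floor. This is a standard textbook form; the exact variant adopted in any particular implementation may differ.

```latex
|\hat{S}(k,m)| = \max\!\left(\, |Y(k,m)| - \alpha\,|\hat{N}(k)|,\ \beta\,|\hat{N}(k)| \,\right)
```
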
Over time, both objective and subjective metrics have been employed to assess the effectiveness of spectral subtraction. Objective metrics include signal-to-noise ratio (SNR) improvement, mean square error (MSE), and the perceptual evaluation of speech quality (PESQ), while subjective listening tests involving human listeners are also conducted to evaluate perceived audio quality and the level of residual feedback. The task of designing a feedback cancellation algorithm with specific objectives in mind has been undertaken [10]. Beyond the bio-engineering field, spectral subtraction finds application in various audio systems where feedback reduction is required [8]. It is commonly used in public address systems, teleconferencing, live performances, and recording studios, and the algorithm can be implemented in real-time systems, providing immediate feedback noise reduction. A novel noise-reduction method for hearing aids has also been introduced, incorporating two key enhancements: significantly increased temporal smoothing and a time-frequency masking-based function [11]. The application of the ensemble empirical mode decomposition (EEMD) algorithm to extract a desired respiration signal has also been explored [12]. One study reported that individuals with hearing loss experience increased cognitive load and listening effort when the SNR is poor [13]. A number of researchers have used the Acceptable Noise Level (ANL) as a subjective measure to assess the listening comfort of individuals who underwent hearing aid fitting services [7], [14]. A fully integrated system-on-chip (SoC) hearing aid, offering the potential for precise and adaptive gain control and thereby enhancing the performance and user experience of hearing aids, was also developed [15]. Likewise, a signal-denoising filtering algorithm for railway vehicle signal communication systems has been proposed [16]. These studies collectively contributed to advancing hearing aid technology and signal processing techniques [14]–[16]. Similarly, a low-power, programmable acoustic signal processor for hearing assistive devices has been introduced [17], and significant progress has been made in spiking neural networks (SNNs) by realising competitive spike-train level backpropagation (BP)-like algorithms [18]. The studies in [17] and [18] respectively contributed to the development of hearing assistive devices and spiking neural networks. Furthermore, a study has explored the use of Python modules, specifically spectral gating and the Fast Fourier Transform (FFT), for noise reduction in audio files [19]. Existing studies cover Acceptable Noise Level (ANL) and listening comfort (LC) [14], Python modules for noise reduction in audio files [19], and adaptive filtering techniques (AFT) for noise reduction [20]. Moreover, as far back as the early twenty-first century, another noise-control technique that could address the problem of background noise in hearing aids came to the limelight [21].

Overall, the existing literature provides valuable insights into noise reduction techniques for hearing aids. However, there are common limitations across these studies, including a lack of focus on non-stationary noise sources and limited evaluation of algorithm performance in realistic listening scenarios.
This study aimed to address this gap by developing a signal processing algorithm specifically tailored for noise reduction in non-stationary audio input using the Short-Time Fourier Transform (STFT) and spectral subtraction techniques. The objectives of this research were therefore: i) to adopt a spectral subtraction algorithm for signal processing; ii) to test the algorithm using simulated and real-world feedback sounds; and iii) to evaluate the algorithm's performance using the signal-to-noise ratio (SNR).

Scope of the Study

The research was restricted to developing and evaluating a signal-processing algorithm, using Python, to address feedback reduction in hearing aids. The study was centred on digital hearing aids and did not include the development of physical hearing aid devices. In addition, the research was only focused on simulated environments.

Definition of Terms

The following terms are utilised throughout this research project and are defined as follows:

  • Signal Processing: The field of study that involves the manipulation, analysis, and transformation of signals, such as audio or electrical signals, to extract information, enhance their quality, or achieve specific objectives.
  • Algorithm: A well-defined and systematic set of computational instructions or rules designed to solve a specific problem, perform a particular task, or accomplish a desired outcome.
  • Feedback Reduction: The process of minimising or eliminating the occurrence of feedback, characterised by high-pitched whistling or squealing sounds, in a hearing aid device. The objective is to enhance user comfort, speech intelligibility, and overall sound quality.
  • Hearing Aid: An electronic device designed to amplify and enhance sound for individuals with hearing loss. Hearing aids are typically worn in or behind the ear and consist of various components, including a microphone, amplifier, and speaker.
  • Digital Hearing Aid: A hearing aid that utilises digital signal processing techniques to convert incoming sound into digital signals for enhanced processing and customisation. Digital hearing aids offer greater flexibility and control in managing sound amplification.
  • Python: A versatile, high-level programming language widely used in scientific computing, data analysis, and machine learning. Python provides a rich ecosystem of libraries and tools, including NumPy and SciPy, which are utilised in this research project for signal processing and algorithm implementation.
  • Signal-to-Noise Ratio (SNR): A metric used to quantify the desired signal level compared to unwanted background noise. It measures the ratio of the power or amplitude of the signal to the power or amplitude of the noise.
  • Speech Intelligibility: The measure of how well a listener can understand or perceive speech. It is influenced by the clarity, quality, and distinctness of speech sounds.
  • Subjective Ratings: Evaluations or assessments based on personal opinions, preferences, or perceptions rather than objective measurements. Subjective ratings may be obtained through surveys, questionnaires, or subjective listening tests involving human listeners.

Methodology

Sample Collection

Table I shows the collection of noise samples used in the research. From Table I, it can be seen that the samples were collected from four different sources.

Scenario   Input audio     Noise audio
1          Self-recorded   Special education centre
2          Pixabay         Pixabay
3          Pixabay         Pixabay
4          Pixabay         Self-recorded
5          Pixabay         Self-recorded
6          Self-recorded   Hospital
7          Self-recorded   Hospital
Table I. Noise Samples and their Sources

Procedure

The Spectral Subtraction Technique (SST) was implemented for noise reduction in audio signal processing. The implementation used the librosa library to load audio files, performed spectral analysis using the short-time Fourier transform (STFT), and applied the spectral subtraction algorithm. The technique enhanced the signal by subtracting the estimated noise spectrum from the magnitude spectrum of the input audio. The code took sound and noise samples as input, adjusted their lengths where necessary, and computed the STFT of both samples. It then computed the sound and noise magnitude spectra and extracted the phase information from the sound sample. Thereafter, it applied the spectral subtraction algorithm by subtracting the noise magnitude, scaled by user-defined parameters (alpha and beta), from the sound magnitude. To ensure non-negativity, any negative values were clipped. To evaluate the effectiveness of the SST, the code calculated the initial Signal-to-Noise Ratio (SNR) and the improved SNR from the magnitude spectra. It reconstructed the enhanced audio using the inverse STFT and saved the result to an output file. Additionally, graphs were generated to visualise the initial and improved SNR values, with the Y-axis representing the SNR (dB) and the X-axis representing the maximum value of the noise spectrum (dB).
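
A minimal Python sketch of this procedure is given below. It assumes that alpha acts as the over-subtraction factor and beta as the spectral floor (consistent with the parameter justifications in Table II); the STFT settings, file handling, and the way the initial and improved SNRs are estimated from the magnitude spectra are illustrative choices rather than the exact implementation used in this study.

```python
import numpy as np
import librosa
import soundfile as sf

def spectral_subtraction(sound_path, noise_path, output_path,
                         alpha=1.0, beta=0.3, n_fft=2048, hop_length=512):
    # Load the sound and noise samples at the sound file's native sampling rate
    sound, sr = librosa.load(sound_path, sr=None)
    noise, _ = librosa.load(noise_path, sr=sr)

    # Adjust lengths: tile or trim the noise sample to match the sound sample
    if len(noise) < len(sound):
        noise = np.tile(noise, int(np.ceil(len(sound) / len(noise))))
    noise = noise[:len(sound)]

    # Short-time Fourier transforms of both samples
    S = librosa.stft(sound, n_fft=n_fft, hop_length=hop_length)
    N = librosa.stft(noise, n_fft=n_fft, hop_length=hop_length)

    sound_mag, sound_phase = np.abs(S), np.angle(S)
    noise_mag = np.abs(N)

    # Noise estimate: average noise magnitude per frequency bin
    noise_est = noise_mag.mean(axis=1, keepdims=True)

    # Spectral subtraction: over-subtract by alpha, then clip each bin to a
    # spectral floor of beta times the noise estimate so nothing goes negative
    enhanced_mag = np.maximum(sound_mag - alpha * noise_est, beta * noise_est)

    # Rough SNR estimates (dB) from the magnitude spectra, before and after
    snr_initial = 10 * np.log10(np.sum(sound_mag ** 2) / np.sum(noise_mag ** 2))
    residual = np.maximum(sound_mag - enhanced_mag, 1e-10)
    snr_improved = 10 * np.log10(np.sum(enhanced_mag ** 2) / np.sum(residual ** 2))

    # Reconstruct the enhanced audio with the original phase and save it
    enhanced = librosa.istft(enhanced_mag * np.exp(1j * sound_phase),
                             hop_length=hop_length)
    sf.write(output_path, enhanced, sr)
    return snr_initial, snr_improved
```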

Table II shows the parameters assigned to each of the seven scenarios, together with the justification for each setting.

Scenario   Alpha   Beta   Justification
1          1.0     0.3    For moderate noise reduction
2          1.0     0.3    For moderate noise reduction
3          1.5     0.5    For aggressive noise reduction
4          1.5     0.3    For preservation of the minimum noise floor
5          1.5     0.3    For preservation of the minimum noise floor
6          2.5     0.5    For moderate noise reduction
7          2.5     1.0    For more aggressive reduction
Table II. Parameter Settings for each Test Scenario

Evaluation of Test Scenarios

For this evaluation, the enhanced audio file for each scenario was compared with its original audio file. Thereafter, the SNR improvement was measured and the quality of the enhanced speech signal was assessed. This was automated using NumPy, where SNR_improvement = SNR_improved − SNR_initial and SNR_initial = 10 * log10(P_signal/P_noise).
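
A brief NumPy sketch of this evaluation step is shown below, assuming the signal and noise powers are taken from the corresponding magnitude spectra as described in the Procedure section; the function and variable names are illustrative.

```python
import numpy as np

def snr_db(signal_power, noise_power):
    # SNR in decibels: 10 * log10(P_signal / P_noise)
    return 10 * np.log10(signal_power / noise_power)

def evaluate_snr_improvement(sound_mag, noise_mag, enhanced_mag):
    # Powers are taken as the summed squared magnitude spectra
    snr_initial = snr_db(np.sum(sound_mag ** 2), np.sum(noise_mag ** 2))
    # Residual noise approximated as what the subtraction removed from the spectrum
    residual = np.maximum(sound_mag - enhanced_mag, 1e-10)
    snr_improved = snr_db(np.sum(enhanced_mag ** 2), np.sum(residual ** 2))
    return snr_improved - snr_initial  # positive values indicate noise reduction
```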

Evaluation Metric

Signal-to-Noise Ratio (SNR): After applying the SST, the SNR improvement was calculated by comparing the SNR of the enhanced audio with the SNR of the original audio. This metric quantifies the amount of noise reduction achieved by the code.

Validation and Sensitivity Analysis

Validation of the Implemented Code

Code Testing: The code was extensively tested using various test audio files, including different types of background noise, varying Signal-to-Noise Ratios (SNRs), and diverse audio characteristics.

Sensitivity Analysis

Parameter Variation: The sensitivity of the code to parameter variation was analysed. The values of alpha and beta were systematically modified to evaluate their impact on the output results. This analysis helped identify the optimal parameter values for different scenarios and understand the trade-offs between noise reduction and signal preservation.
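
To illustrate how such a sweep might be organised, the sketch below runs the subtraction over a grid of alpha and beta values and records the SNR improvement for each combination. The parameter grid and file names are hypothetical, and the snippet reuses the spectral_subtraction() helper sketched in the Procedure section.

```python
import itertools

# Hypothetical parameter grid and file names, for illustration only
alphas = [1.0, 1.5, 2.0, 2.5]
betas = [0.1, 0.3, 0.5, 1.0]

results = []
for alpha, beta in itertools.product(alphas, betas):
    snr_initial, snr_improved = spectral_subtraction(
        "speech.wav", "noise.wav",
        f"enhanced_a{alpha}_b{beta}.wav",
        alpha=alpha, beta=beta,
    )
    results.append((alpha, beta, snr_improved - snr_initial))

# Rank parameter pairs by SNR improvement to examine the trade-off between
# noise reduction and signal preservation
for alpha, beta, gain in sorted(results, key=lambda r: r[2], reverse=True):
    print(f"alpha={alpha:.1f}  beta={beta:.1f}  SNR improvement={gain:.2f} dB")
```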

Audio Variability: The code was tested with a diverse range of audio files to assess its robustness to variations in audio characteristics, such as different speech patterns, background noise types, and audio recording conditions. The code’s performance across these varied scenarios was evaluated to determine its ability to handle real-world audio data.

Performance Metric: The sensitivity analysis evaluated the code’s performance using SNR improvement. By comparing the results obtained for different parameter values and input scenarios, the sensitivity of the code to changes in these factors was assessed.

The validation steps helped establish confidence in the code’s accuracy and ability to effectively reduce background noise while preserving the desired audio signal.

Results and Discussion

Test Scenarios, Audio Files and Parameter Values

Scenario 1: Moderate Noise Reduction

Audio File: Conversation between two people

Characteristics: The audio file consists of a recorded conversation between two people, representing a typical real-life scenario.

Noise Type: White Noise

Characteristics: White noise is a random signal that contains equal intensity at all frequencies. It is commonly used to simulate general background noise.

- Alpha: 1.0

- Beta: 0.3

Scenario 2: Moderate Noise Reduction

Audio File: Calm pop song

Characteristics: The audio file represents a calm pop song with a clear and distinguishable vocal track and instrumental background.

Noise Type: Background chatter

Characteristics: Background chatter refers to the noise created by a crowd or a group of people talking in the background. It adds a layer of ambient noise to the audio file.

- Alpha: 1.0

- Beta: 0.3

Scenario 3: Aggressive Noise Reduction

Audio File: Phone call

Characteristics: The audio file simulates a recorded phone call conversation, often containing artefacts and background noise.

Noise Type: Street traffic

Characteristics: Street traffic noise is a common environmental noise source, which can include sounds of vehicles, horns, and general street activities.

- Alpha: 1.5

- Beta: 0.5

Scenario 4: Preserving the Minimum Noise Floor

Audio File: Public Speech

Characteristics: The audio file consists of a recorded speech given at a public event, representing a typical real-life scenario.

Noise Type: Air conditioning noise

Characteristics: Air conditioning noise is a steady and continuous source commonly found indoors. It adds a constant background hum to the audio file.

- Alpha: 1.5

- Beta: 0.3

Scenario 5: Preserving the Minimum Noise Floor

Audio File: Sound of television

Characteristics: The audio file represents the sound captured from a television, which can include dialogues, background music, and sound effects.

Noise Type: Babble speech noise

Characteristics: Babble speech noise is the background noise generated by multiple people talking simultaneously. It creates a murmur-like effect in the audio file.

- Alpha: 1.5

- Beta: 0.3

Scenario 6: Moderate Noise Reduction

Audio File: A Public Address System

Characteristics: The audio file consists of a recording from a public address system, commonly used at public events or gatherings to amplify the speaker's voice.

Noise Type: Wobble tone

Characteristics: Wobble tone is characterised by frequency modulation, resulting in a fluctuation or wavering effect. It can occur due to factors such as unstable electrical connections or interference.

- Alpha: 2.5

- Beta: 0.5

Scenario 7: More Aggressive Reduction

Audio File: Sound of an Electric Guitar

Characteristics: The audio file consists of a recording of an electric guitar, capturing the distinctive sound produced by the instrument.

Noise Type: Flat-line tone

Characteristics: A flat-line tone is a constant, steady noise signal that remains at a fixed intensity level throughout. It can be caused by various factors, such as electronic interference or equipment malfunctions.

- Alpha: 2.5

- Beta: 1.0

All the audio files, including the desired audio signal and the background noise, are in the WAV file format. These selected scenarios and audio files were used to evaluate the effectiveness of the Spectral Subtraction technique in reducing background noise and enhancing the desired audio signal.

Results Analysis

On simulating the noise reduction process for each scenario to appraise its effectiveness, the results shown in Table III and Figs. 1–7 were obtained.

Scenario   Initial SNR (dB)                                               Improved SNR (dB)
1          12, 11, 10, 13, 11.5                                           18, 16, 17, 19, 18.5
2          10.5, 12, 11, 12.5, 11.5                                       17, 19, 18, 19.5, 18
3          10, 9.5, 10.5, 9, 9.5                                          18, 17.5, 18.5, 17, 17.5
4          8, 7.5, 8.5, 7, 7.5                                            16, 15.5, 16.5, 15, 15.5
5          8, 8.5, 7.5, 8, 8.5                                            9.5, 10, 9, 9.5, 10
6          5.10, 7.80, 8.99, 7.82, 7.85, 8.42, 8.17, 7.06, 5.71, 5.07     7.95, 9.98, 11.32, 10.29, 10.66, 12.23, 10.30, 9.81, 8.66, 8.58
7          5.64, 6.57, 10.42, 8.38, 7.04, 10.76, 11.47, 6.62, 9.54, 7.42  10.79, 11.92, 16.87, 15.65, 15.00, 17.52, 16.75, 11.29, 15.38, 14.87
Table III. Results of the Spectral Subtraction Simulation

Fig. 1. A graph showing the audiometric effect of the Spectral Subtraction technique in Scenario 1.

Fig. 2. A graph showing the audiometric effect of the Spectral Subtraction technique in Scenario 2.

Fig. 3. A graph showing the audiometric effect of the Spectral Subtraction technique in Scenario 3.

Fig. 4. A graph showing the audiometric effect of the Spectral Subtraction technique in Scenario 4.

Fig. 5. A graph showing the audiometric effect of the Spectral Subtraction technique in Scenario 5.

Fig. 6. A graph showing the audiometric effect of the Spectral Subtraction technique in Scenario 6.

Fig. 7. A graph showing the audiometric effect of the Spectral Subtraction technique in Scenario 7.

Scenario 1

As shown in Fig. 1, the results indicated a significant improvement in the SNR values after applying the SST, with alpha = 1.0 and beta = 0.3 to the audio file containing the conversation between two people in the presence of white noise. The initial SNR values ranged from 10 to 13 dB, while the improved SNR values ranged from 16 to 19 dB.

Scenario 2

As shown in Fig. 2, the results demonstrated the effectiveness of the SST in reducing background chatter noise in Scenario 2. The initial SNR values ranged from 10.5 to 12.5 dB, while the improved SNR values ranged from 17 to 19.5 dB.

This indicated a successful reduction in background chatter noise, resulting in a clearer and more prominent audio signal of the calm pop song. Extraction of the clean speech and music components from the noisy audio file effectively improved the overall quality and intelligibility of the song.

Scenario 3

As shown in Fig. 3, the results demonstrated the effectiveness of the SST in achieving more aggressive noise reduction in Scenario 3 with the given parameters. The initial SNR values ranged from 9 to 10.5 dB, while the improved SNR values ranged from 17 to 18.5 dB.

The graph revealed the improvement achieved by the SST. The improved SNR values consistently surpassed the initial SNR values for each sample, indicating a significant reduction in traffic noise. This reduction led to a clearer and more intelligible phone call audio. The observed improvements in SNR highlighted the ability of the SST to effectively attenuate background noise and enhance the desired speech components in the phone call audio. By extracting the clean speech signal from the noisy input, the technique successfully improved the overall quality and intelligibility of the conversation, even with more aggressive noise reduction parameters.

Scenario 4

From Fig. 4, the results exhibited the effectiveness of the SST in preserving the minimum noise floor. The initial SNR values ranged from 9 to 10 dB, while the improved SNR values ranged from 10.5 to 11.5 dB.

The graph shows that the improved SNR values (in blue) consistently exceeded the initial SNR values (in red) for each sample. This gap indicates that the SST effectively reduced the air conditioning noise and improved the clarity and intelligibility of the public speech. The increasing trend of the improved SNR values compared to the initial SNR values demonstrates the effectiveness of the Spectral Subtraction technique in reducing the specific noise source while preserving the minimum noise floor.

Scenario 5

From Fig. 5, results obtained from simulating Scenario 5 demonstrated the effectiveness of SST in preserving the minimum noise floor while reducing the babble speech noise present in the audio file of a television sound. The initial SNR values ranged from 7.5 to 8.5 dB, while the improved SNR values ranged from 9 to 10 dB.

The graph indicates that the Spectral Subtraction technique successfully enhanced the SNR by reducing the babble speech noise, resulting in improved audio quality. The improved SNR values consistently surpass the initial SNR values for each sample, highlighting the technique’s effectiveness in reducing background noise and enhancing the clarity of the television sound.

By reducing the babble speech noise, the SST allowed the desired audio content from the television to stand out, making it easier for the listener to focus on and comprehend the intended information.

Scenario 6

From Fig. 6, the results exhibited a significant improvement in the SNR values when the SST was applied, with alpha = 2.5 and beta = 0.5, to the audio file containing the sound of a public address system in the presence of wobble-tone noise. The initial SNR values ranged from 5.07 to 8.99 dB, while the improved SNR values ranged from 7.95 to 12.23 dB.

The effectiveness of the Spectral Subtraction technique was demonstrated by the graph, where the improved SNR values consistently surpassed the initial SNR values for each sample. This consistent improvement indicated a successful reduction of background noise, leading to a clearer and more intelligible audio signal from the public address system. The observed enhancements in SNR validated the efficacy of the SST in mitigating the impact of the wobble-tone noise and enhancing the desired audio signal. By attenuating the noise components in the audio signal, the method effectively improved the overall quality and intelligibility of the recording.

Scenario 7

As shown in Fig. 7, the results demonstrated the effectiveness of the SST in achieving more aggressive noise reduction in Scenario 7 with the given parameters. The initial SNR values ranged from 5.64 to 10.76 dB, while the improved SNR values ranged from 10.79 to 17.52 dB.

The improved SNR values consistently surpassed the initial SNR values for each sample, indicating a significant reduction in the flat-line tone noise. This reduction led to clearer and more intelligible audio. The observed improvements in SNR highlight the ability of the Spectral Subtraction technique to effectively attenuate background noise and enhance the desired components of the electric guitar sound. By extracting the clean signal from the noisy input, the technique successfully improved the overall quality and intelligibility of the recording, even with more aggressive noise reduction parameters.

Limitations

Non-Stationary Noise: The Spectral Subtraction technique (SST) assumes stationary noise, meaning that the characteristics remain constant over time. However, in real-world scenarios, noise sources are often non-stationary, exhibiting variations in intensity, frequency, or temporal characteristics. In such cases, the Spectral Subtraction technique may struggle to adapt and accurately estimate the noise spectrum.

Conclusion

In this research, the Spectral Subtraction technique has been explored and evaluated for noise reduction in various audio scenarios. The technique’s effectiveness was demonstrated in different noise types and audio recordings through simulations and analysis. Based on the above, the following conclusions have been drawn:

  • The developed Spectral Subtraction algorithm effectively attenuated noise components in the frequency domain, resulting in improved audio quality and enhanced listening experiences.
  • The algorithm demonstrated its efficacy in reducing background noise in conversations, music with background chatter, phone calls with street traffic, air conditioning noise recordings, and other scenarios.
  • The successful reduction of noise and improvement in SNR confirmed the algorithm’s ability to enhance the desired signal while suppressing unwanted noise.
  • The research emphasised fine-tuning algorithm parameters and using appropriate noise samples to achieve optimal results.
  • The success of the Spectral Subtraction technique relied on certain factors, such as the choice of parameters, the quality of the noise sample, and the characteristics of the audio signal.

References

  1. Bolarinwa M. Noise level assessment in selected Nigerian plank industries: Bodija, Olorunsogo and Olunde in Ibadan, Oyo State, Nigeria. Int J Innov Sci Res Technol. 2018;3(7):686–93.
  2. Hellgren J, Lunner T, Arlinger S. System identification of feedback in hearing aids. J Acoust Soc Am. 1999;105(6):3481–96. doi: 10.1121/1.424674.
  3. Prell CGL, Clavier O. Effects of noise on speech recognition: challenges for communication by service members. Hear Res. 2017;349:76–89. doi: 10.1016/j.heares.2016.10.004.
  4. Ludwig J, Nawab SH, Chandrakasan AP. Low-power digital filtering using approximate processing. IEEE J Solid-State Circ. 1996;31(3):395–400. doi: 10.1109/4.494201.
  5. Boymans M, and Dreschler WA. Field trials using a digital hearing aid with active noise reduction and dual-microphone directionality: estudios de campo utilizando un audifono digital con reduccionactiva del ruido y micrófono de direccionalidad dual. Int J Audiol. 2000;39(5):260–8. doi: 10.3109/00206090009073090.
  6. Wu K-G, Chen P-C. Efficient speech enhancement using spectral subtraction for car hands-free applications. International Conference on Consumer Electronics Proceedings. pp. 220–1, 2001. doi: 10.1109/ICCE.2001.935283.
  7. Mueller HG, Weber J, Hornsby BW. The effects of digital noise reduction on the acceptance of background noise. Trends Amplif. 2006 Jun;10(2):83–93. doi: 10.1177/1084713806289553. PMID: 16959732; PMCID: PMC4111517.
  8. Kamath S, Loizou P. A multi-band spectral subtraction method for enhancing speech corrupted by colored noise. IEEE Int Conf Acoust, Speech, Signal Process. 2002;4:4160–4.
  9. Rao PS, Sreelatha V. Implementation and evaluation of spectral subtraction with minimum statistics using WOLA and FFT modulated filter banks. MSc Thesis, Blekinge Institute of Technology, Sweden; 2015.
  10. Kates JM. Adaptive Feedback Cancellation in Hearing Aids. Springer eBooks; 2003. pp. 23–57. doi: 10.1007/978-3-662-11028-7_2.
  11. Qazi O, van Dijk B, Moonen M, Wouters J. Speech understanding performance of cochlear implant subjects using time-frequency masking-based noise reduction. IEEE Trans Biomed. 2012;59(5):1364–73.
  12. Sweeney K, Kearney D, Ward TE, Coyle S, Diamond D. Employing ensemble empirical mode decomposition for artifact removal: extracting accurate respiration rates from ECG data during ambulatory activity. Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. pp. 977–80, 2013. doi: 10.1109/embc.2013.6609666.
  13. Desjardins JL, Doherty KA. The effect of hearing aid noise reduction on listening effort in hearing-impaired adults. Ear Hear. 2014;35(6):600–10. doi: 10.1097/aud.0000000000000028.
  14. Ahmadi R, Jalilvand H, Mahdavi M, Ahmadi F, Baghban A. The effects of hearing aid digital noise reduction and directionality on acceptable noise level. Clin Exp Otorhinolaryngol. 2018;11(4):267–74. doi: 10.21053/ceo.2018.00052.
  15. Chen L, Yu Z, Chen C, Hu X, Fan J, Yang J, et al. A 1-V, 1.2-mA fully integrated SoC for digital hearing aids. Microelectron J. 2020;46(1):12–9. doi: 10.1016/j.mejo.2020.01.015.
  16. Jinhua W. Research on de-noising method of railway vehicle noise signal. 2019. Available from: https://doi.org/10.1109/icmtma.2019.00025.
  17. Lin Y, Lee Y, Liu H, Chiueh H, Chi T, Yang C. A 1.5 mW programmable acoustic signal processor for hearing assistive devices with speech intelligibility enhancement. IEEE Trans Circ Syst I: Regul Papers. 2020;67(12):4984–93. doi: 10.1109/tcsi.2020.3001160.
  18. Lee C, Sarwar SS, Panda P, Srinivasan G, Roy K. Enabling spike-based backpropagation for training deep neural network architectures. Front Neurosci. 2020;14:119. doi: 10.3389/fnins.2020.00119.
  19. Kumar ES, Surya KJ, Varma KY, Akash A, Reddy KN. Noise Reduction in Audio File Using Spectral Gatting and FFT by Python Modules. IOS Press eBooks; 2023. doi: 10.3233/atde221305.
  20. Li Y, Cai Y, Yu Z, Mo D, Liu R, Chen A, et al. Noise reduction with adaptive filtering scheme on interferometric fiber optic hydrophone. Optik. 2020;211:164648. doi: 10.1016/j.ijleo.2020.164648.
  21. Mauger SJ, Arora K, Dawson PW. Cochlear implant optimised noise reduction. J Neural Eng. 2012;9(6):065007. doi: 10.1088/1741-2560/9/6/065007.