
Apple researchers explore how AI can predict bugs, write tests, and even fix code


Apple has published three interesting studies that offer some insight into how AI-based development could improve workflows, quality, and productivity. Here are the details.

Software Defect Prediction using Autoencoder Transformer Model

In this study, Apple’s researchers present a new AI model that overcomes the limitations of today’s LLMs (such as “hallucinations, context-poor generation, and loss of critical business relationships during retrieval”) when analyzing large-scale codebases to detect and predict bugs.

The model, called ADE-QVAET, aims to improve the accuracy of bug prediction by combining four AI techniques: Adaptive Differential Evolution (ADE), Quantum Variational Autoencoder (QVAE), a Transformer layer, and Adaptive Noise Reduction and Augmentation (ANRA).

In a nutshell, while ADE adjusts how the model learns, QVAE helps it understand deeper patterns in the data. Meanwhile, the Transformer layer ensures the model keeps track of how those patterns relate to each other, and ANRA cleans and balances the data to maintain consistent results.

Interestingly, this is not an LLM that analyzes the code directly. Instead, it examines metrics about the code, such as complexity, size, and structure, and searches for patterns that may indicate where bugs are likely to occur.
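To make the metrics-based idea concrete, here is a minimal sketch of defect prediction from code metrics using a tiny hand-rolled logistic-regression classifier. This is purely illustrative: it is not Apple's ADE-QVAET architecture, and all feature values and labels below are invented.

```python
import math

# Hypothetical per-module metrics: [cyclomatic complexity, lines of code,
# recent commits]. Labels: 1 = a defect was later found, 0 = clean.
# All values are invented for illustration; not from Apple's dataset.
SAMPLES = [
    ([2, 50, 1], 0),
    ([3, 80, 2], 0),
    ([4, 120, 1], 0),
    ([12, 450, 7], 1),
    ([15, 600, 9], 1),
    ([18, 700, 12], 1),
]

SCALE = [18.0, 700.0, 12.0]  # rough per-feature maxima, for normalization

def normalize(x):
    return [xi / s for xi, s in zip(x, SCALE)]

def predict(weights, bias, x):
    z = sum(w * xi for w, xi in zip(weights, normalize(x))) + bias
    # Clamp z to avoid math.exp overflow on extreme inputs.
    return 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))

def train(samples, lr=0.5, epochs=2000):
    weights, bias = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in samples:
            err = predict(weights, bias, x) - y  # gradient of the log loss
            nx = normalize(x)
            weights = [w - lr * err * xi for w, xi in zip(weights, nx)]
            bias -= lr * err
    return weights, bias

weights, bias = train(SAMPLES)
# Metrics resembling the buggy modules yield a high defect probability,
# and metrics resembling the clean modules yield a low one.
print(predict(weights, bias, [16, 650, 10]))
print(predict(weights, bias, [3, 60, 1]))
```

The point of the sketch is only that a model can flag likely defect locations without ever reading source text: the signal comes entirely from structural metrics about each module.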

According to the researchers, these were the results when they measured the model’s performance on a Kaggle dataset made specifically for software bug prediction:

“During training with a 90% training percentage, ADE-QVAET achieves high accuracy, precision, recall, and F1-score of 98.08%, 92.45%, 94.67%, and 98.12%, respectively, when compared to the Differential Evolution (DE) ML model.”

This means the model was highly reliable overall (accuracy), very effective at catching real bugs (recall), and good at avoiding false positives (precision).
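For readers less familiar with these metrics, the short snippet below shows how accuracy, precision, recall, and F1 are computed from a confusion matrix. The counts are invented for illustration; the paper quote above does not publish this breakdown.

```python
# Compute standard classification metrics from confusion-matrix counts:
# tp = true positives, fp = false positives, fn = false negatives,
# tn = true negatives. The example counts below are hypothetical.
def metrics(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)  # overall correctness
    precision = tp / (tp + fp)                  # flagged modules that were truly buggy
    recall = tp / (tp + fn)                     # real bugs that were caught
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

# Hypothetical run: 90 genuinely buggy modules, 910 clean ones.
acc, prec, rec, f1 = metrics(tp=85, fp=7, fn=5, tn=903)
print(f"accuracy={acc:.3f} precision={prec:.3f} recall={rec:.3f} f1={f1:.3f}")
# prints: accuracy=0.988 precision=0.924 recall=0.944 f1=0.934
```

High precision means developers waste little time chasing false alarms; high recall means few real defects slip through, which is why both matter alongside raw accuracy.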

Read the full study on Apple’s Machine Learning Research blog
