POLYNOMIALS, GALOIS GROUPS, AND DEEP LEARNING
ELIRA SHASKA
Department of Computer Science
College of Computer Science and Engineering,
Oakland University,
Rochester, MI, 48309.
TONY SHASKA
Department of Mathematics and Statistics,
College of Arts and Sciences
Oakland University,
Rochester, MI, 48309
Abstract: This paper introduces a novel approach to understanding Galois theory, one of the foundational areas of algebra, through the lens of machine learning. By analyzing polynomial equations with machine learning techniques, we aim to streamline the process of determining solvability by radicals and explore broader applications within Galois theory. This summary encapsulates the background, methodology, potential applications, and challenges of using data science in Galois theory.
Contents
 1. Introduction
 2. Preliminaries
 2.1. Polynomials
 2.2. Several variables
 3. Equivalences of polynomials
 3.1. Binary forms
 3.2. Tschirnhaus-equivalent
 3.3. Hermite equivalence
 3.4. Julia equivalence
 4. Heights of polynomials
 5. Binary forms
 5.1.  as a weighted projective space
 5.2. Generators of the ring of invariants
 5.3. Root differences
 5.4. Heights and moduli heights
 5.5. Minimal and moduli heights of forms
 5.6. Weighted moduli height
 6. Galois groups of polynomials
 6.1. Cubics
 6.2. Quartics
 6.3. Quintics
 7. Reduction modulo
 8. Transitive groups
 9. Databases
 9.1. Datasets of irreducible polynomials
 9.2. Datasets with bounded height
 9.3. Cubics
 9.4. Quartics
 10. Neuro-symbolic networks
 11. Concluding remarks
References
1. Introduction
Galois theory, a cornerstone of modern algebra, provides profound insights into the solvability of polynomial equations. Since its inception by Évariste Galois, it has explained why there are no general formulas for polynomials of degree five or higher by radicals, unlike the well-known quadratic, cubic, and quartic formulas. This theory links the algebraic structure of field extensions to the symmetry of polynomial roots encapsulated by their Galois groups. While traditional methods allow us to determine solvability for lower-degree polynomials through invariants like discriminants, the complexity escalates dramatically for higher degrees, where the Galois group might not be solvable, leading to no radical solution.
This project embarks on an innovative journey to merge the abstract realm of Galois theory with the practical capabilities of machine learning (ML). Our goal is to harness ML’s pattern recognition and prediction abilities to address some of the most challenging aspects of Galois theory, potentially revolutionizing how we understand and approach polynomial solvability and other related problems. At the heart of Galois theory is the connection between a polynomial’s roots and its Galois group, which describes how these roots can be permuted while preserving the field operations. A polynomial is solvable by radicals if and only if its Galois group is solvable; this means there exists a chain of normal subgroups where each quotient is cyclic, allowing the roots to be constructed by sequential additions, multiplications, and extractions of roots. However, for degrees five and above, generic polynomials have non-solvable Galois groups such as the symmetric group, rendering them unsolvable by radicals.
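The group-theoretic side of this criterion can be checked directly in software. As a minimal sketch (using SymPy's permutation-group machinery, not any method from this paper), the following confirms that the symmetric groups of degree at most four are solvable while those of degree five and above are not:

```python
from sympy.combinatorics.named_groups import AlternatingGroup, SymmetricGroup

# Degrees 2-4: S_2, S_3, S_4 are solvable, which is why the
# quadratic, cubic, and quartic formulas exist.
for n in range(2, 5):
    assert SymmetricGroup(n).is_solvable

# Degree 5: neither S_5 nor its subgroup A_5 is solvable, so a
# generic quintic admits no formula in radicals.
print(SymmetricGroup(5).is_solvable)    # False
print(AlternatingGroup(5).is_solvable)  # False
```

Here `is_solvable` computes the derived series of the permutation group, which terminates in the trivial group exactly when the chain of cyclic quotients described above exists.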
We propose an approach where we compile or generate datasets of polynomials with known Galois groups. Key to our approach will be identifying or creating features from polynomials that are indicative of Galois group properties or solvability. These might include traditional invariants like discriminants or novel features derived from root distributions or algebraic properties. Using supervised learning, we aim to predict the Galois group or solvability of polynomials, potentially using neural networks for their ability to handle complex patterns or decision trees for interpretability. Unsupervised methods could explore clustering of polynomials, perhaps revealing new mathematical insights. By learning from simpler polynomials, we hope to generalize these insights to more complex polynomials, possibly using techniques like transfer learning where models adapt knowledge from one task to another.
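To make the feature-extraction step concrete, consider depressed cubics $x^3 + px + q$, where the discriminant already determines the Galois group: for an irreducible cubic the group is the cyclic group $A_3$ when the discriminant is a nonzero perfect square, and $S_3$ otherwise. The sketch below (a hypothetical feature set, not the datasets of Section 9) builds such features with SymPy; a supervised model would be trained on rows like these:

```python
from sympy import discriminant, sqrt, symbols

x = symbols('x')

def cubic_features(p, q):
    """Feature row for the depressed cubic x^3 + p*x + q.

    A hypothetical, minimal feature set; real datasets would add
    heights, coefficient statistics, and further invariants.
    """
    f = x**3 + p*x + q
    disc = discriminant(f, x)  # equals -4*p**3 - 27*q**2
    # For irreducible f: Gal(f) = A_3 iff disc is a nonzero square.
    is_square = disc > 0 and sqrt(disc).is_integer
    return {"p": p, "q": q, "disc": disc, "disc_is_square": bool(is_square)}

# x^3 - 3x + 1 is irreducible with discriminant 81 = 9^2, so Gal = A_3.
print(cubic_features(-3, 1))
# x^3 + 2 has discriminant -108, not a square, so Gal = S_3.
print(cubic_features(0, 2))
```

Analogous invariant-based labels exist for quartics via the resolvent cubic, which is what makes supervised labeling of low-degree datasets tractable.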
This integration could lead to automated solvability prediction, offering mathematicians tools to quickly assess if a polynomial can be solved by radicals, and might uncover patterns or invariants not yet recognized by traditional mathematics. The methodology could extend to other areas like field theory or algebraic geometry. However, several challenges loom, including the computational cost of handling high-degree polynomials, ensuring interpretability of ML models to enhance theoretical understanding, and balancing between providing practical tools and contributing to the theoretical body of Galois theory.
This project stands at the intersection of pure mathematics and cutting-edge computational science. By leveraging machine learning, we aim not only to solve practical problems within Galois theory but also to catalyze new theoretical advancements. This exploration could redefine how we approach some of the oldest and most fundamental questions in algebra, potentially opening new avenues for research in both mathematics and computer science.
A neuro-symbolic network is an artificial intelligence system that combines the strengths of neural networks (pattern recognition) with symbolic reasoning (logic and rules) to create models that can both learn from data and reason through complex situations. By understanding and manipulating symbols to make decisions, such systems mimic human-like cognitive abilities; this approach aims to overcome the limitations of either method alone, providing better explainability and adaptability in AI systems.
For the full paper click here:
2024-05: Polynomials, Galois groups, and Deep Learning