User:Kekavigi/sandbox: Difference between revisions

Copied Determinan (oldid 23241318), en:Determinant (oldid 1183921881), and ca:Determinant_(matemàtiques) (oldid 32510072), to be translated and merged.
In [[aljabar linear|linear algebra]], the '''determinant''' ({{Lang-nl|determinant}}, {{Lang-en|determinant}}) is a value that can be computed from the elements of a [[matriks persegi|square matrix]]. The determinant of a matrix {{math|''A''}} is written as {{math|det(''A'')}}, {{math|det ''A''}}, or |''A''|. The determinant can be thought of as the scaling factor of the transformation described by the matrix.
 
For a {{nowrap|2 × 2}} matrix, the formula for the determinant is:
 
: <math>\begin{align}|A| = \begin{vmatrix} a & b\\c & d \end{vmatrix}=ad - bc .\end{align}</math>
 
For a 3 × 3 matrix ''A'', the formula is:
 
:: <math>\begin{align}|A| = \begin{vmatrix} a & b & c\\d & e & f\\g & h & i \end{vmatrix} &= a\,\begin{vmatrix} e & f\\h & i \end{vmatrix} - b\,\begin{vmatrix} d & f\\g & i \end{vmatrix} + c\,\begin{vmatrix} d & e\\g & h \end{vmatrix}\\ &= aei+bfg+cdh-ceg-bdi-afh.\end{align}</math>
 
The Leibniz formula for the determinant of an {{nowrap|''n'' × ''n''}} matrix is:
 
: <math>\det(A) = \sum_{\sigma \in S_n} \left( \sgn(\sigma) \prod_{i=1}^n a_{i,\sigma_i}\right).</math>
 
The method of [[eliminasi Gauss|Gaussian elimination]] can also be used. For example, the determinant of the matrix
 
: <math>A = \begin{bmatrix}-2&2&-3\\
-1& 1& 3\\
2 &0 &-1\end{bmatrix} </math>
 
can be computed using the following matrices:
 
: <math>B = \begin{bmatrix}-2&2&-3\\
0 & 0 & 4.5\\
2 &0 &-1\end{bmatrix},
\quad
C = \begin{bmatrix}-2&2&-3\\
0 & 0 & 4.5\\
0 & 2 &-4\end{bmatrix},
\quad
D = \begin{bmatrix}-2&2&-3\\
0 & 2 &-4\\
0 & 0 & 4.5
\end{bmatrix}.
</math>
 
Here, ''B'' was obtained from ''A'' by adding −1/2 × the first row to the second row, so that {{nowrap|1=det(''A'') = det(''B'')}}. ''C'' was obtained from ''B'' by adding the first row to the third, so that {{nowrap|1=det(''C'') = det(''B'')}}. Finally, ''D'' was obtained from ''C'' by exchanging the second and third rows, so that {{nowrap|1=det(''D'') = −det(''C'')}}. The determinant of the triangular matrix ''D'' is the product of its entries on the [[Diagonal utama|main diagonal]]: {{nowrap|1=(−2) · 2 · 4.5 = −18}}. Therefore, {{nowrap|1=det(''A'') = −det(''D'') = +18}}.
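The row-reduction argument above can be sketched in Python (an illustrative sketch, not part of the cited sources; the helper name `det_gauss` and the use of exact rational arithmetic are our own choices):

```python
from fractions import Fraction

def det_gauss(matrix):
    """Determinant via Gaussian elimination: reduce to upper-triangular
    form with row operations, tracking sign changes from row swaps."""
    a = [[Fraction(x) for x in row] for row in matrix]  # exact arithmetic
    n = len(a)
    sign = 1
    for col in range(n):
        # Find a pivot row; an all-zero column means the determinant is zero.
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            sign = -sign  # a row swap flips the sign of the determinant
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= factor * a[col][c]  # adding a row multiple: det unchanged
    result = Fraction(sign)
    for i in range(n):
        result *= a[i][i]  # product of the diagonal of the triangular form
    return result

A = [[-2, 2, -3], [-1, 1, 3], [2, 0, -1]]
print(det_gauss(A))  # → 18
```

Using exact `Fraction` arithmetic avoids the rounding error that floating-point pivoting would otherwise introduce.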
 
== Geometric meaning ==
If an {{nowrap|''n'' × ''n''}} [[Bilangan riil|real]] matrix ''A'' is written in terms of its column vectors <math>A = [\begin{array}{c|c|c|c} \mathbf{a}_1 & \mathbf{a}_2 & \cdots & \mathbf{a}_n\end{array}]</math>, then
 
: <math>
A\begin{pmatrix}1 \\ 0\\ \vdots \\0\end{pmatrix} = \mathbf{a}_1, \quad
A\begin{pmatrix}0 \\ 1\\ \vdots \\0\end{pmatrix} = \mathbf{a}_2, \quad
\ldots, \quad
A\begin{pmatrix}0 \\0 \\ \vdots \\1\end{pmatrix} = \mathbf{a}_n.
</math>
 
This means that <math> A </math> maps the unit [[Hiperkubus|''n''-cube]] to the ''n''-dimensional [[Parallepiped#Parallelotop|parallelotope]] determined by the vectors <math>\mathbf{a}_1, \mathbf{a}_2, \ldots, \mathbf{a}_n,</math> the region <math>P = \left\{c_1 \mathbf{a}_1 + \cdots + c_n\mathbf{a}_n \mid 0 \leq c_i\leq 1 \ \forall i\right\}.</math>
 
The determinant gives the [[Orientasi (ruang vektor)|signed]] ''n''-dimensional volume of this parallelotope, <math>\det(A) = \pm \text{vol}(P),</math> and hence describes more generally the ''n''-dimensional volume scaling factor of the [[transformasi linear|linear transformation]] produced by ''A''.<ref>{{cite web|author=|date=|title=Determinants and Volumes|url=https://textbooks.math.gatech.edu/ila/determinants-volumes.html|website=textbooks.math.gatech.edu|accessdate=16 March 2018}}</ref> (The sign shows whether the transformation preserves or reverses [[Orientasi (ruang vektor)|orientation]].) In particular, if the determinant is zero, then this parallelotope has volume zero and is not fully ''n''-dimensional, which indicates that the dimension of the image of ''A'' is less than ''n''. This [[Teorema peringkat-nulitas|means]] that ''A'' produces a linear transformation which is neither [[Fungsi surjektif|onto]] nor [[Fungsi injektif|one-to-one]], and so is not invertible.
 
== Definition ==
There are various equivalent ways to define the determinant of a [[matriks persegi|square matrix]] ''A'', i.e. one with the same number of rows and columns. Perhaps the simplest way to express the determinant is by considering the elements in the top row and the respective [[Minor (aljabar linear)|minors]]; starting at the left, multiply the element by the minor, then subtract the product of the next element and its minor, and alternate adding and subtracting such products until all elements in the top row have been exhausted. For example, here is the result for a 4 × 4 matrix:
 
: <math>
\begin{vmatrix} a & b & c & d\\ e & f & g & h\\ i & j & k & l\\ m & n & o & p \end{vmatrix} =
a\,\begin{vmatrix} f & g & h\\ j & k & l\\ n & o & p \end{vmatrix} -
b\,\begin{vmatrix} e & g & h\\ i & k & l\\ m & o & p \end{vmatrix} +
c\,\begin{vmatrix} e & f & h\\ i & j & l\\ m & n & p \end{vmatrix} -
d\,\begin{vmatrix} e & f & g\\ i & j & k\\ m & n & o \end{vmatrix}.
</math>
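The alternating expand-along-the-top-row procedure described above can be written as a short recursive function (an illustrative sketch under our own naming, not from the source text):

```python
def det_laplace(m):
    """Determinant by Laplace (cofactor) expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        # Signs alternate +, -, +, - along the top row.
        total += (-1) ** j * m[0][j] * det_laplace(minor)
    return total

print(det_laplace([[1, 2], [3, 4]]))  # → -2
```

Each call expands one top row, so the recursion touches ''n''! products, matching the Leibniz count; this is fine for small matrices but far slower than elimination for large ones.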
 
Another way to define the determinant is expressed in terms of the columns of the matrix. If we write an {{nowrap|''n'' × ''n''}} matrix ''A'' in terms of its column vectors
 
: <math>A = \begin{bmatrix} a_1 & a_2 & \cdots & a_n \end{bmatrix}</math>
 
where the <math>a_j</math> are vectors of size ''n'', then the determinant of ''A'' is defined so that
 
: <math>\begin{align}
\det \begin{bmatrix} a_1 & \cdots & b a_j + c v & \cdots & a_n \end{bmatrix}
&= b\det(A) + c \det \begin{bmatrix} a_1 & \cdots & v & \cdots & a_n \end{bmatrix} \\
\det \begin{bmatrix} a_1 & \cdots & a_j & a_{j+1} & \cdots & a_n \end{bmatrix}
&= -\det \begin{bmatrix} a_1 & \cdots & a_{j+1} & a_j & \cdots & a_n \end{bmatrix} \\
\det(I) &= 1
\end{align}</math>
 
where ''b'' and ''c'' are scalars, ''v'' is any vector of size ''n'' and ''I'' is the [[matriks identitas|identity matrix]] of size ''n''. These equations say that the determinant is a linear function of each column, that interchanging adjacent columns reverses the sign of the determinant, and that the determinant of the identity matrix is 1. These properties mean that the determinant is an alternating multilinear function of the columns that maps the identity matrix to the underlying unit scalar. These suffice to uniquely calculate the determinant of any square matrix. Provided the underlying scalars form a field (more generally, a [[gelanggang komutatif|commutative ring]]), the definition below shows that such a function exists, and it can be shown to be unique.<ref>[[Serge Lang]], ''Linear Algebra'', 2nd Edition, Addison-Wesley, 1971, pp 173, 191.</ref>
 
In other words, the determinant can be expressed as a sum of products of matrix entries, where each product has ''n'' terms and the coefficient of each product is −1, 1, or 0 according to a given rule: it is a [[ekspresi polinomial|polynomial expression]] in the matrix entries. This expression grows rapidly with the size of the matrix (an {{nowrap|''n'' × ''n''}} matrix has [[Faktorial|''n''!]] terms), so it will first be given explicitly for the case of {{nowrap|2 × 2}} and {{nowrap|3 × 3}} matrices, followed by the rule for matrices of arbitrary size, which subsumes these two cases.
 
Suppose ''A'' is a square matrix with ''n'' rows and ''n'' columns, so that it can be written as
 
: <math>A = \begin{bmatrix}
a_{1,1} & a_{1,2} & \dots & a_{1,n} \\
a_{2,1} & a_{2,2} & \dots & a_{2,n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n,1} & a_{n,2} & \dots & a_{n,n}
\end{bmatrix}.</math>
 
The entries can be numbers or expressions (as happens when the determinant is used to define a [[karakteristik polinomial|characteristic polynomial]]); the definition of the determinant depends only on the fact that they can be added and multiplied together in a [[komutatif|commutative]] manner.
 
The determinant of ''A'' is denoted by det(''A''), or it can be denoted directly in terms of the matrix entries by writing enclosing bars instead of brackets:
 
: <math>\begin{vmatrix}
a_{1,1} & a_{1,2} & \dots & a_{1,n} \\
a_{2,1} & a_{2,2} & \dots & a_{2,n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n,1} & a_{n,2} & \dots & a_{n,n}
\end{vmatrix}.</math>
 
== Determinants of matrices ==
 
=== 2 × 2 matrices ===
[[Berkas:Area_parallellogram_as_determinant.svg|ka|jmpl|The area of the parallelogram is the absolute value of the determinant of the matrix formed by the vectors representing the parallelogram's sides.]]
The [[Rumus Leibniz untuk determinan|Leibniz formula]] for the determinant of a {{nowrap|2 × 2}} matrix is
 
: <math>\begin{vmatrix} a & b \\c & d \end{vmatrix} = ad - bc.</math>
 
If the matrix entries are real numbers, the matrix {{math|A}} can be used to represent two [[peta linear|linear maps]]: one that maps the [[standar dasar|standard basis]] vectors to the rows of {{math|A}}, and one that maps them to the columns of {{math|A}}. In either case, the images of the basis vectors form a [[jajaran genjang|parallelogram]] that represents the image of the [[satuan persegi|unit square]] under the mapping. The parallelogram defined by the rows of the above matrix is the one with vertices at {{math|{{nowrap|(0, 0)}},}} {{math|{{nowrap|(''a'', ''b'')}},}} {{math|{{nowrap|(''a'' + ''c'', ''b'' + ''d'')}},}} and {{math|{{nowrap|(''c'', ''d'')}},}} as shown in the accompanying diagram.
 
The [[Nilai absolut|absolute value]] of {{math|{{nowrap|''ad'' − ''bc''}}}} is the area of the parallelogram, and thus represents the scale factor by which areas are transformed by {{math|A}}. (The parallelogram formed by the columns of {{math|A}} is in general a different parallelogram, but since the determinant is symmetric with respect to rows and columns, the area will be the same.)
 
The absolute value of the determinant together with the sign becomes the ''oriented area'' of the parallelogram. The oriented area is the same as the usual [[Luas (geometri)|area]], except that it is negative when the angle from the first to the second vector defining the parallelogram turns in a clockwise direction (which is opposite to the direction one would get for the [[identitas matriks|identity matrix]]).
 
To show that {{math|{{nowrap|''ad'' − ''bc''}}}} is the signed area, one may consider a matrix containing two vectors {{math|{{nowrap|'''u''' ≡ (''a'', ''b'')}}}} and {{math|{{nowrap|'''v''' ≡ (''c'', ''d'')}}}} representing the parallelogram's sides. The signed area can be expressed as {{math|{{nowrap|{{!}}'''u'''{{!}}&nbsp;{{!}}'''v'''{{!}}&nbsp;sin&nbsp;''θ''}}}} for the angle ''θ'' between the vectors, which is simply base times height, the length of one vector times the perpendicular component of the other. Thanks to the [[sinus|sine]] this is already the signed area, yet it may be expressed more conveniently using the [[kosinus|cosine]] of the complementary angle to a perpendicular vector, e.g. {{math|{{nowrap|'''u'''<sup>⊥</sup> {{=}} (−''b'', ''a'')}},}} so that {{math|{{nowrap|{{!}}'''u'''<sup>⊥</sup>{{!}}&nbsp;{{!}}'''v'''{{!}}&nbsp;cos&nbsp;''θ&prime;''}},}} which can be determined by the pattern of the [[produk skalar|scalar product]] to be equal to {{math|{{nowrap|''ad'' − ''bc''}}:}}
 
: <math>\text{Signed area} =
|\boldsymbol{u}|\,|\boldsymbol{v}|\,\sin\,\theta = \left|\boldsymbol{u}^\perp\right|\,\left|\boldsymbol{v}\right|\,\cos\,\theta' =
\begin{pmatrix} -b \\ a \end{pmatrix} \cdot \begin{pmatrix} c \\ d \end{pmatrix} = ad - bc.
</math>
 
Thus the determinant gives the scaling factor and the orientation induced by the mapping represented by ''A''. When the determinant is equal to one, the linear mapping defined by the matrix is [[Peta ekuiluas|equi-areal]] and orientation-preserving.<ref>{{cite media|url=https://www.youtube.com/watch?v=6XghF70fqkY|series=WildLinAlg|title=Episode&nbsp;4|first=Norman J.|last=Wildberger|publisher=[[University of New South Wales]]|place=Sydney, Australia|year=2010|medium=video lecture|via=YouTube}}</ref>
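The identity above can be checked numerically (an illustrative sketch; the function name `signed_area` is ours, and the cross-check computes {{math|{{!}}'''u'''{{!}}&nbsp;{{!}}'''v'''{{!}}&nbsp;sin&nbsp;''θ''}} directly):

```python
import math

def signed_area(u, v):
    """Signed area of the parallelogram spanned by u = (a, b) and v = (c, d):
    the dot product of the rotated vector u_perp = (-b, a) with v, i.e. ad - bc."""
    (a, b), (c, d) = u, v
    return a * d - b * c

u, v = (3, 1), (1, 2)
# Cross-check against |u| |v| sin(theta) for the angle theta from u to v.
theta = math.atan2(v[1], v[0]) - math.atan2(u[1], u[0])
assert math.isclose(math.hypot(*u) * math.hypot(*v) * math.sin(theta), signed_area(u, v))
print(signed_area(u, v))  # → 5
```

Swapping the two arguments reverses the sign, mirroring the clockwise/counter-clockwise convention described above.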
 
=== {{nowrap|''n'' × ''n''}} matrices ===
[[Berkas:Determinant_parallelepiped.svg|ka|jmpl|300x300px|The volume of this [[parallelepiped]] is the absolute value of the determinant of the matrix formed by the columns constructed from the vectors r1, r2, and r3.]]
The determinant of a matrix of arbitrary size can be defined by the [[Rumus Leibniz determinan|Leibniz formula]] or the [[Ekspansi Laplace|Laplace formula]].
 
The Leibniz formula for the determinant of an {{nowrap|''n'' × ''n''}} matrix ''A'' is
 
: <math>\det(A) = \sum_{\sigma \in S_n} \left( \sgn(\sigma) \prod_{i=1}^n a_{i,\sigma_i}\right).</math>
 
The sum is computed over all [[permutasi|permutations]] ''σ'' of the set {{nowrap|{1, 2, ..., ''n''}.}} A permutation is a function that reorders this set of [[bilangan bulat|integers]]. The value in the ''i''th position after the reordering ''σ'' is denoted by ''σ<sub>i</sub>''. For example, for {{nowrap|1=''n'' = 3}}, the original sequence 1, 2, 3 might be reordered to {{nowrap|1=''σ'' = [2, 3, 1]}}, with {{nowrap|1=''σ''<sub>1</sub> = 2}}, {{nowrap|1=''σ''<sub>2</sub> = 3}}, and {{nowrap|1=''σ''<sub>3</sub> = 1}}. The set of all such permutations (also known as the [[grup simetris|symmetric group]] on ''n'' elements) is denoted by S<sub>''n''</sub>. For each permutation ''σ'', sgn(''σ'') denotes the [[Tanda tangan (permutasi)|signature]] of ''σ'', a value that is +1 whenever the reordering given by σ can be achieved by successively interchanging two entries an even number of times, and −1 whenever it can be achieved by an odd number of such interchanges.
 
In any of the <math>n!</math> summands, the term
 
: <math>\prod_{i=1}^n a_{i, \sigma_i}</math>
 
is notation for the product of the entries at positions {{nowrap|(''i'', σ<sub>''i''</sub>)}}, where ''i'' ranges from 1 to ''n'':
 
: <math>a_{1,\sigma_1} \cdot a_{2,\sigma_2} \cdots a_{n,\sigma_n}.</math>
 
For example, the determinant of a {{nowrap|3 × 3}} matrix ''A'' ({{nowrap|1=''n'' = 3}}) is
 
: <math>\begin{align}
&\sum_{\sigma \in S_n} \sgn(\sigma) \prod_{i=1}^n a_{i,\sigma_i} \\
={} &\sgn([1,2,3]) \prod_{i=1}^n a_{i,[1,2,3]_i} + \sgn([1,3,2]) \prod_{i=1}^n a_{i,[1,3,2]_i} + \sgn([2,1,3]) \prod_{i=1}^n a_{i,[2,1,3]_i} +{} \\
&\sgn([2,3,1]) \prod_{i=1}^n a_{i,[2,3,1]_i} + \sgn([3,1,2]) \prod_{i=1}^n a_{i,[3,1,2]_i} + \sgn([3,2,1]) \prod_{i=1}^n a_{i,[3,2,1]_i} \\
={} &\prod_{i=1}^n a_{i,[1,2,3]_i} - \prod_{i=1}^n a_{i,[1,3,2]_i} - \prod_{i=1}^n a_{i,[2,1,3]_i} + \prod_{i=1}^n a_{i,[2,3,1]_i} + \prod_{i=1}^n a_{i,[3,1,2]_i} - \prod_{i=1}^n a_{i,[3,2,1]_i} \\[2pt]
={} & a_{1,1}a_{2,2}a_{3,3} - a_{1,1}a_{2,3}a_{3,2} - a_{1,2}a_{2,1}a_{3,3} +
a_{1,2}a_{2,3}a_{3,1} + a_{1,3}a_{2,1}a_{3,2} - a_{1,3}a_{2,2}a_{3,1}.
\end{align}</math>
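The Leibniz sum can be transcribed directly into code (an illustrative sketch; `det_leibniz` and `sign` are names we introduce, with the signature computed by counting inversions):

```python
from itertools import permutations
from math import prod

def sign(perm):
    """Signature of a permutation: +1 for an even number of inversions, -1 for odd."""
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det_leibniz(a):
    """Sum of sgn(sigma) * a[0][sigma_0] * ... * a[n-1][sigma_(n-1)]
    over all n! permutations sigma of {0, ..., n-1}."""
    n = len(a)
    return sum(sign(p) * prod(a[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

print(det_leibniz([[-2, 2, -3], [-1, 1, 3], [2, 0, -1]]))  # → 18
```

For {{nowrap|1=''n'' = 3}} this iterates over exactly the six permutations written out above.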
 
== Methods of calculation ==
 
=== Laplace formula ===
The [[Ekspansi Laplace|Laplace formula]] for the determinant of a {{nowrap|3 × 3}} matrix is
 
: <math>
\begin{vmatrix}a&b&c\\ d&e&f\\ g&h&i\end{vmatrix} =
a\begin{vmatrix}e&f\\ h&i\end{vmatrix} - b\begin{vmatrix}d&f\\ g&i\end{vmatrix} + c\begin{vmatrix}d&e\\ g&h\end{vmatrix}
</math>
 
this can be expanded out to give the Leibniz formula.
 
=== Leibniz formula ===
The [[Rumus Leibniz untuk determinan|Leibniz formula]] for the determinant of a {{nowrap|3 × 3}} matrix:
 
: <math>\begin{align}
\begin{vmatrix}a&b&c\\d&e&f\\g&h&i\end{vmatrix}
&= a(ei - fh) - b(di - fg) + c(dh - eg) \\
&= aei + bfg + cdh - ceg - bdi - afh.
\end{align}</math>
 
=== Rule of Sarrus ===
The [[Kaidah Sarrus|rule of Sarrus]] is a mnemonic for the determinant of a {{nowrap|3 × 3}} matrix: the sum of the products of three diagonal north-west to south-east lines of matrix elements, minus the sum of the products of three diagonal south-west to north-east lines of elements, when the copies of the first two columns of the matrix are written beside it as in the illustration:
 
<math>\begin{align}
~~~~~~\begin{vmatrix} a & b & c \\ e & f & g \\ h & i & j \end{vmatrix} =
\end{align}</math> [[Berkas:Sarrus_ABC_red_blue_solid_dashed.svg|200x200px]] <math>\qquad= \color{red}{ afj + bgh + cei}\color{blue}{- hfc - iga- jeb}</math>
 
This scheme for calculating the determinant of a {{nowrap|3 × 3}} matrix does not carry over into higher dimensions.
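The mnemonic can be transcribed literally for a {{nowrap|3 × 3}} matrix (a sketch; `det_sarrus` is a hypothetical helper name):

```python
def det_sarrus(m):
    """Rule of Sarrus, valid for 3 x 3 matrices only: the three down-right
    diagonal products minus the three up-right diagonal products."""
    assert len(m) == 3 and all(len(row) == 3 for row in m), "3 x 3 only"
    (a, b, c), (d, e, f), (g, h, i) = m
    return (a * e * i + b * f * g + c * d * h) - (c * e * g + a * f * h + b * d * i)

print(det_sarrus([[-2, 2, -3], [-1, 1, 3], [2, 0, -1]]))  # → 18
```

The explicit size check reflects the caveat above: the rule does not generalize beyond three dimensions.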
 
=== Levi-Civita symbol ===
It is sometimes useful to extend the Leibniz formula to a summation in which not only permutations, but all sequences of ''n'' indices in the range {{nowrap|1, ..., ''n''}} occur, ensuring that the contribution of a sequence will be zero unless it denotes a permutation. Thus the totally antisymmetric [[simbol Levi-Civita]] <math>\varepsilon_{i_1,\cdots,i_n}</math> extends the signature of a permutation, by setting <math>\varepsilon_{\sigma(1),\cdots,\sigma(n)} = \operatorname{sgn}(\sigma)</math> for any permutation ''σ'' of ''n'', and <math>\varepsilon_{i_1,\cdots,i_n} = 0</math> when no permutation ''σ'' exists such that <math>\sigma(j) = i_j</math> for <math>j=1,\ldots,n</math> (or equivalently, whenever some pair of indices are equal). The determinant of an {{nowrap|''n'' × ''n''}} matrix can then be expressed using an ''n''-fold summation as
 
: <math>\det(A) = \sum_{i_1,i_2,\ldots,i_n=1}^n \varepsilon_{i_1\cdots i_n} a_{1,i_1} \cdots a_{n,i_n},</math>
 
or using two epsilon symbols as
 
: <math> \det(A) = \frac{1}{n!}\sum\varepsilon_{i_1\cdots i_n} \varepsilon_{j_1\cdots j_n} a_{i_1 j_1} \cdots a_{i_n j_n},</math>
 
where ''i<sub>r</sub>'' and ''j<sub>r</sub>'' are summed over {{nowrap|1, ..., ''n''}}.
 
However, through the use of [[tensor]] notation and the suppression of the summation symbol (Einstein's summation convention), a much more compact expression of the determinant of a system of size <math>n=3</math>, <math>a^m_n</math>, can be obtained:
 
: <math>\det(a^m_n)e_{rst} = e_{ijk}a_r^i a_s^j a_t^k</math>
 
where <math>e_{rst}</math> and <math>e_{ijk}</math> represent 'e-systems' that take on the values 0, +1 and −1 given the number of permutations of <math> ijk </math> and <math> rst </math>. More specifically, <math>e_{ijk}</math> is equal to 0 when there is a repeated index in <math> ijk </math>; +1 when an even number of permutations of <math> ijk </math> is present; −1 when an odd number of permutations of <math> ijk </math> is present. The number of indices present in the e-systems is equal to <math> n </math> and so can be generalized in this manner.<ref>{{cite book|last1=McConnell|date=1957|url=https://archive.org/details/applicationoften0000mcco|title=Applications of Tensor Analysis|publisher=Dover Publications|pages=[https://archive.org/details/applicationoften0000mcco/page/10 10–17]|url-access=registration}}</ref>
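The single-epsilon sum above can be evaluated directly, at the cost of iterating over all ''n''<sup>''n''</sup> index tuples rather than the ''n''! permutations (an illustrative sketch; the names `epsilon` and `det_epsilon` are ours):

```python
from itertools import product
from math import prod

def epsilon(indices):
    """Levi-Civita symbol: 0 if any index repeats, otherwise the sign
    (+1 / -1) of the permutation given by the indices."""
    if len(set(indices)) != len(indices):
        return 0
    inversions = sum(1 for i in range(len(indices))
                       for j in range(i + 1, len(indices)) if indices[i] > indices[j])
    return -1 if inversions % 2 else 1

def det_epsilon(a):
    """det(A) = sum over all index tuples (i1, ..., in) of
    epsilon(i1, ..., in) * a[0][i1] * ... * a[n-1][in]."""
    n = len(a)
    return sum(epsilon(idx) * prod(a[r][idx[r]] for r in range(n))
               for idx in product(range(n), repeat=n))
```

Tuples with a repeated index contribute zero, so the sum collapses to exactly the Leibniz formula.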
 
== Notes ==
<references group="" responsive="1"></references>
 
== References ==
 
* {{Citation|last=Axler|first=Sheldon Jay|authorlink=Sheldon Axler|year=1997|title=Linear Algebra Done Right|publisher=Springer-Verlag|edition=2nd|isbn=0-387-98259-0}}
* {{Citation|last1=de Boor|first1=Carl|author1-link=Carl R. de Boor|title=An empty exercise|url=http://ftp.cs.wisc.edu/Approx/empty.pdf|doi=10.1145/122272.122273|year=1990|journal=ACM SIGNUM Newsletter|volume=25|issue=2|pages=3–7}}.
* {{Citation|last=Lay|first=David C.|date=August 22, 2005|title=Linear Algebra and Its Applications|publisher=Addison Wesley|edition=3rd|isbn=978-0-321-28713-7}}
* {{Citation|last=Meyer|first=Carl D.|date=February 15, 2001|title=Matrix Analysis and Applied Linear Algebra|publisher=Society for Industrial and Applied Mathematics (SIAM)|isbn=978-0-89871-454-8|url=http://www.matrixanalysis.com/DownloadChapters.html|url-status=dead|archive-url=https://web.archive.org/web/20091031193126/http://matrixanalysis.com/DownloadChapters.html|archive-date=2009-10-31}}
* {{citation|last=Muir|first=Thomas|authorlink=Thomas Muir (mathematician)|title=A treatise on the theory of determinants|others=Revised and enlarged by William H. Metzler|origyear=1933|year=1960|publisher=Dover|location=New York, NY}}
* {{Citation|last=Poole|first=David|year=2006|title=Linear Algebra: A Modern Introduction|publisher=Brooks/Cole|edition=2nd|isbn=0-534-99845-3}}
* [[G. Baley Price]] (1947) "Some identities in the theory of determinants", [[American Mathematical Monthly]] 54:75–90 {{mr|id=0019078}}
* {{Citation|last1=Horn|first1=R. A.|last2=Johnson|first2=C. R.|year=2013|title=Matrix Analysis|publisher=Cambridge University Press|edition=2nd|isbn=978-0-521-54823-6}}
* {{Citation|last=Anton|first=Howard|year=2005|title=Elementary Linear Algebra (Applications Version)|publisher=Wiley International|edition=9th}}
* {{Citation|last=Leon|first=Steven J.|year=2006|title=Linear Algebra With Applications|publisher=Pearson Prentice Hall|edition=7th}}
 
----
{{Short description|In mathematics, invariant of square matrices}}
{{about|mathematics|determinants in epidemiology|Risk factor|determinants in immunology|Epitope}}
 
In [[mathematics]], the '''determinant''' is a [[Scalar (mathematics)|scalar value]] that is a [[Function (mathematics)|function]] of the entries of a [[square matrix]]. The determinant of a matrix {{math|''A''}} is commonly denoted {{math|det(''A'')}}, {{math|det ''A''}}, or {{math|{{abs|''A''}}}}. Its value characterizes some properties of the matrix and the [[linear map]] represented by the matrix. In particular, the determinant is nonzero [[if and only if]] the matrix is [[Invertible matrix|invertible]] and the linear map represented by the matrix is an [[Linear isomorphism|isomorphism]]. The determinant of a product of matrices is the product of their determinants (which follows directly from the above properties).
 
The determinant of a {{math|2 × 2}} matrix is
 
: <math>\begin{vmatrix} a & b\\c & d \end{vmatrix}=ad-bc,</math>
 
and the determinant of a {{math|3 × 3}} matrix is
 
: <math> \begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix}= aei + bfg + cdh - ceg - bdi - afh.</math>
 
The determinant of an {{math|''n'' × ''n''}} matrix can be defined in several equivalent ways, the most common being the [[Leibniz formula for determinants|Leibniz formula]], which expresses the determinant as a sum of <math>n!</math> (the [[factorial]] of ''{{mvar|n}}'') signed products of matrix entries. It can be computed by the [[Laplace expansion]], which expresses the determinant as a [[linear combination]] of determinants of submatrices, or with [[Gaussian elimination]], which expresses the determinant as the product of the diagonal entries of a [[diagonal matrix]] that is obtained by a succession of [[Elementary row operation|elementary row operations]].
 
Determinants can also be defined by some of their properties: the determinant is the unique function defined on the {{math|''n'' × ''n''}} matrices that has the four following properties. The determinant of the [[identity matrix]] is {{math|1}}; the exchange of two rows multiplies the determinant by {{math|−1}}; multiplying a row by a number multiplies the determinant by this number; and adding to a row a multiple of another row does not change the determinant. (The above properties relating to rows may be replaced by the corresponding statements with respect to columns.)
 
Determinants occur throughout mathematics. For example, a matrix is often used to represent the [[Coefficient|coefficients]] in a [[system of linear equations]], and determinants can be used to solve these equations ([[Cramer's rule]]), although other methods of solution are computationally much more efficient. Determinants are used for defining the [[characteristic polynomial]] of a matrix, whose roots are the [[Eigenvalue|eigenvalues]]. In [[geometry]], the signed ''{{mvar|n}}''-dimensional [[volume]] of an ''{{mvar|n}}''-dimensional [[parallelepiped]] is expressed by a determinant, and the determinant of (the matrix of) a [[linear transformation]] determines how the [[Orientability|orientation]] and the ''{{mvar|n}}''-dimensional volume are transformed. This is used in [[calculus]] with [[Exterior differential form|exterior differential forms]] and the [[Jacobian determinant]], in particular for [[Integration by substitution#Substitution for multiple variables|changes of variables]] in [[Multiple integral|multiple integrals]].
 
== Two by two matrices ==
The determinant of a {{math|2 × 2}} matrix <math>\begin{pmatrix} a & b \\c & d \end{pmatrix}</math> is denoted either by "{{math|det}}" or by vertical bars around the matrix, and is defined as
 
: <math>\det \begin{pmatrix} a & b \\c & d \end{pmatrix} = \begin{vmatrix} a & b \\c & d \end{vmatrix} = ad - bc.</math>
 
For example,
 
: <math>\det \begin{pmatrix} 3 & 7 \\1 & -4 \end{pmatrix} = \begin{vmatrix} 3 & 7 \\ 1 & {-4} \end{vmatrix} = 3 \cdot (-4) - 7 \cdot 1 = -19.</math>
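As a quick sanity check, the {{math|2 × 2}} formula can be evaluated in a few lines of Python (the helper name <code>det2</code> is illustrative, not a standard function):

```python
# Determinant of a 2x2 matrix [[a, b], [c, d]], directly from ad - bc.
def det2(a, b, c, d):
    return a * d - b * c

# The worked example above:
print(det2(3, 7, 1, -4))  # 3*(-4) - 7*1 = -19
```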
 
=== First properties ===
The determinant has several key properties that can be proved by direct evaluation of the definition for <math>2 \times 2</math>-matrices, and that continue to hold for determinants of larger matrices. They are as follows:<ref>{{harvnb|Lang|1985|loc=§VII.1}}</ref> first, the determinant of the [[identity matrix]] <math>\begin{pmatrix}1 & 0 \\ 0 & 1 \end{pmatrix}</math> is 1. Second, the determinant is zero if two rows are the same:
 
: <math>\begin{vmatrix} a & b \\ a & b \end{vmatrix} = ab - ba = 0.</math>
 
This holds similarly if the two columns are the same. Moreover,
 
: <math>\begin{vmatrix}a & b + b' \\ c & d + d' \end{vmatrix} = a(d+d')-(b+b')c = \begin{vmatrix}a & b\\ c & d \end{vmatrix} + \begin{vmatrix}a & b' \\ c & d' \end{vmatrix}.</math>
 
Finally, if any column is multiplied by some number <math>r</math> (i.e., all entries in that column are multiplied by that number), the determinant is also multiplied by that number:
 
: <math>\begin{vmatrix} r \cdot a & b \\ r \cdot c & d \end{vmatrix} = rad - brc = r(ad-bc) = r \cdot \begin{vmatrix} a & b \\c & d \end{vmatrix}.</math>
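These properties can be verified mechanically in the {{math|2 × 2}} case; a minimal Python sketch (the values chosen here are arbitrary):

```python
def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

# The identity matrix has determinant 1.
assert det2([[1, 0], [0, 1]]) == 1
# Two equal rows give determinant 0.
assert det2([[2, 5], [2, 5]]) == 0
# Additivity in a column: splitting the second column (b + b', d + d').
assert det2([[1, 2 + 4], [3, 7 + 5]]) == \
       det2([[1, 2], [3, 7]]) + det2([[1, 4], [3, 5]])
# Scaling one column scales the determinant by the same factor.
r = 3
assert det2([[r * 1, 2], [r * 4, 9]]) == r * det2([[1, 2], [4, 9]])
```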
 
== Geometric meaning ==
[[Berkas:Area_parallellogram_as_determinant.svg|ka|jmpl|The area of the parallelogram is the absolute value of the determinant of the matrix formed by the vectors representing the parallelogram's sides.]]
If the matrix entries are real numbers, the matrix {{math|A}} can be used to represent two [[Linear map|linear maps]]: one that maps the [[standard basis]] vectors to the rows of {{math|A}}, and one that maps them to the columns of {{math|A}}. In either case, the images of the basis vectors form a [[parallelogram]] that represents the image of the [[unit square]] under the mapping. The parallelogram defined by the rows of the above matrix is the one with vertices at {{math|(0, 0)}}, {{math|(''a'', ''b'')}}, {{math|(''a'' + ''c'', ''b'' + ''d'')}}, and {{math|(''c'', ''d'')}}, as shown in the accompanying diagram.
 
The absolute value of {{math|''ad'' − ''bc''}} is the area of the parallelogram, and thus represents the scale factor by which areas are transformed by {{math|A}}. (The parallelogram formed by the columns of {{math|A}} is in general a different parallelogram, but since the determinant is symmetric with respect to rows and columns, the area will be the same.)
 
The absolute value of the determinant together with the sign becomes the ''oriented area'' of the parallelogram. The oriented area is the same as the usual [[Area (geometry)|area]], except that it is negative when the angle from the first to the second vector defining the parallelogram turns in a clockwise direction (which is opposite to the direction one would get for the [[identity matrix]]).
 
To show that {{math|''ad'' − ''bc''}} is the signed area, one may consider a matrix containing two vectors {{math|'''u''' ≡ (''a'', ''b'')}} and {{math|'''v''' ≡ (''c'', ''d'')}} representing the parallelogram's sides. The signed area can be expressed as {{math|{{!}}'''u'''{{!}} {{!}}'''v'''{{!}} sin ''θ''}} for the angle ''θ'' between the vectors, which is simply base times height, the length of one vector times the perpendicular component of the other. Due to the [[sine]] this already is the signed area, yet it may be expressed more conveniently using the [[cosine]] of the complementary angle to a perpendicular vector, e.g. {{math|1='''u'''<sup>⊥</sup> = (−''b'', ''a'')}}, so that {{math|{{!}}'''u'''<sup>⊥</sup>{{!}} {{!}}'''v'''{{!}} cos ''θ&prime;''}}, which can be determined by the pattern of the [[scalar product]] to be equal to {{math|''ad'' − ''bc''}}:
 
: <math>\text{Signed area} =
|\boldsymbol{u}|\,|\boldsymbol{v}|\,\sin\,\theta = \left|\boldsymbol{u}^\perp\right|\,\left|\boldsymbol{v}\right|\,\cos\,\theta' =
\begin{pmatrix} -b \\ a \end{pmatrix} \cdot \begin{pmatrix} c \\ d \end{pmatrix} = ad - bc.
</math>
 
[[Berkas:Determinant_parallelepiped.svg|ka|jmpl|300x300px|The volume of this [[parallelepiped]] is the absolute value of the determinant of the matrix formed by the columns constructed from the vectors r1, r2, and r3.]]
Thus the determinant gives the scaling factor and the orientation induced by the mapping represented by ''A''. When the determinant is equal to one, the linear mapping defined by the matrix is [[Equiareal map|equi-areal]] and orientation-preserving.
 
The object known as the ''[[bivector]]'' is related to these ideas. In 2D, it can be interpreted as an ''oriented plane segment'' formed by imagining two vectors each with origin {{math|(0, 0)}}, and coordinates {{math|(''a'', ''b'')}} and {{math|(''c'', ''d'')}}. The bivector magnitude (denoted by {{math|(''a'', ''b'') ∧ (''c'', ''d'')}}) is the ''signed area'', which is also the determinant {{math|''ad'' − ''bc''}}.<ref>{{cite AV media|url=https://www.youtube.com/watch?v=6XghF70fqkY|title=Episode&nbsp;4|last=Wildberger|first=Norman J.|publisher=[[University of New South Wales]]|year=2010|place=Sydney, Australia|series=WildLinAlg|archive-url=https://ghostarchive.org/varchive/youtube/20211211/6XghF70fqkY|archive-date=2021-12-11|url-status=live|medium=video lecture|via=YouTube}}{{cbignore}}</ref>
 
If an {{math|''n'' × ''n''}} [[Real number|real]] matrix ''A'' is written in terms of its column vectors <math>A = \left[\begin{array}{c|c|c|c} \mathbf{a}_1 & \mathbf{a}_2 & \cdots & \mathbf{a}_n\end{array}\right]</math>, then
 
: <math>
A\begin{pmatrix}1 \\ 0\\ \vdots \\0\end{pmatrix} = \mathbf{a}_1, \quad
A\begin{pmatrix}0 \\ 1\\ \vdots \\0\end{pmatrix} = \mathbf{a}_2, \quad
\ldots, \quad
A\begin{pmatrix}0 \\0 \\ \vdots \\1\end{pmatrix} = \mathbf{a}_n.
</math>
 
This means that <math>A</math> maps the unit [[Hypercube|''n''-cube]] to the ''n''-dimensional [[Parallelepiped#Parallelotope|parallelotope]] defined by the vectors <math>\mathbf{a}_1, \mathbf{a}_2, \ldots, \mathbf{a}_n,</math> the region <math>P = \left\{c_1 \mathbf{a}_1 + \cdots + c_n\mathbf{a}_n \mid 0 \leq c_i\leq 1 \ \forall i\right\}.</math>
 
The determinant gives the [[Orientation (vector space)|signed]] ''n''-dimensional volume of this parallelotope, <math>\det(A) = \pm \text{vol}(P),</math> and hence describes more generally the ''n''-dimensional volume scaling factor of the [[linear transformation]] produced by ''A''.<ref>{{cite web|title=Determinants and Volumes|url=https://textbooks.math.gatech.edu/ila/determinants-volumes.html|website=textbooks.math.gatech.edu|access-date=16 March 2018}}</ref> (The sign shows whether the transformation preserves or reverses [[Orientation (vector space)|orientation]].) In particular, if the determinant is zero, then this parallelotope has volume zero and is not fully ''n''-dimensional, which indicates that the dimension of the image of ''A'' is less than ''n''. This [[Rank–nullity theorem|means]] that ''A'' produces a linear transformation which is neither [[Surjective function|onto]] nor [[Injective function|one-to-one]], and so is not invertible.
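The volume-scaling interpretation is easy to illustrate numerically in two dimensions (pure Python; names illustrative):

```python
def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

# Columns (2, 0) and (1, 3) span a parallelogram of area 6:
# the unit square is scaled by |det| = 6, orientation preserved.
A = [[2, 1],
     [0, 3]]
assert det2(A) == 6
# Swapping the two columns reverses orientation: the sign flips.
assert det2([[1, 2], [3, 0]]) == -6
```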
 
== Definition ==
Let ''A'' be a [[square matrix]] with ''n'' rows and ''n'' columns, so that it can be written as
 
: <math>A = \begin{bmatrix}
a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\
a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n,1} & a_{n,2} & \cdots & a_{n,n}
\end{bmatrix}.</math>
 
The entries <math>a_{1,1}</math> etc. are, for many purposes, real or complex numbers. As discussed below, the determinant is also defined for matrices whose entries are in a [[commutative ring]].
 
The determinant of ''A'' is denoted by det(''A''), or it can be denoted directly in terms of the matrix entries by writing enclosing bars instead of brackets:
 
: <math>\begin{vmatrix}
a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\
a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n,1} & a_{n,2} & \cdots & a_{n,n}
\end{vmatrix}.</math>
 
There are various equivalent ways to define the determinant of a square matrix ''A'', i.e. one with the same number of rows and columns: the determinant can be defined via the [[Leibniz formula for determinants|Leibniz formula]], an explicit formula involving sums of products of certain entries of the matrix. The determinant can also be characterized as the unique function depending on the entries of the matrix satisfying certain properties. This approach can also be used to compute determinants by simplifying the matrices in question.
 
=== Leibniz formula ===
{{main|Leibniz formula for determinants}}
 
==== 3 × 3 matrices ====
The ''Leibniz formula'' for the determinant of a {{math|3 × 3}} matrix is the following:
 
: <math>\begin{vmatrix}a&b&c\\d&e&f\\g&h&i\end{vmatrix}
= aei + bfg + cdh - ceg - bdi - afh.\ </math>
 
In this expression, each term has one factor from each row, all in different columns, arranged in increasing row order. For example, ''bdi'' has ''b'' from the first row second column, ''d'' from the second row first column, and ''i'' from the third row third column. The signs are determined by how many transpositions of factors are necessary to arrange the factors in increasing order of their columns (given that the terms are arranged left-to-right in increasing row order): positive for an even number of transpositions and negative for an odd number. For the example of ''bdi'', the single transposition of ''bd'' to ''db'' gives ''dbi,'' whose three factors are from the first, second and third columns respectively; this is an odd number of transpositions, so the term appears with negative sign.
[[Berkas:Sarrus_rule1.svg|jmpl|[[Rule of Sarrus]]]]
The [[rule of Sarrus]] is a mnemonic for the expanded form of this determinant: the sum of the products of three diagonal north-west to south-east lines of matrix elements, minus the sum of the products of three diagonal south-west to north-east lines of elements, when the copies of the first two columns of the matrix are written beside it as in the illustration. This scheme for calculating the determinant of a {{math|3 × 3}} matrix does not carry over into higher dimensions.
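The rule of Sarrus translates directly into code; a sketch, assuming the matrix is given as a 3 × 3 list of rows:

```python
def sarrus(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    # Three "north-west to south-east" diagonal products,
    # minus three "south-west to north-east" diagonal products.
    return a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h

assert sarrus([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) == 1
assert sarrus([[2, 0, 0], [0, 3, 0], [0, 0, 4]]) == 24
```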
 
==== ''n'' × ''n'' matrices ====
Generalizing the above to higher dimensions, the determinant of an <math>n \times n</math> matrix is an expression involving [[Permutation|permutations]] and their [[Signature (permutation)|signatures]]. A permutation of the set <math>\{1, 2, \dots, n \}</math> is a [[Bijection|bijective function]] <math>\sigma</math> from this set to itself, with values <math>\sigma(1), \sigma(2),\ldots,\sigma(n)</math> exhausting the entire set. The set of all such permutations, called the [[symmetric group]], is commonly denoted <math>S_n</math>. The signature <math>\sgn(\sigma)</math> of a permutation <math>\sigma</math> is <math>+1,</math> if the permutation can be obtained with an even number of transpositions (exchanges of two entries); otherwise, it is <math>-1.</math>
 
Given a matrix
 
: <math>A=\begin{bmatrix}
a_{1,1}\ldots a_{1,n}\\
\vdots\qquad\vdots\\
a_{n,1}\ldots a_{n,n}
\end{bmatrix},</math>
 
the Leibniz formula for its determinant is, using [[sigma notation]] for the sum,
 
: <math>\det(A)=\begin{vmatrix}
a_{1,1}\ldots a_{1,n}\\
\vdots\qquad\vdots\\
a_{n,1}\ldots a_{n,n}
\end{vmatrix} = \sum_{\sigma \in S_n}\sgn(\sigma)a_{1,\sigma(1)}\cdots a_{n,\sigma(n)}.</math>
 
Using [[pi notation]] for the product, this can be shortened into
 
: <math>\det(A) = \sum_{\sigma \in S_n} \left( \sgn(\sigma) \prod_{i=1}^n a_{i,\sigma(i)}\right)</math>.
 
The [[Levi-Civita symbol]] <math>\varepsilon_{i_1,\ldots,i_n}</math> is defined on the ''{{mvar|n}}''-[[Tuple|tuples]] of integers in <math>\{1,\ldots,n\}</math> as {{math|0}} if two of the integers are equal, and otherwise as the signature of the permutation defined by the ''n-''tuple of integers. With the Levi-Civita symbol, the Leibniz formula becomes
 
: <math>\det(A) = \sum_{i_1,i_2,\ldots,i_n} \varepsilon_{i_1\cdots i_n} a_{1,i_1} \!\cdots a_{n,i_n},</math>
 
where the sum is taken over all ''{{mvar|n}}''-tuples of integers in <math>\{1,\ldots,n\}.</math> <ref>{{cite book|last1=McConnell|date=1957|url=https://archive.org/details/applicationoften0000mcco|title=Applications of Tensor Analysis|publisher=Dover Publications|pages=[https://archive.org/details/applicationoften0000mcco/page/10 10–17]|url-access=registration}}</ref><ref>{{harvnb|Harris|2014|loc=§4.7}}</ref>
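The Leibniz formula can be implemented directly with <code>itertools.permutations</code>; a sketch, practical only for small {{mvar|n}} because the sum has <math>n!</math> terms:

```python
from itertools import permutations
from math import prod

def sgn(p):
    # Signature via the parity of the number of inversions.
    inversions = sum(p[i] > p[j]
                     for i in range(len(p)) for j in range(i + 1, len(p)))
    return -1 if inversions % 2 else 1

def det_leibniz(A):
    n = len(A)
    return sum(sgn(p) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

assert det_leibniz([[3, 7], [1, -4]]) == -19
assert det_leibniz([[1, 2, 3], [4, 5, 6], [7, 8, 10]]) == -3
```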
 
== Properties of the determinant ==
 
=== Characterization of the determinant ===
The determinant can be characterized by the following three key properties. To state these, it is convenient to regard an <math>n \times n</math>-matrix ''A'' as being composed of its <math>n</math> columns, so denoted as
 
: <math>A = \big ( a_1, \dots, a_n \big ),</math>
 
where the [[column vector]] <math>a_i</math> (for each ''i'') is composed of the entries of the matrix in the ''i''-th column.
 
# <math>\det\left(I\right) = 1</math>, where <math>I</math> is the [[identity matrix]].
# The determinant is ''[[Multilinear map|multilinear]]'': if the ''j''th column of a matrix <math>A</math> is written as a [[linear combination]] <math>a_j = r \cdot v + w</math> of two [[Column vector|column vectors]] ''v'' and ''w'' and a number ''r'', then the determinant of ''A'' is expressible as a similar linear combination:
#: <math>\begin{align}|A|
&= \big | a_1, \dots, a_{j-1}, r \cdot v + w, a_{j+1}, \dots, a_n \big | \\
&= r \cdot | a_1, \dots, v, \dots, a_n | + | a_1, \dots, w, \dots, a_n |
\end{align}</math>
# The determinant is ''[[Alternating form|alternating]]'': whenever two columns of a matrix are identical, its determinant is 0:
#: <math>| a_1, \dots, v, \dots, v, \dots, a_n| = 0.</math>
 
If the determinant is defined using the Leibniz formula as above, these three properties can be proved by direct inspection of that formula. Some authors also approach the determinant directly using these three properties: it can be shown that there is exactly one function that assigns to any <math>n \times n</math>-matrix ''A'' a number that satisfies these three properties.<ref>[[Serge Lang]], ''Linear Algebra'', 2nd Edition, Addison-Wesley, 1971, pp 173, 191.</ref> This also shows that this more abstract approach to the determinant yields the same definition as the one using the Leibniz formula.
 
To see this it suffices to expand the determinant by multi-linearity in the columns into a (huge) linear combination of determinants of matrices in which each column is a [[standard basis]] vector. These determinants are either 0 (by the alternating property, whenever two columns are equal) or else ±1 (by the identity property together with the column-exchange rule below), so the linear combination gives the expression above in terms of the Levi-Civita symbol. While less technical in appearance, this characterization cannot entirely replace the Leibniz formula in defining the determinant, since without it the existence of an appropriate function is not clear.{{citation needed|date=May 2021}}
 
=== Immediate consequences ===
These rules have several further consequences:
 
* The determinant is a [[homogeneous function]], i.e., <math display="block">\det(cA) = c^n\det(A)</math> (for an <math>n \times n</math> matrix <math>A</math>).
* Interchanging any pair of columns of a matrix multiplies its determinant by&nbsp;−1. This follows from the determinant being multilinear and alternating (properties 2 and 3 above): <math display="block">|a_1, \dots, a_j, \dots a_i, \dots, a_n| = - |a_1, \dots, a_i, \dots, a_j, \dots, a_n|.</math> This formula can be applied iteratively when several columns are swapped. For example <math display="block">|a_3, a_1, a_2, a_4 \dots, a_n| = - |a_1, a_3, a_2, a_4, \dots, a_n| = |a_1, a_2, a_3, a_4, \dots, a_n|.</math> Yet more generally, any permutation of the columns multiplies the determinant by the [[Parity of a permutation|sign]] of the permutation.
* If some column can be expressed as a linear combination of the ''other'' columns (i.e. the columns of the matrix form a [[Linearly independent|linearly dependent]] set), the determinant is 0. As a special case, this includes: if some column is such that all its entries are zero, then the determinant of that matrix is 0.
* Adding a scalar multiple of one column to ''another'' column does not change the value of the determinant. This is a consequence of multilinearity and the alternating property: by multilinearity the determinant changes by a multiple of the determinant of a matrix with two equal columns, and that determinant is 0, since the determinant is alternating.
* If <math>A</math> is a [[triangular matrix]], i.e. <math>a_{ij}=0</math>, whenever <math>i>j</math> or, alternatively, whenever <math>i<j</math>, then its determinant equals the product of the diagonal entries: <math display="block">\det(A) = a_{11} a_{22} \cdots a_{nn} = \prod_{i=1}^n a_{ii}.</math> Indeed, such a matrix can be reduced, by appropriately adding multiples of the columns with fewer nonzero entries to those with more entries, to a [[diagonal matrix]] (without changing the determinant). For such a matrix, using the linearity in each column reduces to the identity matrix, in which case the stated formula holds by the very first characterizing property of determinants. Alternatively, this formula can also be deduced from the Leibniz formula, since the only permutation <math>\sigma</math> which gives a non-zero contribution is the identity permutation.
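The triangular-matrix property can be spot-checked against the explicit 3 × 3 expansion given earlier; a sketch with an arbitrary upper triangular example:

```python
# For an upper triangular matrix only the identity permutation contributes,
# so the determinant is the product of the diagonal entries.
(a, b, c), (d, e, f), (g, h, i) = [[2, 7, 1],
                                   [0, 3, 5],
                                   [0, 0, 4]]
full_expansion = a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h
assert full_expansion == 2 * 3 * 4 == 24
```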
 
==== Example ====
These characterizing properties and their consequences listed above are theoretically significant, and can also be used to compute determinants of concrete matrices. In fact, [[Gaussian elimination]] can be applied to bring any matrix into upper triangular form, and the steps in this algorithm affect the determinant in a controlled way. The following concrete example illustrates the computation of the determinant of the matrix <math>A</math> using that method:
 
: <math>A = \begin{bmatrix}
-2 & -1 & 2 \\
2 & 1 & 4 \\
-3 & 3 & -1
\end{bmatrix}. </math>
 
{| class="wikitable"
|+Computation of the determinant of matrix <math>A</math>
|Matrix
|<math>B = \begin{bmatrix}
-3 & -1 & 2 \\
3 & 1 & 4 \\
0 & 3 & -1
\end{bmatrix} </math>
|<math>C = \begin{bmatrix}
-3 & 5 & 2 \\
3 & 13 & 4 \\
0 & 0 & -1
\end{bmatrix} </math>
|<math>D = \begin{bmatrix}
5 & -3 & 2 \\
13 & 3 & 4 \\
0 & 0 & -1
\end{bmatrix} </math>
|<math>E = \begin{bmatrix}
18 & -3 & 2 \\
0 & 3 & 4 \\
0 & 0 & -1
\end{bmatrix} </math>
|-
|Obtained by
|add the second column to the first
|add 3 times the third column to the second
|swap the first two columns
|add <math>-\frac{13} 3</math> times the second column to the first
|-
|Determinant
|<math>|A| = |B|</math>
|<math>|B| = |C|</math>
|<math>|D| = -|C|</math>
|<math>|E| = |D|</math>
|}
Combining these equalities gives <math>|A| = -|E| = -(18 \cdot 3 \cdot (-1)) = 54.</math>
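The result <math>|A| = 54</math> can be confirmed independently with the explicit 3 × 3 (Sarrus) formula:

```python
# Independent check of the column-reduction computation above.
(a, b, c), (d, e, f), (g, h, i) = [[-2, -1, 2],
                                   [2, 1, 4],
                                   [-3, 3, -1]]
det_A = a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h
assert det_A == 54
```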
 
=== Transpose ===
The determinant of the [[transpose]] of <math>A</math> equals the determinant of ''A'':
 
: <math>\det\left(A^\textsf{T}\right) = \det(A)</math>.
 
This can be proven by inspecting the Leibniz formula.<ref>{{harvnb|Lang|1987|loc=§VI.7, Theorem 7.5}}</ref> This implies that in all the properties mentioned above, the word "column" can be replaced by "row" throughout. For example, viewing an {{math|''n'' × ''n''}} matrix as being composed of ''n'' rows, the determinant is an ''n''-linear function.
 
=== Multiplicativity and matrix groups ===
The determinant is a ''multiplicative map'', i.e., for square matrices <math>A</math> and <math>B</math> of equal size, the determinant of a [[matrix product]] equals the product of their determinants:
 
: <math>\det(AB) = \det (A) \det (B)</math>
 
This key fact can be proven by observing that, for a fixed matrix <math>B</math>, both sides of the equation are alternating and multilinear as a function depending on the columns of <math>A</math>. Moreover, they both take the value <math>\det B</math> when <math>A</math> is the identity matrix. The above-mentioned unique characterization of alternating multilinear maps therefore shows this claim.<ref>Alternatively, {{harvnb|Bourbaki|1998|loc=§III.8, Proposition 1}} proves this result using the [[functoriality]] of the exterior power.</ref>
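Multiplicativity is easy to spot-check numerically; a 2 × 2 sketch in pure Python (helper names illustrative):

```python
def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

def matmul2(X, Y):
    # 2x2 matrix product.
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A, B = [[1, 2], [3, 4]], [[2, 0], [1, 2]]
# det(A) = -2, det(B) = 4, det(AB) = -8.
assert det2(matmul2(A, B)) == det2(A) * det2(B) == -8
```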
 
A matrix <math>A</math> with entries in a [[Field (mathematics)|field]] is [[Invertible matrix|invertible]] precisely if its determinant is nonzero. This follows from the multiplicativity of the determinant and the formula for the inverse involving the adjugate matrix mentioned below. In this event, the determinant of the inverse matrix is given by
 
: <math>\det\left(A^{-1}\right) = \frac{1}{\det(A)} = [\det(A)]^{-1}</math>.
 
In particular, products and inverses of matrices with non-zero determinant (respectively, determinant one) still have this property. Thus, the set of such matrices (of fixed size <math>n</math> over a field <math>K</math>) forms a group known as the [[general linear group]] <math>\operatorname{GL}_n(K)</math> (respectively, a [[subgroup]] called the [[special linear group]] <math>\operatorname{SL}_n(K) \subset \operatorname{GL}_n(K)</math>). More generally, the word "special" indicates the subgroup of another [[matrix group]] of matrices of determinant one. Examples include the [[special orthogonal group]] (which if ''n'' is 2 or 3 consists of all [[Rotation matrix|rotation matrices]]), and the [[special unitary group]].
 
Because the determinant respects multiplication and inverses, it is in fact a [[group homomorphism]] from <math>\operatorname{GL}_n(K)</math> into the multiplicative group <math>K^\times</math> of nonzero elements of <math>K</math>. This homomorphism is surjective and its kernel is <math>\operatorname{SL}_n(K)</math> (the matrices with determinant one). Hence, by the [[first isomorphism theorem]], this shows that <math>\operatorname{SL}_n(K)</math> is a [[normal subgroup]] of <math>\operatorname{GL}_n(K)</math>, and that the [[quotient group]] <math>\operatorname{GL}_n(K)/\operatorname{SL}_n(K)</math> is isomorphic to <math>K^\times</math>.
 
The [[Cauchy–Binet formula]] is a generalization of that product formula for ''rectangular'' matrices. This formula can also be recast as a multiplicative formula for [[Compound matrix|compound matrices]] whose entries are the determinants of all quadratic submatrices of a given matrix.<ref>{{harvnb|Horn|Johnson|2018|loc=§0.8.7}}</ref><ref>{{harvnb|Kung|Rota|Yan|2009|p=306}}</ref>
 
=== Laplace expansion ===
[[Laplace expansion]] expresses the determinant of a matrix <math>A</math> [[Recursion|recursively]] in terms of determinants of smaller matrices, known as its [[Minor (matrix)|minors]]. The minor <math>M_{i,j}</math> is defined to be the determinant of the <math>(n-1) \times (n-1)</math>-matrix that results from <math>A</math> by removing the <math>i</math>-th row and the <math>j</math>-th column. The expression <math>(-1)^{i+j}M_{i,j}</math> is known as a [[Cofactor (linear algebra)|cofactor]]. For every <math>i</math>, one has the equality
 
: <math>\det(A) = \sum_{j=1}^n (-1)^{i+j} a_{i,j} M_{i,j},</math>
 
which is called the ''Laplace expansion along the ''{{mvar|i}}''th row''. For example, the Laplace expansion along the first row (<math>i=1</math>) gives the following formula:
 
: <math>
\begin{vmatrix}a&b&c\\ d&e&f\\ g&h&i\end{vmatrix} =
a\begin{vmatrix}e&f\\ h&i\end{vmatrix} - b\begin{vmatrix}d&f\\ g&i\end{vmatrix} + c\begin{vmatrix}d&e\\ g&h\end{vmatrix}
</math>
 
Unwinding the determinants of these <math>2 \times 2</math>-matrices gives back the Leibniz formula mentioned above. Similarly, the ''Laplace expansion along the <math>j</math>-th column'' is the equality
 
: <math>\det(A)= \sum_{i=1}^n (-1)^{i+j} a_{i,j} M_{i,j}.</math>
 
Laplace expansion can be used iteratively for computing determinants, but this approach is inefficient for large matrices. However, it is useful for computing the determinants of highly symmetric matrices such as the [[Vandermonde matrix]]<math display="block">\begin{vmatrix}
1 & 1 & 1 & \cdots & 1 \\
x_1 & x_2 & x_3 & \cdots & x_n \\
x_1^2 & x_2^2 & x_3^2 & \cdots & x_n^2 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
x_1^{n-1} & x_2^{n-1} & x_3^{n-1} & \cdots & x_n^{n-1}
\end{vmatrix} =
\prod_{1 \leq i < j \leq n} \left(x_j - x_i\right).
</math>The ''n''-term Laplace expansion along a row or column can be [[Laplace expansion#Laplace expansion of a determinant by complementary minors|generalized]] to write an {{math|''n'' × ''n''}} determinant as a sum of <math>\tbinom nk</math> [[Binomial coefficient|terms]], each the product of the determinant of a {{math|''k'' × ''k''}} [[Minor (linear algebra)|submatrix]] and the determinant of the complementary {{math|(''n'' − ''k'') × (''n'' − ''k'')}} submatrix.
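The Vandermonde identity can be verified for small cases against a direct Leibniz evaluation; a sketch (exact integer arithmetic, so no rounding issues):

```python
from itertools import permutations
from math import prod

def det_leibniz(A):
    n = len(A)
    def sgn(p):
        inv = sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        return -1 if inv % 2 else 1
    return sum(sgn(p) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

xs = [2, 3, 5]
V = [[x ** k for x in xs] for k in range(len(xs))]  # rows: 1, x, x^2
rhs = prod(xs[j] - xs[i]
           for i in range(len(xs)) for j in range(i + 1, len(xs)))
assert det_leibniz(V) == rhs == 6  # (3-2)(5-2)(5-3) = 6
```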
 
==== Adjugate matrix ====
The [[adjugate matrix]] <math>\operatorname{adj}(A)</math> is the transpose of the matrix of the cofactors, that is,
 
: <math>(\operatorname{adj}(A))_{i,j} = (-1)^{i+j} M_{ji}.</math>
 
For every matrix, one has<ref>{{harvnb|Horn|Johnson|2018|loc=§0.8.2}}.</ref>
 
: <math>(\det A) I = A\operatorname{adj}A = (\operatorname{adj}A)\,A. </math>
 
Thus the adjugate matrix can be used for expressing the inverse of a [[nonsingular matrix]]:
 
: <math>A^{-1} = \frac 1{\det A}\operatorname{adj}A. </math>
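In the 2 × 2 case the adjugate of <math>\begin{pmatrix} a & b \\ c & d \end{pmatrix}</math> is <math>\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}</math>, and the inverse formula can be checked with exact arithmetic; a sketch using <code>fractions</code>:

```python
from fractions import Fraction

def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

def adj2(m):
    # Transpose of the cofactor matrix of [[a, b], [c, d]].
    (a, b), (c, d) = m
    return [[d, -b], [-c, a]]

A = [[4, 7], [2, 6]]  # det = 10
inv = [[Fraction(x, det2(A)) for x in row] for row in adj2(A)]
# A times its inverse is the identity matrix.
prod_ = [[sum(A[i][k] * inv[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
assert prod_ == [[1, 0], [0, 1]]
```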
 
=== Block matrices ===
The formula for the determinant of a <math>2 \times 2</math>-matrix above continues to hold, under appropriate further assumptions, for a [[block matrix]], i.e., a matrix composed of four submatrices <math>A, B, C, D</math> of dimension <math>m \times m</math>, <math>m \times n</math>, <math>n \times m</math> and <math>n \times n</math>, respectively. The easiest such formula, which can be proven using either the Leibniz formula or a factorization involving the [[Schur complement]], is
 
: <math>\det\begin{pmatrix}A& 0\\ C& D\end{pmatrix} = \det(A) \det(D) = \det\begin{pmatrix}A& B\\ 0& D\end{pmatrix}.</math>
 
If <math>A</math> is [[Invertible matrix|invertible]], then it follows with results from the section on multiplicativity that
 
: <math>\begin{align}
\det\begin{pmatrix}A& B\\ C& D\end{pmatrix}
& = \det(A)\det\begin{pmatrix}A& B\\ C& D\end{pmatrix}
\underbrace{\det\begin{pmatrix}A^{-1}& -A^{-1} B\\ 0& I_n\end{pmatrix}}_{=\,\det(A^{-1})\,=\,(\det A)^{-1}}\\
& = \det(A) \det\begin{pmatrix}I_m& 0\\ C A^{-1}& D-C A^{-1} B\end{pmatrix}\\
& = \det(A) \det(D - C A^{-1} B),
\end{align}</math>
 
which simplifies to <math>\det (A) (D - C A^{-1} B)</math> when <math>D</math> is a <math>1 \times 1</math>-matrix.
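With scalar (1 × 1) blocks, this Schur-complement factorization reduces to the familiar {{math|''ad'' − ''bc''}} formula, which makes for a quick numerical check (values arbitrary):

```python
from fractions import Fraction

# det [[a, b], [c, d]] = a * (d - c * a^(-1) * b) when all four blocks are scalars.
a, b, c, d = 6, 4, 3, 5
schur = d - c * Fraction(1, a) * b  # Schur complement of the block a
assert a * schur == a * d - b * c == 18
```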
 
A similar result holds when <math>D</math> is invertible, namely
 
: <math>\begin{align}
\det\begin{pmatrix}A& B\\ C& D\end{pmatrix}
& = \det(D)\det\begin{pmatrix}A& B\\ C& D\end{pmatrix}
\underbrace{\det\begin{pmatrix}I_m& 0\\ -D^{-1} C& D^{-1}\end{pmatrix}}_{=\,\det(D^{-1})\,=\,(\det D)^{-1}}\\
& = \det(D) \det\begin{pmatrix}A - B D^{-1} C& B D^{-1}\\ 0& I_n\end{pmatrix}\\
& = \det(D) \det(A - B D^{-1} C).
\end{align}</math>
 
Both results can be combined to derive [[Sylvester's determinant theorem]], which is also stated below.
 
If the blocks are square matrices of the ''same'' size, further formulas hold. For example, if <math>C</math> and <math>D</math> [[Commutativity|commute]] (i.e., <math>CD=DC</math>), then<ref>{{Cite journal|last=Silvester|first=J. R.|year=2000|title=Determinants of Block Matrices|url=https://hal.archives-ouvertes.fr/hal-01509379/document|journal=Math. Gaz.|volume=84|issue=501|pages=460–467|doi=10.2307/3620776|jstor=3620776|s2cid=41879675}}</ref>
 
: <math>\det\begin{pmatrix}A& B\\ C& D\end{pmatrix} = \det(AD - BC).</math>
 
This formula has been generalized to matrices composed of more than <math>2 \times 2</math> blocks, again under appropriate commutativity conditions among the individual blocks.<ref>{{cite journal|last1=Sothanaphan|first1=Nat|date=January 2017|title=Determinants of block matrices with noncommuting blocks|journal=Linear Algebra and Its Applications|volume=512|pages=202–218|arxiv=1805.06027|doi=10.1016/j.laa.2016.10.004|s2cid=119272194}}</ref>
 
For <math>A = D </math> and <math>B = C</math>, the following formula holds (even if <math>A</math> and <math>B</math> do not commute){{citation needed|date=May 2021}}
 
: <math>\det\begin{pmatrix}A& B\\ B& A\end{pmatrix} = \det(A - B) \det(A + B).</math>
 
=== Sylvester's determinant theorem ===
[[Sylvester's determinant theorem]] states that for ''A'', an {{math|''m'' × ''n''}} matrix, and ''B'', an {{math|''n'' × ''m''}} matrix (so that ''A'' and ''B'' have dimensions allowing them to be multiplied in either order forming a square matrix):
 
: <math>\det\left(I_\mathit{m} + AB\right) = \det\left(I_\mathit{n} + BA\right),</math>
 
where ''I<sub>m</sub>'' and ''I<sub>n</sub>'' are the {{math|''m'' × ''m''}} and {{math|''n'' × ''n''}} identity matrices, respectively.
 
From this general result several consequences follow.{{ordered list|For the case of column vector ''c'' and row vector ''r'', each with ''m'' components, the formula allows quick calculation of the determinant of a matrix that differs from the identity matrix by a matrix of rank 1:
:<math>\det\left(I_\mathit{m} + cr\right) = 1 + rc.</math>|More generally,<ref>Proofs can be found in http://www.ee.ic.ac.uk/hp/staff/dmb/matrix/proof003.html</ref> for any invertible {{math|''m'' × ''m''}} matrix ''X'',
:<math>\det(X + AB) = \det(X) \det\left(I_\mathit{m} + BX^{-1}A\right),</math>|For a column and row vector as above:
: <math>\det(X + cr) = \det(X) \det\left(1 + rX^{-1}c\right) = \det(X) + r\,\operatorname{adj}(X)\,c.</math>|For square matrices <math>A</math> and <math>B</math> of the same size, the matrices <math>AB</math> and <math>BA</math> have the same characteristic polynomials (hence the same eigenvalues).|list-style-type=lower-alpha}}
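Sylvester's determinant theorem and its rank-1 special case are easy to confirm numerically; the following sketch (NumPy is an assumption here, not part of the article) checks both for random matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 2
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, m))

lhs = np.linalg.det(np.eye(m) + A @ B)   # an m x m determinant
rhs = np.linalg.det(np.eye(n) + B @ A)   # an n x n determinant
assert np.isclose(lhs, rhs)

# Rank-1 special case: det(I + c r) = 1 + r c
c = rng.standard_normal((m, 1))
r = rng.standard_normal((1, m))
assert np.isclose(np.linalg.det(np.eye(m) + c @ r), 1 + (r @ c).item())
```

Note that the theorem lets one replace a 4 × 4 determinant by a cheaper 2 × 2 one, which is the main practical use of the identity.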
 
=== Sum ===
The determinant of the sum <math>A+B</math> of two square matrices of the same size is not in general expressible in terms of the determinants of ''A'' and of ''B''. However, for [[Positive-definite matrix|positive semidefinite matrices]] <math>A</math>, <math>B</math> and <math>C</math> of equal size,<math display="block">\det(A + B + C) + \det(C) \geq \det(A + C) + \det(B + C)\text{,}</math>with the corollary<ref>{{cite arXiv|last1=Lin|first1=Minghua|last2=Sra|first2=Suvrit|title=Completely strong superadditivity of generalized matrix functions|eprint=1410.1958|class=math.FA|year=2014}}</ref><ref>{{cite journal|last1=Paksoy|last2=Turkmen|last3=Zhang|year=2014|title=Inequalities of Generalized Matrix Functions via Tensor Products|url=https://nsuworks.nova.edu/cgi/viewcontent.cgi?article=1062&context=math_facarticles|journal=Electronic Journal of Linear Algebra|volume=27|pages=332–341|doi=10.13001/1081-3810.1622}}</ref><math display="block">\det(A + B) \geq \det(A) + \det(B)\text{.}</math>Conversely, if <math>A</math> and <math>B</math> are [[Hermitian matrix|Hermitian]], positive-definite, and size <math>n\times n</math>, then the determinant has concave <math>n</math><sup>th</sup> root;<ref>{{cite web|last1=Serre|first1=Denis|date=Oct 18, 2010|title=Concavity of det<sup><sup>1</sup>&frasl;<sub>''n''</sub></sup> over HPD<sub>''n''</sub>.|url=https://mathoverflow.net/questions/42594/concavity-of-det1-n-over-hpd-n|website=MathOverflow}}</ref> this implies<math display="block">\sqrt[n]{\det{\!(A+B)}}\geq\sqrt[n]{\det{\!(A)}}+\sqrt[n]{\det{\!(B)}}</math>by homogeneity.
 
==== Sum identity for 2×2 matrices ====
For the special case of <math>2\times 2</math> matrices with complex entries, the determinant of the sum can be written in terms of determinants and traces in the following identity:
 
: <math>\det(A+B) = \det(A) + \det(B) + \text{tr}(A)\text{tr}(B) - \text{tr}(AB).</math>
{{math proof|title=Proof of identity|proof=This can be shown by writing out each term in components <math>A_{ij}, B_{ij}</math>. The left-hand side is
:<math>(A_{11} + B_{11})(A_{22} + B_{22}) - (A_{12} + B_{12})(A_{21} + B_{21}).</math>
Expanding gives
:<math>A_{11}A_{22} + B_{11}A_{22} + A_{11}B_{22} + B_{11}B_{22} - A_{12}A_{21} - B_{12}A_{21} - A_{12}B_{21} - B_{12}B_{21}.</math>
The terms which are quadratic in <math>A</math> are seen to be <math>\det(A)</math>, and similarly for <math>B</math>, so the expression can be written
:<math>\det(A) + \det(B) + A_{11}B_{22} + B_{11}A_{22} - A_{12}B_{21} - B_{12}A_{21}.</math>
We can then write the cross-terms as
:<math>(A_{11} + A_{22})(B_{11} + B_{22}) - (A_{11}B_{11} + A_{12}B_{21} + A_{21}B_{12} + A_{22}B_{22})</math>
which can be recognized as
:<math>\text{tr}(A)\text{tr}(B) - \text{tr}(AB).</math>
completing the proof.}}This has an application to <math>2\times 2</math> matrix algebras. For example, consider the complex numbers as a matrix algebra. The complex numbers have a representation as matrices of the form<math display="block">aI + b\mathbf{i} := a\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + b\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}</math>with <math>a</math> and <math>b</math> real. Since <math>\text{tr}(\mathbf{i}) = 0</math>, taking <math>A = aI</math> and <math>B = b\mathbf{i}</math> in the above identity gives
 
: <math>\det(aI + b\mathbf{i}) = a^2\det(I) + b^2\det(\mathbf{i}) = a^2 + b^2.</math>
 
This result follows from <math>\text{tr}(\mathbf{i}) = 0</math> and <math>\det(I) = \det(\mathbf{i}) = 1</math> alone.
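The <math>2\times 2</math> sum identity and its complex-number corollary can be checked directly; the following sketch (NumPy is an assumption, not part of the article) verifies both:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2))

det = np.linalg.det
tr = np.trace
# det(A + B) = det(A) + det(B) + tr(A) tr(B) - tr(AB) for 2x2 matrices
lhs = det(A + B)
rhs = det(A) + det(B) + tr(A) * tr(B) - tr(A @ B)
assert np.isclose(lhs, rhs)

# The complex-number representation: det(aI + b*i) = a^2 + b^2
a, b = 3.0, 4.0
i = np.array([[0.0, -1.0], [1.0, 0.0]])
assert np.isclose(det(a * np.eye(2) + b * i), a**2 + b**2)
```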
 
== Properties of the determinant in relation to other notions ==
 
=== Eigenvalues and characteristic polynomial ===
The determinant is closely related to two other central concepts in linear algebra, the [[Eigenvalue|eigenvalues]] and the [[characteristic polynomial]] of a matrix. Let <math>A</math> be an <math>n \times n</math>-matrix with [[Complex number|complex]] entries. Then, by the [[fundamental theorem of algebra]], <math>A</math> must have exactly ''n'' [[Eigenvectors|eigenvalues]] <math>\lambda_1, \lambda_2, \ldots, \lambda_n</math>. (Here it is understood that an eigenvalue with [[algebraic multiplicity]] ''{{mvar|μ}}'' occurs ''{{mvar|μ}}'' times in this list.) The determinant of ''{{mvar|A}}'' then equals the ''product'' of these eigenvalues,
 
: <math>\det(A) = \prod_{i=1}^n \lambda_i=\lambda_1\lambda_2\cdots\lambda_n.</math>
 
The product of all non-zero eigenvalues is referred to as the [[pseudo-determinant]].
 
From this, one immediately sees that the determinant of a matrix <math>A</math> is zero if and only if <math>0</math> is an eigenvalue of <math>A</math>. In other words, <math>A</math> is invertible if and only if <math>0</math> is not an eigenvalue of <math>A</math>.
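The eigenvalue product formula is easy to verify numerically; in the sketch below (NumPy assumed, not part of the article), the eigenvalues of a real matrix are complex in general, but their product has vanishing imaginary part because complex eigenvalues come in conjugate pairs:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))

eigenvalues = np.linalg.eigvals(A)   # complex in general
product = np.prod(eigenvalues)

# det(A) equals the product of the eigenvalues (imaginary parts cancel)
assert np.isclose(product.real, np.linalg.det(A))
assert np.isclose(product.imag, 0.0)
```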
 
The characteristic polynomial is defined as<ref>{{harvnb|Lang|1985|loc=§VIII.2}}, {{harvnb|Horn|Johnson|2018|loc=Def. 1.2.3}}</ref>
 
: <math>\chi_A(t) = \det(t \cdot I - A).</math>
 
Here, <math>t</math> is the [[Indeterminate (variable)|indeterminate]] of the polynomial and <math>I</math> is the identity matrix of the same size as <math>A</math>. By means of this polynomial, determinants can be used to find the [[Eigenvalue|eigenvalues]] of the matrix <math>A</math>: they are precisely the [[Root of a polynomial|roots]] of this polynomial, i.e., those complex numbers <math>\lambda</math> such that
 
: <math>\chi_A(\lambda) = 0.</math>
 
A [[Hermitian matrix]] is [[Positive definite matrix|positive definite]] if all its eigenvalues are positive. [[Sylvester's criterion]] asserts that this is equivalent to the determinants of the submatrices
 
: <math>A_k := \begin{bmatrix}
a_{1,1} & a_{1,2} & \cdots & a_{1,k} \\
a_{2,1} & a_{2,2} & \cdots & a_{2,k} \\
\vdots & \vdots & \ddots & \vdots \\
a_{k,1} & a_{k,2} & \cdots & a_{k,k}
\end{bmatrix}</math>
 
being positive, for all <math>k</math> between <math>1</math> and <math>n</math>.<ref>{{harvnb|Horn|Johnson|2018|loc=Observation 7.1.2, Theorem 7.2.5}}</ref>
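Sylvester's criterion translates directly into a few lines of code; this sketch (NumPy assumed, and restricted to real symmetric matrices) tests positive definiteness by checking the leading principal minors:

```python
import numpy as np

def is_positive_definite(A: np.ndarray) -> bool:
    """Sylvester's criterion for a real symmetric matrix:
    A is positive definite iff every leading principal minor is positive."""
    return all(np.linalg.det(A[:k, :k]) > 0 for k in range(1, A.shape[0] + 1))

# M.T @ M + I is symmetric positive definite for any real M.
rng = np.random.default_rng(4)
M = rng.standard_normal((3, 3))
assert is_positive_definite(M.T @ M + np.eye(3))
assert not is_positive_definite(-np.eye(3))
```

In practice a Cholesky factorization is the standard numerical test, but the minor-based check mirrors the criterion as stated.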
 
=== Trace ===
The [[Trace (linear algebra)|trace]] tr(''A'') is by definition the sum of the diagonal entries of ''{{mvar|A}}'' and also equals the sum of the eigenvalues. Thus, for complex matrices ''{{mvar|A}}'',
 
: <math>\det(\exp(A)) = \exp(\operatorname{tr}(A))</math>
 
or, for real matrices ''{{mvar|A}}'',
 
: <math>\operatorname{tr}(A) = \log(\det(\exp(A))).</math>
 
Here exp(''{{mvar|A}}'') denotes the [[matrix exponential]] of ''{{mvar|A}}''; the identity holds because every eigenvalue ''{{mvar|λ}}'' of ''{{mvar|A}}'' corresponds to the eigenvalue exp(''{{mvar|λ}}'') of exp(''{{mvar|A}}''). In particular, given any [[Matrix logarithm|logarithm]] of ''{{mvar|A}}'', that is, any matrix ''{{mvar|L}}'' satisfying
 
: <math>\exp(L) = A</math>
 
the determinant of ''{{mvar|A}}'' is given by
 
: <math>\det(A) = \exp(\operatorname{tr}(L)).</math>
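The identity det(exp(''A'')) = exp(tr(''A'')) can be checked without a general matrix-exponential routine by restricting to a symmetric matrix, whose exponential is computable from its eigendecomposition (a sketch with NumPy assumed, not part of the article):

```python
import numpy as np

# Matrix exponential of a symmetric matrix via its eigendecomposition.
rng = np.random.default_rng(5)
S = rng.standard_normal((3, 3))
A = (S + S.T) / 2                      # symmetric, hence orthogonally diagonalizable

w, V = np.linalg.eigh(A)
exp_A = V @ np.diag(np.exp(w)) @ V.T   # exp(A) = V exp(Lambda) V^T

# det(exp(A)) = prod(exp(w_i)) = exp(sum(w_i)) = exp(tr(A))
assert np.isclose(np.linalg.det(exp_A), np.exp(np.trace(A)))
```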
 
For example, for {{math|1=''n'' = 2}}, {{math|1=''n'' = 3}}, and {{math|1=''n'' = 4}}, respectively,
 
: <math>\begin{align}
\det(A) &= \frac{1}{2}\left(\left(\operatorname{tr}(A)\right)^2 - \operatorname{tr}\left(A^2\right)\right), \\
\det(A) &= \frac{1}{6}\left(\left(\operatorname{tr}(A)\right)^3 - 3\operatorname{tr}(A) ~ \operatorname{tr}\left(A^2\right) + 2 \operatorname{tr}\left(A^3\right)\right), \\
\det(A) &= \frac{1}{24}\left(\left(\operatorname{tr}(A)\right)^4 - 6\operatorname{tr}\left(A^2\right)\left(\operatorname{tr}(A)\right)^2 + 3\left(\operatorname{tr}\left(A^2\right)\right)^2 + 8\operatorname{tr}\left(A^3\right)~\operatorname{tr}(A) - 6\operatorname{tr}\left(A^4\right)\right).
\end{align}</math>
 
cf. [[Cayley–Hamilton theorem#Illustration for specific dimensions and practical applications|Cayley–Hamilton theorem]]. Such expressions are deducible from combinatorial arguments, [[Newton's identities#Computing coefficients|Newton's identities]], or the [[Faddeev–LeVerrier algorithm]]. That is, for generic ''{{mvar|n}}'', {{math|det ''A'' {{=}} (−1)<sup>''n''</sup>''c''<sub>0</sub>}}, the signed constant term of the [[characteristic polynomial]], determined recursively from
 
: <math>c_n = 1; ~~~c_{n-m} = -\frac{1}{m}\sum_{k=1}^m c_{n-m+k} \operatorname{tr}\left(A^k\right) ~~(1 \le m \le n)~.</math>
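The trace recursion above can be implemented in a few lines; this sketch (NumPy assumed, not part of the article) computes the determinant purely from traces of matrix powers and compares it against a library determinant:

```python
import numpy as np

def det_from_traces(A: np.ndarray) -> float:
    """Determinant via the recursion
    c_n = 1,  c_{n-m} = -(1/m) * sum_{k=1}^{m} c_{n-m+k} * tr(A^k),
    with det(A) = (-1)^n * c_0."""
    n = A.shape[0]
    powers = [np.eye(n)]
    for _ in range(n):
        powers.append(powers[-1] @ A)   # A^0, A^1, ..., A^n
    c = {n: 1.0}
    for m in range(1, n + 1):
        c[n - m] = -sum(c[n - m + k] * np.trace(powers[k])
                        for k in range(1, m + 1)) / m
    return (-1) ** n * c[0]

rng = np.random.default_rng(6)
A = rng.standard_normal((4, 4))
assert np.isclose(det_from_traces(A), np.linalg.det(A))
```

For ''n'' = 2 the recursion reproduces the formula ((tr ''A'')² − tr ''A''²)/2 displayed above.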
 
In the general case, this may also be obtained from<ref>A proof can be found in the Appendix B of {{cite journal|last1=Kondratyuk|first1=L. A.|last2=Krivoruchenko|first2=M. I.|year=1992|title=Superconducting quark matter in SU(2) color group|journal=Zeitschrift für Physik A|volume=344|issue=1|pages=99–115|bibcode=1992ZPhyA.344...99K|doi=10.1007/BF01291027|s2cid=120467300}}</ref>
 
: <math>\det(A) = \sum_{\begin{array}{c}k_1,k_2,\ldots,k_n \geq 0\\k_1+2k_2+\cdots+nk_n=n\end{array}}\prod_{l=1}^n \frac{(-1)^{k_l+1}}{l^{k_l}k_l!} \operatorname{tr}\left(A^l\right)^{k_l},</math>
 
where the sum is taken over the set of all integers {{math|''k<sub>l</sub>'' ≥ 0}} satisfying the equation
 
: <math>\sum_{l=1}^n lk_l = n.</math>
 
The formula can be expressed in terms of the complete exponential [[Bell polynomial]] of ''n'' arguments ''s<sub>l</sub>'' = −(''l'' – 1)! tr(''A<sup>l</sup>'') as
 
: <math>\det(A) = \frac{(-1)^n}{n!} B_n(s_1, s_2, \ldots, s_n).</math>
 
This formula can also be used to find the determinant of a matrix {{math|''A<sup>I</sup><sub>J</sub>''}} with multidimensional indices {{math|1=''I'' = (''i''<sub>1</sub>, ''i''<sub>2</sub>, ..., ''i<sub>r</sub>'')}} and {{math|1=''J'' = (''j''<sub>1</sub>, ''j''<sub>2</sub>, ..., ''j<sub>r</sub>'')}}. The product and trace of such matrices are defined in a natural way as
 
: <math>(AB)^I_J = \sum_K A^I_K B^K_J, \qquad \operatorname{tr}(A) = \sum_I A^I_I.</math>
 
An important identity, valid in arbitrary dimension ''{{mvar|n}}'', can be obtained from the [[Mercator series]] expansion of the logarithm when the expansion converges. If every eigenvalue of ''A'' is less than 1 in absolute value,
 
: <math>\det(I + A) = \sum_{k=0}^\infty \frac{1}{k!} \left(-\sum_{j=1}^\infty \frac{(-1)^j}{j} \operatorname{tr}\left(A^j\right)\right)^k\,,</math>
 
where {{math|''I''}} is the identity matrix. More generally, if
 
: <math>\sum_{k=0}^\infty \frac{1}{k!} \left(-\sum_{j=1}^\infty \frac{(-1)^j s^j}{j}\operatorname{tr}\left(A^j\right)\right)^k\,,</math>
 
is expanded as a formal [[power series]] in ''{{mvar|s}}'' then all coefficients of ''{{mvar|s}}<sup>{{mvar|m}}</sup>'' for {{math|''m'' &gt; ''n''}} are zero and the remaining polynomial is {{math|det(''I'' + ''sA'')}}.
 
=== Upper and lower bounds ===
For a positive definite matrix {{math|''A''}}, the trace operator gives the following tight lower and upper bounds on the log determinant
 
: <math>\operatorname{tr}\left(I - A^{-1}\right) \le \log\det(A) \le \operatorname{tr}(A - I)</math>
 
with equality if and only if {{math|1=''A'' = ''I''}}. This relationship can be derived via the formula for the [[Kullback-Leibler divergence]] between two [[multivariate normal]] distributions.
 
Also,
 
: <math>\frac{n}{\operatorname{tr}\left(A^{-1}\right)} \leq \det(A)^\frac{1}{n} \leq \frac{1}{n}\operatorname{tr}(A) \leq \sqrt{\frac{1}{n}\operatorname{tr}\left(A^2\right)}.</math>
 
These inequalities can be proved by expressing the traces and the determinant in terms of the eigenvalues. As such, they represent the well-known fact that the [[harmonic mean]] is less than the [[geometric mean]], which is less than the [[arithmetic mean]], which is, in turn, less than the [[root mean square]].
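Both the log-determinant bounds and the mean inequalities can be confirmed on a random positive definite matrix; the following sketch (NumPy assumed, not part of the article) does so:

```python
import numpy as np

rng = np.random.default_rng(7)
M = rng.standard_normal((4, 4))
A = M @ M.T + np.eye(4)            # symmetric positive definite
n = A.shape[0]

A_inv = np.linalg.inv(A)
logdet = np.log(np.linalg.det(A))
# tr(I - A^{-1}) <= log det(A) <= tr(A - I), with equality iff A = I
assert np.trace(np.eye(n) - A_inv) <= logdet <= np.trace(A - np.eye(n))

# Harmonic mean <= geometric mean <= arithmetic mean <= root mean square
hm = n / np.trace(A_inv)
gm = np.linalg.det(A) ** (1 / n)
am = np.trace(A) / n
rms = np.sqrt(np.trace(A @ A) / n)
assert hm <= gm <= am <= rms
```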
 
=== Derivative ===
The Leibniz formula shows that the determinant of real (or analogously for complex) square matrices is a [[polynomial]] function from <math>\mathbf R^{n \times n}</math> to <math>\mathbf R</math>. In particular, it is everywhere [[differentiable]]. Its derivative can be expressed using [[Jacobi's formula]]:<ref>{{harvnb|Horn|Johnson|2018|loc=§&nbsp;0.8.10}}</ref>
 
: <math>\frac{d \det(A)}{d \alpha} = \operatorname{tr}\left(\operatorname{adj}(A) \frac{d A}{d \alpha}\right).</math>
 
where <math>\operatorname{adj}(A)</math> denotes the [[adjugate]] of <math>A</math>. In particular, if <math>A</math> is invertible, we have
 
: <math>\frac{d \det(A)}{d \alpha} = \det(A) \operatorname{tr}\left(A^{-1} \frac{d A}{d \alpha}\right).</math>
 
Expressed in terms of the entries of <math>A</math>, these are
 
: <math> \frac{\partial \det(A)}{\partial A_{ij}}= \operatorname{adj}(A)_{ji} = \det(A)\left(A^{-1}\right)_{ji}.</math>
 
Yet another equivalent formulation is
 
: <math>\det(A + \epsilon X) - \det(A) = \operatorname{tr}(\operatorname{adj}(A) X) \epsilon + O\left(\epsilon^2\right) = \det(A) \operatorname{tr}\left(A^{-1} X\right) \epsilon + O\left(\epsilon^2\right)</math>,
 
using [[big O notation]]. The special case where <math>A = I</math>, the identity matrix, yields
 
: <math>\det(I + \epsilon X) = 1 + \operatorname{tr}(X) \epsilon + O\left(\epsilon^2\right).</math>
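The first-order expansion can be tested by finite differences; this sketch (NumPy assumed, not part of the article) compares det(''A'' + ''εX'') − det(''A'') against the trace formula for a small ''ε'':

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.standard_normal((3, 3)) + 3 * np.eye(3)   # comfortably invertible
X = rng.standard_normal((3, 3))
eps = 1e-6

# det(A + eps X) - det(A) ~ eps * det(A) * tr(A^{-1} X), up to O(eps^2)
numeric = np.linalg.det(A + eps * X) - np.linalg.det(A)
first_order = eps * np.linalg.det(A) * np.trace(np.linalg.inv(A) @ X)
assert np.isclose(numeric, first_order, rtol=1e-3, atol=1e-9)

# Special case A = I: det(I + eps X) ~ 1 + eps tr(X)
assert np.isclose(np.linalg.det(np.eye(3) + eps * X), 1 + eps * np.trace(X))
```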
 
This identity is used in describing [[Lie algebra|Lie algebras]] associated to certain matrix [[Lie group|Lie groups]]. For example, the special linear group <math>\operatorname{SL}_n</math> is defined by the equation <math>\det A = 1</math>. The above formula shows that its Lie algebra is the [[special linear Lie algebra]] <math>\mathfrak{sl}_n</math> consisting of those matrices having trace zero.
 
Writing a <math>3 \times 3</math> matrix as <math>A = \begin{bmatrix}a & b & c\end{bmatrix}</math>, where <math>a, b, c</math> are column vectors of length 3, the gradient with respect to one of the three vectors may be written as the [[cross product]] of the other two:
 
: <math>\begin{align}
\nabla_\mathbf{a}\det(A) &= \mathbf{b} \times \mathbf{c} \\
\nabla_\mathbf{b}\det(A) &= \mathbf{c} \times \mathbf{a} \\
\nabla_\mathbf{c}\det(A) &= \mathbf{a} \times \mathbf{b}.
\end{align}</math>
 
== History ==
Historically, determinants were used long before matrices: a determinant was originally defined as a property of a [[system of linear equations]]. The determinant "determines" whether the system has a unique solution (which occurs precisely if the determinant is non-zero). In this sense, determinants were first used in the Chinese mathematics textbook ''[[The Nine Chapters on the Mathematical Art]]'' (九章算術), compiled by Chinese scholars around the 3rd century BCE. In Europe, solutions of linear systems of two equations were expressed by [[Gerolamo Cardano|Cardano]] in 1545 by a determinant-like entity.<ref>{{harvnb|Grattan-Guinness|2003|loc=§6.6}}</ref>
 
Determinants proper originated from the work of [[Seki Takakazu]] in 1683 in Japan and, independently, of [[Gottfried Leibniz|Leibniz]] in 1693.<ref>Cajori, F. [[iarchive:ahistorymathema02cajogoog/page/n94|''A History of Mathematics'' p.&nbsp;80]]</ref><ref name="Campbell">Campbell, H: "Linear Algebra With Applications", pages 111–112. Appleton Century Crofts, 1971</ref><ref>{{harvnb|Eves|1990|p=405}}</ref><ref>A Brief History of Linear Algebra and Matrix Theory at: {{cite web|title=A Brief History of Linear Algebra and Matrix Theory|url=http://darkwing.uoregon.edu/~vitulli/441.sp04/LinAlgHistory.html|archive-url=https://web.archive.org/web/20120910034016/http://darkwing.uoregon.edu/~vitulli/441.sp04/LinAlgHistory.html|archive-date=2012-09-10|access-date=2012-01-24|url-status=dead|df=dmy-all}}</ref> {{harvtxt|Cramer|1750}} stated, without proof, Cramer's rule.<ref>{{harvnb|Kleiner|2007|p=80}}</ref> Both Cramer and {{harvtxt|Bezout|1779}} were led to determinants by the question of [[Plane curve|plane curves]] passing through a given set of points.<ref>{{harvtxt|Bourbaki|1994|p=59}}</ref>
 
[[Vandermonde]] (1771) first recognized determinants as independent functions.<ref name="Campbell" /> {{harvtxt|Laplace|1772}} gave the general method of expanding a determinant in terms of its complementary [[Minor (matrix)|minors]]: Vandermonde had already given a special case.<ref>Muir, Sir Thomas, ''The Theory of Determinants in the historical Order of Development'' [London, England: Macmillan and Co., Ltd., 1906]. {{JFM|37.0181.02}}</ref> Immediately following, [[Joseph Louis Lagrange|Lagrange]] (1773) treated determinants of the second and third order and applied them to questions of [[elimination theory]]; he proved many special cases of general identities.
 
[[Carl Friedrich Gauss|Gauss]] (1801) made the next advance. Like Lagrange, he made much use of determinants in the [[theory of numbers]]. He introduced the word "determinant" (Laplace had used "resultant"), though not in the present signification, but rather as applied to the [[discriminant]] of a [[Algebraic form|quantic]].<ref>{{harvnb|Kleiner|2007|loc=§5.2}}</ref> Gauss also arrived at the notion of reciprocal (inverse) determinants, and came very near the multiplication theorem.{{Clarify|date=June 2023|reason=What is "the multiplication theorem"?}}
 
The next contributor of importance is [[Jacques Philippe Marie Binet|Binet]] (1811, 1812), who formally stated the theorem relating to the product of two matrices of ''m'' columns and ''n'' rows, which for the special case of {{math|1=''m'' = ''n''}} reduces to the multiplication theorem. On the same day (November 30, 1812) that Binet presented his paper to the Academy, [[Cauchy]] also presented one on the subject. (See [[Cauchy–Binet formula]].) In this he used the word "determinant" in its present sense,<ref>The first use of the word "determinant" in the modern sense appeared in: Cauchy, Augustin-Louis "Memoire sur les fonctions qui ne peuvent obtenir que deux valeurs égales et des signes contraires par suite des transpositions operées entre les variables qu'elles renferment," which was first read at the Institute de France in Paris on November 30, 1812, and which was subsequently published in the ''Journal de l'Ecole Polytechnique'', Cahier 17, Tome 10, pages 29–112 (1815).</ref><ref>Origins of mathematical terms: http://jeff560.tripod.com/d.html</ref> summarized and simplified what was then known on the subject, improved the notation, and gave the multiplication theorem with a proof more satisfactory than Binet's.<ref name="Campbell" /><ref>History of matrices and determinants: http://www-history.mcs.st-and.ac.uk/history/HistTopics/Matrices_and_determinants.html</ref> With him begins the theory in its generality.
 
{{harvtxt|Jacobi|1841}} used the functional determinant which Sylvester later called the [[Jacobian matrix and determinant|Jacobian]].<ref>{{harvnb|Eves|1990|p=494}}</ref> In his memoirs in ''[[Crelle's Journal]]'' for 1841 he specially treats this subject, as well as the class of alternating functions which Sylvester has called ''alternants''. About the time of Jacobi's last memoirs, [[James Joseph Sylvester|Sylvester]] (1839) and [[Arthur Cayley|Cayley]] began their work. {{harvnb|Cayley|1841}} introduced the modern notation for the determinant using vertical bars.<ref>{{harvnb|Cajori|1993|loc=Vol. II, p. 92, no. 462}}</ref><ref>History of matrix notation: http://jeff560.tripod.com/matrices.html</ref>
 
The study of special forms of determinants has been the natural result of the completion of the general theory. Axisymmetric determinants have been studied by [[Lebesgue]], [[Otto Hesse|Hesse]], and Sylvester; [[persymmetric]] determinants by Sylvester and [[Hermann Hankel|Hankel]]; [[Circulant|circulants]] by [[Eugène Charles Catalan|Catalan]], [[William Spottiswoode|Spottiswoode]], [[James Whitbread Lee Glaisher|Glaisher]], and Scott; skew determinants and [[Pfaffian|Pfaffians]], in connection with the theory of [[orthogonal transformation]], by Cayley; continuants by Sylvester; [[Wronskian|Wronskians]] (so called by [[Thomas Muir (mathematician)|Muir]]) by [[Elwin Bruno Christoffel|Christoffel]] and [[Ferdinand Georg Frobenius|Frobenius]]; compound determinants by Sylvester, Reiss, and Picquet; Jacobians and [[Hessian matrix|Hessians]] by Sylvester; and symmetric gauche determinants by [[Trudi]]. Of the textbooks on the subject Spottiswoode's was the first. In America, Hanus (1886), Weld (1893), and Muir/Metzler (1933) published treatises.
 
== Applications ==
 
=== Cramer's rule ===
Determinants can be used to describe the solutions of a [[linear system of equations]], written in matrix form as <math>Ax = b</math>. This equation has a unique solution <math>x</math> if and only if <math>\det (A)</math> is nonzero. In this case, the solution is given by [[Cramer's rule]]:
 
: <math>x_i = \frac{\det(A_i)}{\det(A)} \qquad i = 1, 2, 3, \ldots, n</math>
 
where <math>A_i</math> is the matrix formed by replacing the <math>i</math>-th column of <math>A</math> by the column vector <math>b</math>. This follows immediately by column expansion of the determinant, i.e.
 
: <math>\det(A_i) =
\det\begin{bmatrix}a_1 & \ldots & b & \ldots & a_n\end{bmatrix} =
\sum_{j=1}^n x_j\det\begin{bmatrix}a_1 & \ldots & a_{i-1} & a_j & a_{i+1} & \ldots & a_n\end{bmatrix} =
x_i\det(A)
</math>
 
where the vectors <math>a_j</math> are the columns of ''A''. The rule is also implied by the identity
 
: <math>A\, \operatorname{adj}(A) = \operatorname{adj}(A)\, A = \det(A)\, I_n.</math>
 
Cramer's rule can be implemented in <math>\operatorname O(n^3)</math> time, which is comparable to more common methods of solving systems of linear equations, such as [[LU decomposition|LU]], [[QR decomposition|QR]], or [[singular value decomposition]].<ref>{{harvnb|Habgood|Arel|2012}}</ref>
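Cramer's rule translates directly into code; this sketch (NumPy assumed, not part of the article) forms each <math>A_i</math> by column replacement and checks the result against a standard solver:

```python
import numpy as np

def cramer_solve(A: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Solve Ax = b by Cramer's rule: x_i = det(A_i) / det(A)."""
    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0.0):
        raise ValueError("matrix is singular: no unique solution")
    x = np.empty(A.shape[0])
    for i in range(A.shape[0]):
        A_i = A.copy()
        A_i[:, i] = b          # replace the i-th column of A by b
        x[i] = np.linalg.det(A_i) / det_A
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
assert np.allclose(cramer_solve(A, b), np.linalg.solve(A, b))
```

For numerical work the LU-based solver is preferred; the point of the sketch is only to make the column-replacement mechanics concrete.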
 
=== Linear independence ===
Determinants can be used to characterize [[Linear independence|linearly dependent]] vectors: <math>\det A</math> is zero if and only if the column vectors (or, equivalently, the row vectors) of the matrix <math>A</math> are linearly dependent.<ref>{{harvnb|Lang|1985|loc=§VII.3}}</ref> For example, given two linearly independent vectors <math>v_1, v_2 \in \mathbf R^3</math>, a third vector <math>v_3</math> lies in the [[Plane (geometry)|plane]] [[Linear span|spanned]] by the former two vectors exactly if the determinant of the <math>3 \times 3</math>-matrix consisting of the three vectors is zero. The same idea is also used in the theory of [[Differential equation|differential equations]]: given functions <math>f_1(x), \dots, f_n(x)</math> (supposed to be <math>n-1</math> times [[Differentiable function|differentiable]]), the [[Wronskian]] is defined to be
 
: <math>W(f_1, \ldots, f_n)(x) =
\begin{vmatrix}
f_1(x) & f_2(x) & \cdots & f_n(x) \\
f_1'(x) & f_2'(x) & \cdots & f_n'(x) \\
\vdots & \vdots & \ddots & \vdots \\
f_1^{(n-1)}(x) & f_2^{(n-1)}(x) & \cdots & f_n^{(n-1)}(x)
\end{vmatrix}.</math>
 
It is non-zero (for some <math>x</math>) in a specified interval if and only if the given functions and all their derivatives up to order <math>n-1</math> are linearly independent. If it can be shown that the Wronskian is zero everywhere on an interval then, in the case of [[Analytic function|analytic functions]], this implies the given functions are linearly dependent. See [[Wronskian#The Wronskian and linear independence|the Wronskian and linear independence]]. Another such use of the determinant is the [[resultant]], which gives a criterion when two [[Polynomial|polynomials]] have a common [[Root of a function|root]].<ref>{{harvnb|Lang|2002|loc=§IV.8}}</ref>
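As a concrete Wronskian computation (a sketch with NumPy assumed, not part of the article), take the functions 1, ''x'', ''x''²; the resulting matrix is upper triangular, so the Wronskian is the constant 2, confirming linear independence:

```python
import numpy as np

# Wronskian of f1 = 1, f2 = x, f3 = x^2 at a point x:
# rows hold (f, f', f'') for each function (one function per column).
def wronskian_1_x_x2(x: float) -> float:
    W = np.array([
        [1.0, x,   x**2 ],
        [0.0, 1.0, 2 * x],
        [0.0, 0.0, 2.0  ],
    ])
    return np.linalg.det(W)

# Upper triangular: the determinant is 1 * 1 * 2 = 2 for every x.
assert np.isclose(wronskian_1_x_x2(0.0), 2.0)
assert np.isclose(wronskian_1_x_x2(3.7), 2.0)
```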
 
=== Orientation of a basis ===
{{Main|Orientation (vector space)}}
The determinant can be thought of as assigning a number to every [[sequence]] of ''n'' vectors in '''R'''<sup>''n''</sup>, by using the square matrix whose columns are the given vectors. The determinant will be nonzero if and only if the sequence of vectors is a ''basis'' for '''R'''<sup>''n''</sup>. In that case, the sign of the determinant determines whether the [[Orientation (vector space)|orientation]] of the basis is consistent with or opposite to the orientation of the [[standard basis]]. In the case of an orthogonal basis, the magnitude of the determinant is equal to the ''product'' of the lengths of the basis vectors. For instance, an [[orthogonal matrix]] with entries in '''R''' represents an [[orthonormal basis]] in [[Euclidean space]], and hence has determinant ±1 (since all the vectors have length 1). The determinant is +1 if and only if the basis has the same orientation. It is −1 if and only if the basis has the opposite orientation.
 
More generally, if the determinant of ''A'' is positive, ''A'' represents an orientation-preserving [[linear transformation]] (if ''A'' is an orthogonal {{math|2 × 2}} or {{math|3 × 3}} matrix, this is a [[Rotation (mathematics)|rotation]]), while if it is negative, ''A'' switches the orientation of the basis.
 
=== Volume and Jacobian determinant ===
As pointed out above, the [[absolute value]] of the determinant of real vectors is equal to the volume of the [[parallelepiped]] spanned by those vectors. As a consequence, if <math>f : \mathbf R^n \to \mathbf R^n</math> is the linear map given by multiplication with a matrix <math>A</math>, and <math>S \subset \mathbf R^n</math> is any [[Lebesgue measure|measurable]] [[subset]], then the volume of <math>f(S)</math> is given by <math>|\det(A)|</math> times the volume of <math>S</math>.<ref>{{harvnb|Lang|1985|loc=§VII.6, Theorem 6.10}}</ref> More generally, if the linear map <math>f : \mathbf R^n \to \mathbf R^m</math> is represented by the <math>m \times n</math>-matrix <math>A</math>, then the <math>n</math>-[[Dimension|dimensional]] volume of <math>f(S)</math> is given by:
 
: <math>\operatorname{volume}(f(S)) = \sqrt{\det\left(A^\textsf{T} A\right)} \operatorname{volume}(S).</math>
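The Gram-determinant volume factor <math>\sqrt{\det(A^\textsf{T} A)}</math> can be sanity-checked numerically; this sketch (NumPy assumed, not part of the article) verifies that it reduces to <math>|\det A|</math> for square matrices and, for two vectors in '''R'''<sup>3</sup>, agrees with the cross-product area formula:

```python
import numpy as np

rng = np.random.default_rng(9)

# A maps R^2 into R^3; the image of the unit square has area
# sqrt(det(A^T A)) times the area of the square.
A = rng.standard_normal((3, 2))
area_scale = np.sqrt(np.linalg.det(A.T @ A))

# Cross-check: for m = n the factor reduces to |det A|.
B = rng.standard_normal((2, 2))
assert np.isclose(np.sqrt(np.linalg.det(B.T @ B)), abs(np.linalg.det(B)))

# The area of the parallelogram spanned by the columns of A
# equals the norm of their cross product in R^3.
assert np.isclose(area_scale, np.linalg.norm(np.cross(A[:, 0], A[:, 1])))
```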
 
By calculating the volume of the tetrahedron bounded by four points, determinants can be used to identify [[Skew line|skew lines]]. The volume of any tetrahedron, given its [[Vertex (geometry)|vertices]] <math>a, b, c, d</math>, is <math>\frac 1 6 \cdot |\det(a-b,b-c,c-d)|</math>; the same value results from any other combination of pairs of vertices that forms a [[spanning tree]] over the vertices.
[[Berkas:Jacobian_determinant_and_distortion.svg|ka|jmpl|350x350px|A nonlinear map <math>f \colon \mathbf{R}^{2} \to \mathbf{R}^{2}</math> sends a small square (left, in red) to a distorted parallelogram (right, in red). The Jacobian at a point gives the best linear approximation of the distorted parallelogram near that point (right, in translucent white), and the Jacobian determinant gives the ratio of the area of the approximating parallelogram to that of the original square.]]
For a general [[differentiable function]], much of the above carries over by considering the [[Jacobian matrix]] of ''f''. For
 
: <math>f: \mathbf R^n \rightarrow \mathbf R^n,</math>
 
the Jacobian matrix is the {{math|''n'' × ''n''}} matrix whose entries are given by the [[Partial derivative|partial derivatives]]
 
: <math>D(f) = \left(\frac {\partial f_i}{\partial x_j}\right)_{1 \leq i, j \leq n}.</math>
 
Its determinant, the [[Jacobian determinant]], appears in the higher-dimensional version of [[integration by substitution]]: for suitable functions ''f'' and an [[open subset]] ''U'' of '''R'''<sup>''n''</sup> (the domain of ''f''), the integral over ''f''(''U'') of some other function {{math|''φ'' : '''R'''<sup>''n''</sup> → '''R'''<sup>''m''</sup>}} is given by
 
: <math>\int_{f(U)} \phi(\mathbf{v})\, d\mathbf{v} = \int_U \phi(f(\mathbf{u})) \left|\det(\operatorname{D}f)(\mathbf{u})\right| \,d\mathbf{u}.</math>
 
The Jacobian also occurs in the [[inverse function theorem]].
 
When applied to the field of [[cartography]], the determinant can be used to measure the rate of expansion of a map near the poles.<ref>{{Cite book|last=Lay|first=David|year=2021|title=Linear Algebra and Its Applications|edition=6th|publisher=Pearson|pages=172|language=English}}</ref>
 
== Abstract algebraic aspects {{anchor|Abstract formulation}} ==
 
=== Determinant of an endomorphism ===
The above identities concerning the determinant of products and inverses of matrices imply that [[Matrix similarity|similar matrices]] have the same determinant: two matrices ''A'' and ''B'' are similar, if there exists an invertible matrix ''X'' such that {{math|1=''A'' = ''X''<sup>−1</sup>''BX''}}. Indeed, repeatedly applying the above identities yields
 
: <math>\det(A) = \det(X)^{-1} \det(B)\det(X) = \det(B) \det(X)^{-1} \det(X) = \det(B).</math>
 
The determinant is therefore also called a [[Similarity invariance|similarity invariant]]. The determinant of a [[linear transformation]]
 
: <math>T : V \to V</math>
 
for some finite-dimensional [[vector space]] ''V'' is defined to be the determinant of the matrix describing it, with respect to an arbitrary choice of [[Basis (linear algebra)|basis]] in ''V''. By the similarity invariance, this determinant is independent of the choice of the basis for ''V'' and therefore only depends on the endomorphism ''T''.
 
=== Square matrices over commutative rings ===
The above definition of the determinant using the Leibniz rule works more generally when the entries of the matrix are elements of a [[commutative ring]] <math>R</math>, such as the integers <math>\mathbf Z</math>, as opposed to the [[Field (mathematics)|field]] of real or complex numbers. Moreover, the characterization of the determinant as the unique alternating multilinear map that satisfies <math>\det(I) = 1</math> still holds, as do all the properties that result from that characterization.<ref>{{harvnb|Dummit|Foote|2004|loc=§11.4}}</ref>
 
A matrix <math>A \in \operatorname{Mat}_{n \times n}(R)</math> is invertible (in the sense that there is an inverse matrix whose entries are in <math>R</math>) if and only if its determinant is an [[Unit (ring theory)|invertible element]] in <math>R</math>.<ref>{{harvnb|Dummit|Foote|2004|loc=§11.4, Theorem 30}}</ref> For <math>R = \mathbf Z</math>, this means that the determinant is +1 or −1. Such a matrix is called [[Unimodular matrix|unimodular]].
 
Being multiplicative, the determinant defines a [[group homomorphism]]

: <math>\operatorname{GL}_n(R) \rightarrow R^\times, </math>

from the [[general linear group]] (the group of invertible <math>n \times n</math>-matrices with entries in <math>R</math>) to the [[multiplicative group]] of units in <math>R</math>.
[[Berkas:Determinant_as_a_natural_transformation.svg|ka|jmpl|300x300px|The determinant is a natural transformation.]]
Given a [[ring homomorphism]] <math>f : R \to S</math>, there is a map <math>\operatorname{GL}_n(f) : \operatorname{GL}_n(R) \to \operatorname{GL}_n(S)</math> given by replacing all entries in <math>R</math> by their images under <math>f</math>. The determinant respects these maps, i.e., the identity
 
: <math>f(\det((a_{i,j}))) = \det ((f(a_{i,j})))</math>
 
holds. In other words, the displayed diagram commutes.
 
For example, the determinant of the [[complex conjugate]] of a complex matrix (which is also the determinant of its conjugate transpose) is the complex conjugate of its determinant, and for integer matrices: the reduction modulo <math>m</math> of the determinant of such a matrix is equal to the determinant of the matrix reduced modulo <math>m</math> (the latter determinant being computed using [[modular arithmetic]]). In the language of [[category theory]], the determinant is a [[natural transformation]] between the two functors <math>\operatorname{GL}_n</math> and <math>(-)^\times</math>.<ref>{{harvnb|Mac Lane|1998|loc=§I.4}}. See also ''{{section link|Natural transformation#Determinant}}''.</ref> Adding yet another layer of abstraction, this is captured by saying that the determinant is a morphism of [[Algebraic group|algebraic groups]], from the general linear group to the [[multiplicative group]],
 
: <math>\det: \operatorname{GL}_n \to \mathbb G_m.</math>
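The compatibility of the determinant with the ring homomorphism of reduction modulo <math>m</math> can be demonstrated with exact integer arithmetic; this sketch (NumPy assumed, and using a naive cofactor expansion that is only practical for small matrices) checks that reducing before or after taking the determinant gives the same residue:

```python
import numpy as np

rng = np.random.default_rng(10)
A = rng.integers(-50, 50, size=(4, 4))
m = 7

def int_det(M: np.ndarray) -> int:
    """Exact integer determinant by cofactor expansion along the first row."""
    n = M.shape[0]
    if n == 1:
        return int(M[0, 0])
    return sum((-1) ** j * int(M[0, j]) *
               int_det(np.delete(M[1:], j, axis=1)) for j in range(n))

# det(A) mod m == det(A mod m) mod m: the determinant commutes
# with the reduction homomorphism Z -> Z/mZ.
assert int_det(A) % m == int_det(A % m) % m
```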
 
=== Exterior algebra ===
{{See also|Exterior algebra#Linear algebra}}
The determinant of a linear transformation <math>T : V \to V</math> of an <math>n</math>-dimensional vector space <math>V</math> or, more generally a [[free module]] of (finite) [[Rank of a module|rank]] <math>n</math> over a commutative ring <math>R</math> can be formulated in a coordinate-free manner by considering the <math>n</math>-th [[Exterior algebra|exterior power]] <math>\bigwedge^n V</math> of <math>V</math>.<ref>{{harvnb|Bourbaki|1998|loc=§III.8}}</ref> The map <math>T</math> induces a linear map
 
: <math>\begin{align}
\bigwedge^n T: \bigwedge^n V &\rightarrow \bigwedge^n V \\
v_1 \wedge v_2 \wedge \dots \wedge v_n &\mapsto T v_1 \wedge T v_2 \wedge \dots \wedge T v_n.
\end{align}</math>
 
As <math>\bigwedge^n V</math> is one-dimensional, the map <math>\bigwedge^n T</math> is given by multiplication by some scalar, i.e., an element in <math>R</math>. Some authors such as {{harv|Bourbaki|1998}} use this fact to ''define'' the determinant to be the element in <math>R</math> satisfying the following identity (for all <math>v_i \in V</math>):
 
: <math>\left(\bigwedge^n T\right)\left(v_1 \wedge \dots \wedge v_n\right) = \det(T) \cdot v_1 \wedge \dots \wedge v_n.</math>
 
This definition agrees with the more concrete coordinate-dependent definition. This can be shown using the uniqueness of a multilinear alternating form on <math>n</math>-tuples of vectors in <math>R^n</math>. For this reason, the highest non-zero exterior power <math>\bigwedge^n V</math> (as opposed to the determinant associated to an endomorphism) is sometimes also called the determinant of <math>V</math> and similarly for more involved objects such as [[Vector bundle|vector bundles]] or [[Chain complex|chain complexes]] of vector spaces. Minors of a matrix can also be cast in this setting, by considering lower alternating forms <math>\bigwedge^k V</math> with <math>k < n</math>.<ref>{{harvnb|Lombardi|Quitté|2015|loc=§5.2}}, {{harvnb|Bourbaki|1998|loc=§III.5}}</ref>
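As a quick check of this agreement in the smallest nontrivial case, take <math>n = 2</math> and let <math>T</math> act on a basis <math>e_1, e_2</math> by <math>T e_1 = a e_1 + b e_2</math> and <math>T e_2 = c e_1 + d e_2</math>. Using <math>e_i \wedge e_i = 0</math> and <math>e_2 \wedge e_1 = -e_1 \wedge e_2</math>,

: <math>\left(\bigwedge^2 T\right)(e_1 \wedge e_2) = (a e_1 + b e_2) \wedge (c e_1 + d e_2) = (ad - bc)\, e_1 \wedge e_2,</math>

so <math>\det(T) = ad - bc</math>, matching the usual formula for the matrix of <math>T</math> in this basis.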
 
== Generalizations and related notions ==
Determinants as treated above admit several variants: the [[Permanent (mathematics)|permanent]] of a matrix is defined as the determinant, except that the factors <math>\sgn(\sigma)</math> occurring in Leibniz's rule are omitted. The [[Immanant of a matrix|immanant]] generalizes both by introducing a [[Character theory|character]] of the [[symmetric group]] <math>S_n</math> in Leibniz's rule.
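The three notions differ only in the coefficient attached to each permutation term. A small Python sketch makes the determinant/permanent contrast concrete (sign included versus omitted; the general immanant would substitute a character value for the sign):

```python
from itertools import permutations

def leibniz(M, use_sign=True):
    """Determinant (signed) or permanent (unsigned) via the Leibniz sum."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        inversions = sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        coeff = (-1) ** inversions if use_sign else 1
        prod = coeff
        for i in range(n):
            prod *= M[i][p[i]]
        total += prod
    return total

M = [[1, 2], [3, 4]]
assert leibniz(M, use_sign=True) == -2   # determinant: 1*4 - 2*3
assert leibniz(M, use_sign=False) == 10  # permanent:   1*4 + 2*3
```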
 
=== Determinants for finite-dimensional algebras ===
For any [[associative algebra]] <math>A</math> that is [[Dimension|finite-dimensional]] as a vector space over a field <math>F</math>, there is a determinant map<ref>{{harvnb|Garibaldi|2004}}</ref>
 
: <math>\det : A \to F.</math>
 
This definition proceeds by establishing the characteristic polynomial independently of the determinant, and defining the determinant as the lowest-order term of this polynomial. This general definition recovers the determinant for the [[matrix algebra]] <math>A = \operatorname{Mat}_{n \times n}(F)</math>, but also covers several further cases, including the determinant of a [[quaternion]],
 
: <math>\det (a + ib+jc+kd) = a^2 + b^2 + c^2 + d^2</math>,
 
the [[Field norm|norm]] <math>N_{L/F} : L \to F</math> of a [[field extension]], the [[Pfaffian]] of a skew-symmetric matrix, and the [[reduced norm]] of a [[central simple algebra]].
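One way to make the quaternion case concrete: quaternions embed in the <math>2 \times 2</math> complex matrices, and the ordinary determinant of the representing matrix recovers <math>a^2 + b^2 + c^2 + d^2</math>. (The particular embedding below is one standard choice, used here purely for illustration.)

```python
def quaternion_det(a, b, c, d):
    """Determinant of a + ib + jc + kd via the 2x2 complex representation
    [[a+bi, c+di], [-c+di, a-bi]]; the result is a^2 + b^2 + c^2 + d^2."""
    w = complex(a, b)
    z = complex(c, d)
    m00, m01 = w, z
    m10, m11 = complex(-c, d), w.conjugate()
    det = m00 * m11 - m01 * m10
    return det.real  # the imaginary part cancels exactly

assert quaternion_det(1, 2, 3, 4) == 1 + 4 + 9 + 16  # = 30
```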
 
=== Infinite matrices ===
For matrices with an infinite number of rows and columns, the above definitions of the determinant do not carry over directly. For example, in the Leibniz formula, an infinite sum (all of whose terms are infinite products) would have to be calculated. [[Functional analysis]] provides different extensions of the determinant for such infinite-dimensional situations, which however only work for particular kinds of operators.
 
The [[Fredholm determinant]] defines the determinant for operators known as [[Trace class operator|trace class operators]] by an appropriate generalization of the formula
 
: <math>\det(I+A) = \exp(\operatorname{tr}(\log(I+A))). </math>
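In finite dimensions, the displayed identity can be checked numerically. Below is a sketch using NumPy; the matrix is an arbitrary symmetric example chosen so that <math>I+A</math> has positive eigenvalues, which makes the matrix logarithm unambiguous.

```python
import numpy as np

# Numeric sanity check of det(I + A) = exp(tr(log(I + A))) for a small
# symmetric matrix: a finite-dimensional stand-in for the trace-class setting.
A = np.array([[0.2, 0.1],
              [0.1, 0.3]])
I = np.eye(2)

# matrix logarithm via the eigendecomposition of the symmetric matrix I + A
w, V = np.linalg.eigh(I + A)
log_IA = V @ np.diag(np.log(w)) @ V.T

lhs = np.linalg.det(I + A)
rhs = np.exp(np.trace(log_IA))
assert abs(lhs - rhs) < 1e-12
```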
 
Another infinite-dimensional notion of determinant is the [[functional determinant]].
 
=== Operators in von Neumann algebras ===
For operators in a finite [[Von Neumann algebra#Factors|factor]], one may define a positive real-valued determinant called the [[Fuglede−Kadison determinant]] using the canonical trace. In fact, corresponding to every [[State (functional analysis)#tracial state|tracial state]] on a [[von Neumann algebra]] there is a notion of Fuglede−Kadison determinant.
 
=== Related notions for non-commutative rings ===
For matrices over non-commutative rings, multilinearity and alternating properties are incompatible for {{math|''n'' ≥ 2}},<ref>In a non-commutative setting left-linearity (compatibility with left-multiplication by scalars) should be distinguished from right-linearity. Assuming linearity in the columns is taken to be left-linearity, one would have, for non-commuting scalars ''a'', ''b'':
 
: <math>ab =
ab \begin{vmatrix}1&0 \\ 0&1\end{vmatrix} =
a \begin{vmatrix}1&0 \\ 0&b\end{vmatrix} =
\begin{vmatrix}a&0 \\ 0&b\end{vmatrix} =
b \begin{vmatrix}a&0 \\ 0&1\end{vmatrix} =
ba \begin{vmatrix}1&0 \\ 0&1\end{vmatrix} = ba,
</math>a contradiction. There is no useful notion of multilinear functions over a non-commutative ring.</ref> so there is no good definition of the determinant in this setting. For square matrices with entries in a non-commutative ring, there are various difficulties in defining determinants analogously to the commutative case. A meaning can be given to the Leibniz formula provided that the order of the product is specified, and similarly for other definitions of the determinant, but non-commutativity then leads to the loss of many fundamental properties of the determinant, such as the multiplicative property and the invariance of the determinant under transposition of the matrix. Over non-commutative rings, there is moreover no reasonable notion of a multilinear form: the existence of a nonzero bilinear form that is linear on the same side in both arguments and takes a [[Regular element (ring theory)|regular element]] of ''R'' as its value on some pair of arguments already implies that ''R'' is commutative. Nevertheless, various notions of non-commutative determinant have been formulated that preserve some of the properties of determinants, notably [[Quasideterminant|quasideterminants]] and the [[Dieudonné determinant]]. For some classes of matrices with non-commutative elements, one can define the determinant and prove linear algebra theorems that are very similar to their commutative analogs. Examples include the ''q''-determinant on quantum groups, the [[Capelli determinant]] on Capelli matrices, and the [[Berezinian]] on [[supermatrices]] (i.e., matrices whose entries are elements of <math>\mathbb Z_2</math>-[[Graded ring|graded rings]]).<ref>{{Citation|url=https://books.google.com/books?id=sZ1-G4hQgIIC&q=Berezinian&pg=PA116|title=Supersymmetry for mathematicians: An introduction|isbn=978-0-8218-3574-6|last1=Varadarajan|first1=V. S|year=2004|publisher=American Mathematical Soc.|postscript=.}}</ref> [[Manin matrices]] form the class closest to matrices with commutative elements.
 
== Calculation ==
Determinants are mainly used as a theoretical tool. They are rarely calculated explicitly in [[numerical linear algebra]], where for applications such as checking invertibility and finding eigenvalues the determinant has largely been supplanted by other techniques.<ref>"... we mention that the determinant, though a convenient notion theoretically, rarely finds a useful role in numerical algorithms.", see {{harvnb|Trefethen|Bau III|1997|loc=Lecture 1}}.</ref> [[Computational geometry]], however, does frequently use calculations related to determinants.<ref>{{harvnb|Fisikopoulos|Peñaranda|2016|loc=§1.1, §4.3}}</ref>
 
While the determinant can be computed directly using the Leibniz rule, this approach is extremely inefficient for large matrices, since that formula requires calculating <math>n!</math> (<math>n</math> [[factorial]]) products for an <math>n \times n</math>-matrix. Thus, the number of required operations grows very quickly: it is [[Big O notation|of order]] <math>n!</math>. The Laplace expansion is similarly inefficient. Therefore, more involved techniques have been developed for calculating determinants.
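To get a feel for the gap, compare the term count of the Leibniz sum with the operation count of a decomposition method for a modest <math>n</math> (illustrative arithmetic only):

```python
import math

# The Leibniz formula sums n! signed products, while decomposition methods
# need on the order of n^3 operations.
n = 20
leibniz_terms = math.factorial(n)  # number of products in the Leibniz sum
decomposition_ops = n ** 3         # order of an LU-based computation

assert leibniz_terms > 10 ** 18    # 20! ~ 2.43e18: infeasible to enumerate
assert decomposition_ops == 8000   # trivially cheap by comparison
```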
 
=== Decomposition methods ===
Some methods compute <math>\det(A)</math> by writing the matrix as a product of matrices whose determinants can be more easily computed. Such techniques are referred to as decomposition methods. Examples include the [[LU decomposition]], the [[QR decomposition]] or the [[Cholesky decomposition]] (for [[Positive definite matrix|positive definite matrices]]). These methods are of order <math>\operatorname O(n^3)</math>, which is a significant improvement over <math>\operatorname O (n!)</math>.<ref>{{cite arXiv|last=Camarero|first=Cristóbal|date=2018-12-05|title=Simple, Fast and Practicable Algorithms for Cholesky, LU and QR Decomposition Using Fast Rectangular Matrix Multiplication|class=cs.NA|eprint=1812.02056}}</ref>
 
For example, LU decomposition expresses <math>A</math> as a product
 
: <math> A = PLU </math>
 
of a [[permutation matrix]] <math>P</math> (which has exactly one entry equal to <math>1</math> in each row and each column, and zeros elsewhere), a lower triangular matrix <math>L</math> and an upper triangular matrix <math>U</math>. The determinants of the two triangular matrices <math>L</math> and <math>U</math> can be quickly calculated, since they are the products of the respective diagonal entries. The determinant of <math>P</math> is just the sign <math>\varepsilon</math> of the corresponding permutation (which is <math>+1</math> for an even permutation and <math> -1 </math> for an odd permutation). Once such an LU decomposition is known for <math>A</math>, its determinant is readily computed as
 
: <math> \det(A) = \varepsilon \det(L)\cdot\det(U). </math>
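The same bookkeeping can be sketched without forming <math>L</math> and <math>U</math> explicitly: Gaussian elimination with partial pivoting tracks the sign <math>\varepsilon</math> of the row swaps and accumulates the product of the pivots. A minimal Python sketch (floating-point only, with no error analysis):

```python
def det_gauss(A):
    """Determinant via Gaussian elimination with partial pivoting, O(n^3)."""
    M = [row[:] for row in A]  # work on a copy
    n = len(M)
    sign = 1.0
    for k in range(n):
        # partial pivoting: swap in the row with the largest |entry|
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        if M[p][k] == 0:
            return 0.0  # singular matrix
        if p != k:
            M[k], M[p] = M[p], M[k]
            sign = -sign  # each row swap flips the sign
        for i in range(k + 1, n):
            factor = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= factor * M[k][j]
    result = sign
    for k in range(n):
        result *= M[k][k]  # product of the pivots
    return result

A = [[2.0, 1.0, 1.0],
     [4.0, -6.0, 0.0],
     [-2.0, 7.0, 2.0]]
assert abs(det_gauss(A) - (-16.0)) < 1e-9
```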
 
=== Further methods ===
The order <math>\operatorname O(n^3)</math> reached by decomposition methods has been improved by different methods. If two matrices of order <math>n</math> can be multiplied in time <math>M(n)</math>, where <math>M(n) \ge n^a</math> for some <math>a>2</math>, then there is an algorithm computing the determinant in time <math>O(M(n))</math>.<ref>{{harvnb|Bunch|Hopcroft|1974}}</ref> This means, for example, that an <math>\operatorname O(n^{2.376})</math> algorithm for computing the determinant exists based on the [[Coppersmith–Winograd algorithm]]. This exponent has been further lowered, as of 2016, to 2.373.<ref>{{harvnb|Fisikopoulos|Peñaranda|2016|loc=§1.1}}</ref>
 
In addition to the complexity of the algorithm, further criteria can be used to compare algorithms. Especially for applications concerning matrices over rings, algorithms that compute the determinant without any divisions exist. (By contrast, Gaussian elimination requires divisions.) One such algorithm, having complexity <math>\operatorname O(n^4)</math>, is based on the following idea: one replaces permutations (as in the Leibniz rule) by so-called [[Closed ordered walk|closed ordered walks]], in which several items can be repeated. The resulting sum has more terms than in the Leibniz rule, but in the process several of these products can be reused, making it more efficient than naively computing with the Leibniz rule.<ref>{{harvnb|Rote|2001}}</ref> Algorithms can also be assessed according to their [[bit complexity]], i.e., how many bits of accuracy are needed to store intermediate values occurring in the computation. For example, the [[Gaussian elimination]] (or LU decomposition) method is of order <math>\operatorname O(n^3)</math>, but the bit length of intermediate values can become exponentially long.<ref>{{Cite conference|first1=Xin Gui|last1=Fang|first2=George|last2=Havas|title=On the worst-case complexity of integer Gaussian elimination|book-title=Proceedings of the 1997 international symposium on Symbolic and algebraic computation|conference=ISSAC '97|pages=28–31|publisher=ACM|year=1997|location=Kihei, Maui, Hawaii, United States|url=http://perso.ens-lyon.fr/gilles.villard/BIBLIOGRAPHIE/PDF/ft_gateway.cfm.pdf|doi=10.1145/258726.258740|isbn=0-89791-875-4|access-date=2011-01-22|archive-url=https://web.archive.org/web/20110807042828/http://perso.ens-lyon.fr/gilles.villard/BIBLIOGRAPHIE/PDF/ft_gateway.cfm.pdf|archive-date=2011-08-07|url-status=dead}}</ref> By comparison, the [[Bareiss Algorithm]] is an exact-division method (it does use division, but only in cases where these divisions can be performed without remainder) of the same order, but the bit
complexity is roughly the bit size of the original entries in the matrix times <math>n</math>.<ref>{{harvnb|Fisikopoulos|Peñaranda|2016|loc=§1.1}}, {{harvnb|Bareiss|1968}}</ref>
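The Bareiss idea can be sketched in a few lines of Python: fraction-free elimination, in which every interior division is exact, so an integer matrix stays integer throughout.

```python
def bareiss_det(A):
    """Fraction-free (exact-division) determinant for integer matrices.
    Every // division below is exact, so all intermediates stay integers."""
    M = [row[:] for row in A]  # work on a copy
    n = len(M)
    sign, prev = 1, 1
    for k in range(n - 1):
        if M[k][k] == 0:
            # find a row below with a nonzero pivot and swap it in
            for i in range(k + 1, n):
                if M[i][k] != 0:
                    M[k], M[i] = M[i], M[k]
                    sign = -sign
                    break
            else:
                return 0  # whole pivot column is zero: singular matrix
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                M[i][j] = (M[i][j] * M[k][k] - M[i][k] * M[k][j]) // prev
        prev = M[k][k]
    return sign * M[n - 1][n - 1]

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
assert bareiss_det(A) == -3
```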
 
If the determinant of ''A'' and the inverse of ''A'' have already been computed, the [[matrix determinant lemma]] allows rapid calculation of the determinant of {{math|''A'' + ''uv''<sup>T</sup>}}, where ''u'' and ''v'' are column vectors.
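A numeric check of the lemma, sketched with NumPy (the matrix and vectors are arbitrary illustrative data; the shift by <math>4I</math> merely keeps ''A'' comfortably invertible):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 4 * np.eye(4)  # well-conditioned example
u = rng.standard_normal((4, 1))
v = rng.standard_normal((4, 1))

det_A = np.linalg.det(A)
A_inv = np.linalg.inv(A)

# matrix determinant lemma: det(A + u v^T) = (1 + v^T A^-1 u) det(A)
lhs = np.linalg.det(A + u @ v.T)
rhs = (1 + (v.T @ A_inv @ u).item()) * det_A
assert abs(lhs - rhs) < 1e-9 * abs(lhs)
```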
 
Charles Dodgson (i.e. [[Lewis Carroll]] of ''[[Alice's Adventures in Wonderland]]'' fame) invented a method for computing determinants called [[Dodgson condensation]]. Unfortunately this interesting method does not always work in its original form.<ref>{{Cite journal|last=Abeles|first=Francine F.|date=2008|title=Dodgson condensation: The historical and mathematical development of an experimental method|url=https://www.academia.edu/10352246|journal=Linear Algebra and Its Applications|language=en|volume=429|issue=2–3|pages=429–438|doi=10.1016/j.laa.2007.11.022}}</ref>
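A Python sketch of the condensation, assuming (as the caveat above requires) that no zero ever appears in the interior of an intermediate matrix; with that assumption every division below is exact integer division:

```python
def dodgson(A):
    """Dodgson condensation: repeatedly replace a matrix by its 2x2 connected
    minors, dividing by the interior of the matrix from two steps earlier.
    Minimal sketch: fails if an interior zero arises (the method's known flaw)."""
    cur = [row[:] for row in A]
    n = len(A)
    prev = [[1] * n for _ in range(n)]  # divisors for the first step
    while len(cur) > 1:
        m = len(cur) - 1
        nxt = [[(cur[i][j] * cur[i + 1][j + 1] - cur[i + 1][j] * cur[i][j + 1])
                // prev[i + 1][j + 1] for j in range(m)] for i in range(m)]
        prev, cur = cur, nxt
    return cur[0][0]

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
assert dodgson(A) == -3
```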
 
== See also ==
{{portal|Mathematics}} {{colbegin}}
* [[Cauchy determinant]]
* [[Cayley–Menger determinant]]
* [[Dieudonné determinant]]
* [[Slater determinant]]
* [[Determinantal conjecture]]
{{colend}}
 
== Notes ==
<references group="nb" responsive="1"></references>
<references group="" responsive="0"></references>
 
== References ==
{{See also|Linear algebra#Further reading}}
 
* {{Citation|last=Anton|first=Howard|year=2005|title=Elementary Linear Algebra (Applications Version)|publisher=Wiley International|edition=9th}}
* {{Cite book|last=Axler|first=Sheldon Jay|year=2015|title=Linear Algebra Done Right|publisher=[[Springer Science+Business Media|Springer]]|isbn=978-3-319-11079-0|edition=3rd|author-link=Sheldon Axler}}
* {{citation|first=Erwin|last=Bareiss|title=Sylvester's Identity and Multistep Integer-Preserving Gaussian Elimination|pages=565–578|url=https://www.ams.org/journals/mcom/1968-22-103/S0025-5718-1968-0226829-0/S0025-5718-1968-0226829-0.pdf|archive-url=https://web.archive.org/web/20121025053848/http://www.ams.org/journals/mcom/1968-22-103/S0025-5718-1968-0226829-0/S0025-5718-1968-0226829-0.pdf|archive-date=2012-10-25|url-status=live|journal=Mathematics of Computation|year=1968|volume=22|issue=102|doi=10.2307/2004533|jstor=2004533}}
* {{Citation|last1=de Boor|first1=Carl|author1-link=Carl R. de Boor|title=An empty exercise|url=http://ftp.cs.wisc.edu/Approx/empty.pdf|archive-url=https://web.archive.org/web/20060901214854/http://ftp.cs.wisc.edu/Approx/empty.pdf|archive-date=2006-09-01|url-status=live|doi=10.1145/122272.122273|year=1990|journal=ACM SIGNUM Newsletter|volume=25|issue=2|pages=3–7|s2cid=62780452}}
* {{Citation|last1=Bourbaki|first1=Nicolas|title=Algebra I, Chapters 1-3|isbn=9783540642435|publisher=Springer|year=1998}}
* {{cite journal|last1=Bunch|first1=J. R.|last2=Hopcroft|first2=J. E.|year=1974|title=Triangular Factorization and Inversion by Fast Matrix Multiplication|journal=[[Mathematics of Computation]]|volume=28|issue=125|pages=231–236|doi=10.1090/S0025-5718-1974-0331751-8|doi-access=free}}
* {{Citation|title=Abstract algebra|last1=Dummit|first1=David S.|last2=Foote|first2=Richard M.|date=2004|publisher=Wiley|isbn=9780471452348|edition=3rd|location=Hoboken, NJ|oclc=248917264}}
* {{Citation|last1=Fisikopoulos|journal=[[Computational Geometry (journal)|Computational Geometry]]|volume=54|year=2016|pages=1–16|title=Faster geometric algorithms via dynamic determinant computation|first1=Vissarion|first2=Luis|last2=Peñaranda|doi=10.1016/j.comgeo.2015.12.001|doi-access=free}}
* {{Citation|last1=Garibaldi|first1=Skip|title=The characteristic polynomial and determinant are not ad hoc constructions|journal=American Mathematical Monthly|volume=111|year=2004|issue=9|pages=761–778|mr=2104048|doi=10.2307/4145188|jstor=4145188|arxiv=math/0203276}}
* {{Cite journal|last1=Habgood|first1=Ken|last2=Arel|first2=Itamar|year=2012|title=A condensation-based application of Cramer's rule for solving large-scale linear systems|url=https://hal.archives-ouvertes.fr/hal-01500199/file/HA.pdf|journal=Journal of Discrete Algorithms|volume=10|pages=98–109|doi=10.1016/j.jda.2011.06.007|archive-url=https://web.archive.org/web/20190505060158/https://hal.archives-ouvertes.fr/hal-01500199/file/HA.pdf|archive-date=2019-05-05|url-status=live|doi-access=free}}
* {{Citation|last=Harris|first=Frank E.|year=2014|title=Mathematics for Physical Science and Engineering|publisher=Elsevier|isbn=9780128010495}}
* {{Citation|last=Kleiner|first=Israel|editor1-first=Israel|editor1-last=Kleiner|title=A history of abstract algebra|year=2007|publisher=Birkhäuser|isbn=978-0-8176-4684-4|mr=2347309|doi=10.1007/978-0-8176-4685-1}}
* {{Citation|first1=Joseph P.S.|last1=Kung|first2=Gian-Carlo|last2=Rota|first3=Catherine|last3=Yan|author3-link=Catherine H. Yan|title=[[Combinatorics: The Rota Way]]|publisher=Cambridge University Press|year=2009|isbn=9780521883894}}
* {{Citation|last=Lay|first=David C.|date=August 22, 2005|title=Linear Algebra and Its Applications|publisher=Addison Wesley|edition=3rd|isbn=978-0-321-28713-7}}
* {{Citation|last1=Lombardi|first1=Henri|last2=Quitté|first2=Claude|title=Commutative Algebra: Constructive Methods|year=2015|isbn=9789401799447|publisher=Springer}}
* {{Citation|last=Mac Lane|first=Saunders|title=Categories for the Working Mathematician|year=1998|series=Graduate Texts in Mathematics '''5'''|edition=2nd|publisher=Springer-Verlag|isbn=0-387-98403-8|author-link=Saunders Mac Lane|title-link=Categories for the Working Mathematician}}
* {{Citation|last=Meyer|first=Carl D.|date=February 15, 2001|title=Matrix Analysis and Applied Linear Algebra|publisher=Society for Industrial and Applied Mathematics (SIAM)|isbn=978-0-89871-454-8|url=http://www.matrixanalysis.com/DownloadChapters.html|url-status=dead|archive-url=https://web.archive.org/web/20091031193126/http://matrixanalysis.com/DownloadChapters.html|archive-date=2009-10-31}}
* {{citation|last=Muir|first=Thomas|author-link=Thomas Muir (mathematician)|title=A treatise on the theory of determinants|others=Revised and enlarged by William H. Metzler|orig-year=1933|year=1960|publisher=Dover|location=New York, NY}}
* {{Citation|last=Poole|first=David|year=2006|title=Linear Algebra: A Modern Introduction|publisher=Brooks/Cole|edition=2nd|isbn=0-534-99845-3}}
* [[G. Baley Price]] (1947) "Some identities in the theory of determinants", [[American Mathematical Monthly]] 54:75–90 {{mr|id=0019078}}
* {{Cite book|last1=Horn|first1=Roger Alan|last2=Johnson|first2=Charles Royal|year=2018|title=Matrix Analysis|publisher=[[Cambridge University Press]]|isbn=978-0-521-54823-6|edition=2nd|author-link=Roger Horn|author-link2=Charles Royal Johnson|orig-year=1985}}
* {{Citation|last=Lang|first=Serge|title=Introduction to Linear Algebra|edition=2|year=1985|publisher=Springer|isbn=9780387962054|series=Undergraduate Texts in Mathematics}}
* {{Citation|last=Lang|first=Serge|title=Linear Algebra|edition=3|year=1987|publisher=Springer|isbn=9780387964126|series=Undergraduate Texts in Mathematics}}
* {{cite book|last1=Lang|first1=Serge|date=2002|title=Algebra|location=New York, NY|publisher=Springer|isbn=978-0-387-95385-4|series=Graduate Texts in Mathematics}}
* {{Citation|last=Leon|first=Steven J.|year=2006|title=Linear Algebra With Applications|publisher=Pearson Prentice Hall|edition=7th}}
* {{Citation|last1=Rote|first1=Günter|chapter=Division-free algorithms for the determinant and the Pfaffian: algebraic and combinatorial approaches|title=Computational discrete mathematics|series=Lecture Notes in Comput. Sci.|volume=2122|pages=119–135|publisher=Springer|year=2001|mr=1911585|doi=10.1007/3-540-45506-X_9|isbn=978-3-540-42775-9|doi-access=free|chapter-url=https://page.inf.fu-berlin.de/~rote/Papers/pdf/Division-free+algorithms.pdf|access-date=2020-06-04|archive-date=2007-02-01|archive-url=https://web.archive.org/web/20070201145100/http://page.inf.fu-berlin.de/~rote/Papers/pdf/Division-free+algorithms.pdf|url-status=dead}}
* {{Citation|last1=Trefethen|first1=Lloyd|last2=Bau III|first2=David|location=Philadelphia|isbn=978-0-89871-361-9|year=1997|title=Numerical Linear Algebra|publisher=SIAM|edition=1st}}
 
=== Historical references ===
 
* {{Citation|last=Bourbaki|first=Nicolas|title=Elements of the history of mathematics|translator-first=John|translator=Meldrum|publisher=Springer|year=1994|isbn=3-540-19376-6|doi=10.1007/978-3-642-61693-8}}
* {{Citation|last=Cajori|first=Florian|title=A history of mathematical notations: Including Vol. I. Notations in elementary mathematics; Vol. II. Notations mainly in higher mathematics, Reprint of the 1928 and 1929 originals|publisher=Dover|year=1993|isbn=0-486-67766-4|mr=3363427}}
* {{Citation|last=Bezout|first=Étienne|year=1779|location=Paris|title=Théorie générale des equations algébriques|url=https://gallica.bnf.fr/ark:/12148/bpt6k106053p.image}}
* {{Citation|last=Cayley|first=Arthur|title=On a theorem in the geometry of position|journal=Cambridge Mathematical Journal|volume=2|pages=267–271|year=1841}}
* {{Citation|last=Cramer|first=Gabriel|title=Introduction à l'analyse des lignes courbes algébriques|year=1750|doi=10.3931/e-rara-4048|location=Genève|publisher=Frères Cramer & Cl. Philibert}}
* {{Citation|last=Eves|first=Howard|title=An introduction to the history of mathematics|edition=6|publisher=Saunders College Publishing|year=1990|isbn=0-03-029558-0|mr=1104435}}
* {{Citation|editor1-last=Grattan-Guinness|editor1-first=I.|title=Companion Encyclopedia of the History and Philosophy of the Mathematical Sciences|volume=1|year=2003|isbn=9780801873966|publisher=[[Johns Hopkins University Press]]}}
* {{Citation|last1=Jacobi|first1=Carl Gustav Jakob|author1-link=Carl Gustav Jakob Jacobi|title=De Determinantibus functionalibus|url=https://www.digizeitschriften.de/dms/img/?PID=GDZPPN002142724&physid=phys325#navi|journal=Journal für die reine und angewandte Mathematik|year=1841|volume=1841|issue=22|pages=320–359|doi=10.1515/crll.1841.22.319|s2cid=123637858}}
* {{Citation|last=Laplace|first=Pierre-Simon, de|author-link=Pierre-Simon Laplace|title=Recherches sur le calcul intégral et sur le systéme du monde|journal=Histoire de l'Académie Royale des Sciences|location=Paris|year=1772|issue=seconde partie|pages=267–376|url=https://gallica.bnf.fr/ark:/12148/bpt6k77596b/f374}}
 
== External links ==
{{Wikibooks|Linear Algebra|Determinants}} {{EB1911 poster|Determinant}}
 
* {{SpringerEOM|title=Determinant|id=Determinant&oldid=12692|last=Suprunenko|first=D.A.}}
* {{MathWorld|title=Determinant|urlname=Determinant}}
* {{MacTutor||class=HistTopics|id=Matrices_and_determinants|title=Matrices and determinants}}
* [http://people.revoledu.com/kardi/tutorial/LinearAlgebra/MatrixDeterminant.html Determinant Interactive Program and Tutorial]
* [http://www.umat.feec.vutbr.cz/~novakm/determinanty/en/ Linear algebra: determinants.] {{Webarchive|url=https://web.archive.org/web/20081204081902/http://www.umat.feec.vutbr.cz/~novakm/determinanty/en/|date=2008-12-04}} Compute determinants of matrices up to order 6 using a Laplace expansion along a row or column of your choice.
* [https://physandmathsolutions.com/Menus/matrix_determinant_calculator.php Determinant Calculator] Calculator for matrix determinants, up to the 8th order.
* [http://www.economics.soton.ac.uk/staff/aldrich/matrices.htm Matrices and Linear Algebra on the Earliest Uses Pages]
* [http://algebra.math.ust.hk/course/content.shtml Determinants explained in an easy fashion in the 4th chapter as a part of a Linear Algebra course.]
 
{{Linear algebra}}{{authority control}}
----
[[Berkas:Area_parallellogram_as_determinant.svg|jmpl|The area of the [[:ca:Paral·lelogram|parallelogram]] is the absolute value of the determinant of the matrix formed by the vectors representing the sides of the parallelogram.]] In [[:ca:Matemàtiques|mathematics]], the '''determinant''' is a very powerful tool in numerous domains (the study of [[:ca:Endomorfisme|endomorphisms]], the search for [[:ca:Valor_propi|eigenvalues]], [[:ca:Càlcul_infinitesimal|differential calculus]]). Thus one defines the determinant of a system of equations, the determinant of an endomorphism, or the determinant of a system of vectors. It was initially introduced in [[:ca:Àlgebra|algebra]] to solve the problem of determining the number of solutions of a [[:ca:Sistema_d'equacions_lineals|system of linear equations]].
 
Like many other operations, the determinant can be defined by a collection of axiomatic properties, summarized by the expression "alternating ''n''-linear form". This definition makes a complete theoretical study possible and widens its fields of application even further. But the determinant can also be conceived as a generalization to ''n''-dimensional space of the notion of oriented [[:ca:Superfície|area]] or [[:ca:Volum|volume]]. This aspect, often neglected, is a practical and illuminating approach to the properties of the determinant.
 
== History of determinants ==
Determinants were introduced in the West from the 16th century onwards, long before [[:ca:Matriu_(matemàtiques)|matrices]], which did not appear until the 19th century. It is worth recalling that the [[:ca:Xina|Chinese]] were the first to use tables of numbers and to apply an [[:ca:Algorisme|algorithm]] now known under the name of [[:ca:Eliminació_de_Gauss-Jordan|Gauss-Jordan elimination]].
 
=== First calculations of determinants ===
Although a remote antecedent can be found in the work of [[:ca:Yang_Hui|Yang Hui]] (China, 13th century), in its original sense the determinant "determines" the uniqueness of the solution of a [[:ca:Sistema_d'equacions_lineals|system of linear equations]]. It was introduced in the two-dimensional case by [[:ca:Girolamo_Cardano|Cardano]] in [[:ca:1545|1545]] in his work ''Ars magna'', in the form of a "rule" for solving systems of two equations in two unknowns.<ref>E. Knobloch. "Determinants in I Grattan-Guinness" (ed.), ''Companion Encyclopedia of the History and Philosophy of the Mathematical Sciences'', London, 1994, pp. 766-774 {{ISBN|0415037859}}</ref> This first formula bears the name ''regula de modo''. [[Berkas:Seki.jpeg|jmpl|The Japanese mathematician [[:ca:Takakazu_Seki|Kowa Seki]] introduced the first determinants of dimension 3 and 4, at the same time as the German [[:ca:Gottfried_Leibniz|Leibniz]].]] The appearance of determinants of higher dimension would take more than a hundred years longer. Curiously, the Japanese [[:ca:Takakazu_Seki|Kowa Seki]] and the German [[:ca:Gottfried_Leibniz|Leibniz]] gave the first examples almost simultaneously.
 
Leibniz studied numerous systems of linear equations. In the absence of matrix notation, he represented the unknown coefficients by a pair of indices: he thus wrote ''ij'' for ''a<sub>i, j</sub>''. In [[:ca:1678|1678]], he took an interest in a system of three equations and three unknowns and gave, on this example, the formula for expansion along a column. The same year, he wrote down a determinant of dimension 4.<ref>E. Knobloch. ''Der Beginn der Determinantentheorie, Leibnizens nachgelassene Studien zum Determinantenkalkül'', Hildesheim, 1980.</ref> Leibniz did not publish this work, which seems to have been forgotten before the results were rediscovered independently some fifty years later.
 
In the same period, [[:ca:Takakazu_Seki|Kowa Seki]] published a manuscript on determinants, in which he arrived at a general formulation that is difficult to interpret. He appears to give correct formulas for determinants of dimension 3 and 4, but the wrong signs for determinants of higher dimension.<ref>Y. Mikami. ''The development of Mathematics in China and Japan'', 1913, 2nd ed., Chelsea Pub. Company, 1974.</ref> The discovery would remain without follow-up, owing to Japan's isolation from the outside world.
 
=== Determinants of arbitrary dimension ===
In [[:ca:1748|1748]], a posthumous algebra treatise by [[:ca:Colin_Maclaurin|MacLaurin]] revived the theory of determinants, containing the correct solution of a system of four equations in four unknowns.<ref>[[:ca:Carl_Boyer|C. B. Boyer]]. ''A History of Mathematics'', John Wiley, 1968.</ref>
 
In [[:ca:1750|1750]], [[:ca:Gabriel_Cramer|Cramer]] stated the rules for solving a system of ''n'' equations in ''n'' unknowns, though without giving a proof.<ref>[[:ca:Gabriel_Cramer|Gabriel Cramer]]. ''Introduction to the analysis of algebraic curves'', 1750</ref> The methods for computing determinants were then delicate, since they rested on the notion of [[:ca:Permutacions_parells_i_senars|even and odd permutations]].<ref>M. Cantor. ''Geschichte der Mathematik'', Teubner, 1913.</ref>
 
Mathematicians seized upon this new object, with articles by [[:ca:Étienne_Bézout|Bézout]] in 1764<ref>[[:ca:Étienne_Bézout|Étienne Bézout]]. "Recherches sur le degré des équations résultantes de l'évanouissement des inconnues, et sur le moyens qu'il convenient d'employer pour trouver ces équations", ''Mém. Acad. Roy. Sci'', Paris, 1764, pp. 288–338.</ref> and [[:ca:Alexandre-Théophile_Vandermonde|Vandermonde]] in [[:ca:1771|1771]],<ref>[[:ca:Alexandre-Théophile_Vandermonde|Alexandre-Théophile Vandermonde]]. "Mémoire sur l'élimination", ''Hist. de l'Acad. Roy. des Sciences'', Paris, 1772, part 2, pp. 516-532.</ref> who, surprisingly, did not compute the determinant of today's [[:ca:Matriu_de_Vandermonde|Vandermonde matrix]].{{cita|Great renown in mathematics is assured only to names associated with a method, a theorem, a notation. It matters little whether the attribution is well founded, and the name of Vandermonde would be ignored by the vast majority of mathematicians had he not been credited with this determinant that you know well, and which is not his!|V.A. Lebesgue. ''Conférence d'Utrecht'', 1837}}<ref>V.A. Lebesgue. ''Conférence d'Utrecht'', 1837</ref>
 
In 1772, [[:ca:Pierre-Simon_Laplace|Laplace]] established the recurrence formulas that bear his name. The following year, [[:ca:Joseph-Louis_Lagrange|Lagrange]] discovered the relation between the calculation of determinants and that of volumes.<ref>[[:ca:Joseph-Louis_Lagrange|Joseph-Louis Lagrange]]. "Nouvelle solution du problème du mouvement de rotation d'un corps de figure quelconque qui n'est animé par aucune force accélératrice". ''Nouveaux mémoires de l'Académie royale des sciences et des belles-lettres de Berlin'', 1773</ref>
 
[[:ca:Carl_Friedrich_Gauss|Gauss]] used the word "determinant" for the first time, in the ''[[:ca:Disquisitiones_arithmeticae|Disquisitiones arithmeticae]]'' of [[:ca:1801|1801]]. He used it for what we today call the [[:ca:Discriminant|discriminant]] of a [[:ca:Quàdrica|quadric]], which is a particular case of the modern determinant. He also came close to obtaining the theorem on the determinant of a product.<ref name="St Andrew">Most of the information in this section is based on this reference: [http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/Matrices_and_determinants.html Matrices and determinants] {{Webarchive|url=https://web.archive.org/web/20150308120526/http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/Matrices_and_determinants.html|date=2015-03-08}} {{en}}</ref>
 
=== Emergence of the modern notion of determinant ===
[[:ca:Augustin_Louis_Cauchy|Cauchy]] was the first to use the word determinant in its modern sense. It can be read in his survey article of more than eighty pages on the question:{{cita|''Mr. Gauss made use of them to advantage in his Analytical Investigations to discover the general properties of forms of the second degree, that is, of [[polinomi|polynomials]] of the second degree in two or more variables, and he designated these same functions under the name of determinants. I shall keep this denomination, which provides an easy means of stating the results; I shall only observe that these functions are also sometimes given the name of resultants in two or several variables. Thus the two following expressions, determinant and [[resultant]], should be regarded as synonymous.''<ref>[[Augustin Louis Cauchy]]. ''Mémoire sur les fonctions qui ne peuvent obtenir que deux valeurs égales et des signes contraires par suite des transpositions opérées entre les variables qu'elles renferment'', 1812, published in ''Journal de l'Ecole Poytechnique'', notebook XVII, volume X, Paris, 1815. See information at [http://gallica.bnf.fr/ark:/12148/bpt6k90193x Gallica]</ref>}} [[Fitxer:Carl Jacobi.jpg|pra=https://ca.wiki-indonesia.club/wiki/Fitxer:Carl_Jacobi.jpg|jmpl|Carl Gustav Jacob Jacobi]] It represents a synthesis of earlier knowledge, as well as new propositions, such as the fact that transposition does not change the determinant, and the formula for the determinant of a product. [[:ca:Jacques_Philippe_Marie_Binet|Binet]] also proposed a proof that same year. Later, Cauchy laid the foundations of the study of the [[:ca:Reducció_d'endomorfismes|reduction of endomorphisms]].<ref>[./Augustin_Louis_Cauchy Cauchy] ''Application du calcul des résidus à l'intégration des équations différentielles linéaires à coefficients constants'', 1826. 
See information at [http://visualiseur.bnf.fr/StatutConsulter?N=VERESS6-1196284532151&B=1&E=PDF&O=NUMM-90198 Gallica], ''visualiseur.bnf.fr''</ref>
 
By publishing his three treatises on determinants in 1841 in [[:ca:Journal_de_Crelle|Crelle's journal]], [[:ca:Charles_Gustave_Jacob_Jacobi|Jacobi]] gave the notion real notoriety.<ref name="St Andrew" /> For the first time, he presented systematic methods of computation, in algorithmic form. It also became possible to evaluate determinants of functions, with the birth of the [[:ca:Jacobià|Jacobian]].
 
The matrix framework was introduced by the works of [[:ca:Arthur_Cayley|Cayley]] and [[:ca:James_Joseph_Sylvester|Sylvester]]. Cayley was also the inventor of the vertical-bar notation for determinants; he established the formula for computing the inverse.
 
The theory was completed by the study of determinants with particular symmetry properties, and by the introduction of the determinant into new fields of mathematics, such as the [[:ca:Wronskià|Wronskian]] for linear differential equations.
 
== First examples: areas and volumes ==
Computations of [[:ca:Àrea|areas]] and [[:ca:Volum|volumes]] in the form of determinants in Euclidean spaces appear as particular cases of a more general notion of determinant. The capital letter D (Det) is sometimes reserved to denote them.
 
=== Determinant of two vectors in the Euclidean plane ===
[[Fitxer:Determinant de vecteur dim 2.jpg|pra=https://ca.wiki-indonesia.club/wiki/Fitxer:Determinant_de_vecteur_dim_2.jpg|jmpl|Figure 1. The determinant is the oriented blue area.]] Let ''P'' be the Euclidean plane with the usual orientation. The determinant of the vectors ''X'' and ''X′'' is given by the analytic expression
 
: <math>\det(X,X')=\begin{vmatrix} x & x' \\ y & y'\end{vmatrix}=xy'-yx' </math>
 
Or, equivalently, by the geometric expression
 
: <math>\det(X,X')=\|X\|\cdot\|X'\|\cdot\sin \theta</math>
 
where <math>\theta</math> is the oriented [[:ca:Angle|angle]] formed by the vectors ''X'' and ''X′''.
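The agreement between the analytic and geometric expressions above can be checked numerically; the following sketch (helper names are ours, for illustration) computes xy′ − yx′ and compares it with ‖X‖·‖X′‖·sin θ:

```python
import math

def det2(X, Xp):
    """Determinant of two plane vectors: x*y' - y*x'."""
    return X[0] * Xp[1] - X[1] * Xp[0]

X, Xp = (3.0, 1.0), (1.0, 2.0)

# Geometric expression: ||X|| * ||X'|| * sin(theta), with theta the
# oriented angle from X to X'.
theta = math.atan2(Xp[1], Xp[0]) - math.atan2(X[1], X[0])
geometric = math.hypot(*X) * math.hypot(*Xp) * math.sin(theta)

assert math.isclose(det2(X, Xp), geometric)  # both equal 5.0
```

Swapping the two vectors flips the sign of both expressions, reflecting the orientation of the angle.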
 
==== Properties ====
 
* the absolute value of the determinant equals the [[:ca:Àrea|area]] of the [[:ca:Paral·lelogram|parallelogram]] defined by ''X'' and ''X′'' (indeed, <math>\|X'\|\sin \theta</math> is the height of the parallelogram, whence area = base × height).
* the determinant is zero if and only if the two vectors are collinear (the parallelogram degenerates into a line).

: Indeed, this vanishing appears as a simple proportionality test on the components of the vectors, via the '''cross product'''.

* Its sign is strictly positive if and only if the measure of the oriented angle ''(X, X′)'' lies in (0, <math>\pi</math>).
* the determinant map is [[:ca:Bilineal|bilinear]]: linearity with respect to the first vector reads
 
: <math>\det(aX+bY,X')=a\det(X,X')+b\det(Y,X')\;</math>
 
and with respect to the second vector reads
 
: <math>\det(X,aX'+bY')=a\det(X,X')+b\det(X,Y')\;</math>
 
[[Fitxer:Deux parallelogrammes-det.png|pra=https://ca.wiki-indonesia.club/wiki/Fitxer:Deux_parallelogrammes-det.png|jmpl|Figure 2. Sum of the areas of two adjacent parallelograms. Note that all the vectors lie in the same plane; this is not a three-dimensional figure, otherwise the statement would not hold, nor would the areas be those given by the determinant.]] The figure, in the plane, illustrates a particular case of this formula. It shows two adjacent parallelograms, one defined by the vectors u and v (in green), the other by the vectors u′ and v (in blue). It is easy to see on this example that the area of the parallelogram defined by the vectors u+u′ and v (in grey) equals the sum of the areas of the two preceding parallelograms, minus the area of one triangle, plus the area of another, equal triangle. The two triangles correspond by translation, and the following formula holds: Det(u+u′, v)=Det(u, v)+Det(u′, v).
 
This drawing corresponds to a particular case of the bilinearity formula, since the orientations have been chosen so that the areas have the same sign, but it helps to grasp the geometric meaning.
 
==== Generalization ====
It is possible to define the notion of determinant in an oriented Euclidean plane equipped with a direct [[:ca:Base_ortonormal|orthonormal basis]] ''B'', using the coordinates of the vectors in this basis. The determinant computation gives the same result whatever direct orthonormal basis is chosen for the computation.
 
=== Determinant of three vectors in Euclidean space ===
Let ''E'' be the Euclidean space of dimension 3 with the usual orientation. The determinant of three vectors of ''E'' is given by
 
: <math>\det(X,X ',X '')=\begin{vmatrix} x & x' &x''\\ y & y'&y''\\ z&z'&z''
\end{vmatrix}=x \begin{vmatrix} y' & y'' \\ z' & z''\end{vmatrix} - x' \begin{vmatrix} y & y'' \\ z & z''\end{vmatrix} + x'' \begin{vmatrix} y & y' \\ z & z'\end{vmatrix} = xy'z''+x'y''z+x''yz'-xy''z'-x'yz''-x''y'z. </math>
 
[[Fitxer:Déterminant-3D.jpg|pra=https://ca.wiki-indonesia.club/wiki/Fitxer:D%C3%A9terminant-3D.jpg|jmpl|Figure 3. Graphical illustration of trilinearity]] This determinant also bears the name of [[:ca:Producte_mixt|scalar triple product]].
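The equality between the cofactor expansion along the first row and the fully expanded six-term formula above can be verified directly; a short sketch (our own function names):

```python
def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    (x, xp, xpp), (y, yp, ypp), (z, zp, zpp) = m
    return (x   * (yp * zpp - zp * ypp)
          - xp  * (y * zpp - z * ypp)
          + xpp * (y * zp - z * yp))

def det3_expanded(m):
    """The same determinant, fully expanded (six signed products)."""
    (x, xp, xpp), (y, yp, ypp), (z, zp, zpp) = m
    return (x*yp*zpp + xp*ypp*z + xpp*y*zp
          - x*ypp*zp - xp*y*zpp - xpp*yp*z)

M = [[2, 0, 1],
     [1, 3, 0],
     [0, 1, 4]]
assert det3(M) == det3_expanded(M)  # both give 25
```

The rows of M play the roles of (x, x′, x″), (y, y′, y″) and (z, z′, z″) in the formula above.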
 
==== Properties ====

* the absolute value of the determinant equals the [[:ca:Volum|volume]] of the [[:ca:Paral·lelepípede|parallelepiped]] defined by the three vectors.
* the determinant is zero if and only if the three vectors lie in a common plane (a «flat» parallelepiped).
* The determinant map is [[:ca:Aplicació_multilineal|trilinear]]:
 
: <math>
\det(aX+bY,X ',X '')=a\det(X,X ',X '')+b\det(Y,X ',X '')\,</math>
 
A geometric illustration of this property is given in figure 3, for two adjacent parallelepipeds, that is, ones sharing a common face. The following equality then becomes intuitive:
 
: <math>\det(u+u', v,w)=\det(u, v,w)+\det(u', v,w)\,</math>
 
=== Interpretation of the sign of the determinant: orientation ===
In the plane, the sign of the determinant is interpreted as the sign of the oriented angle.
 
In three-dimensional space, the unit cube serves as reference. Its determinant equals one. A non-flat parallelepiped has a positive determinant if it can be obtained by continuously deforming the unit cube (without ever flattening it).
 
The determinant is negative, on the other hand, if a symmetry must also be applied, that is, if the unit cube can only be obtained by deforming the parallelepiped and then viewing the result of this deformation in a mirror. [[Fitxer:Orientation-déterminant.jpg|pra=https://ca.wiki-indonesia.club/wiki/Fitxer:Orientation-d%C3%A9terminant.jpg|pus|jmpl|450x450px|Figure 4. It is possible to pass from the yellow cube to the green parallelepiped by continuous deformation. This is not possible for the red parallelepiped, which is the mirror image of the green one.]]
 
=== Intuitive approach to the determinant of a linear map ===
A [[:ca:Aplicació_lineal|linear map]] is a map that transforms the coordinates of a vector linearly. For example, in three-dimensional space, a map is linear if the coordinates x, y and z of a vector have as image x′, y′ and z′ with:
 
: <math>\begin{matrix} x'= ax + by +cz\\ y'= dx + ey+fz \\z'=gx+hy+iz \end{matrix}</math>
 
where a, b, c, …, i are numbers. The following figure illustrates two cases of linear maps.
 
In the first case, the yellow cube is transformed into a parallelepiped shown in green. In the second case, the yellow cube is transformed into a flattened volume, a red square (that is, some of the vertices of the initial cube have the same image under the linear map). These two cases correspond to different situations in mathematics. The first function of the determinant is to provide a means of separating them. [[Fitxer:Determinant (1).jpg|pra=https://ca.wiki-indonesia.club/wiki/Fitxer:Determinant_(1).jpg|jmpl|Figure 5. Examples of linear maps: the first transforms the yellow cube into a green volume, the second into a flattened red volume.]] To be more precise, the determinant of a linear map is a number that represents a multiplicative factor for volumes. If the yellow cube has volume 1, then the volume of its green image is the absolute value of the determinant of the first map. The second map has zero determinant, which corresponds to a flattening of volumes.
 
The sign of the determinant is positive if it is possible to deform the yellow cube continuously to obtain the green one. It is negative if a symmetry must be applied in addition.
 
In fact, this property does not hold only for the yellow unit cube: every volume transformed by a linear map is multiplied by the absolute value of the determinant.
 
The determinant exists for linear maps of a space into itself even in more than three dimensions, as long as the number of dimensions is finite. Indeed, the notion of volume can be generalized: thus a «hypercube» with edges of length 2 in a Euclidean space of dimension ''n'' would have a determinant (a kind of «hypervolume») of ''2<sup>n</sup>''. On the other hand, if the space has infinitely many dimensions, the determinant no longer makes sense.
 
== Scope of use ==
 
=== Determinant and linear equations ===
There is a case of numerical computation very frequent for engineers, physicists and economists: solving a [[:ca:Sistema_d'equacions_lineals|system of linear equations]]. If the system has as many equations as variables, one may hope for the existence and uniqueness of a solution. But this is not always the case; for example, if the same equation is repeated, there will be a multiplicity of solutions.
 
More precisely, a determinant can be associated with a system of ''n'' equations in ''n'' unknowns. Existence and uniqueness of the solution hold if and only if the determinant is nonzero. Not only can existence and uniqueness be guaranteed, but [[:ca:Regla_de_Cramer|Cramer's rule]] allows an exact computation of the solution using determinants. This method is neither the fastest nor the simplest, and it is rarely used for explicit computations; it is nevertheless useful for establishing certain theoretical results, such as the dependence on parameters.
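Cramer's rule can be sketched in a few lines for the 2×2 case; each unknown is a ratio of two determinants, the numerator being obtained by substituting the right-hand side into the corresponding column. The helper names below are ours, for illustration only:

```python
def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]."""
    return a * d - b * c

def cramer_2x2(a, b, c, d, e, f):
    """Solve a*x + b*y = e, c*x + d*y = f by Cramer's rule.
    Returns None when the determinant vanishes (no unique solution)."""
    D = det2(a, b, c, d)
    if D == 0:
        return None
    x = det2(e, b, f, d) / D   # replace the first column by (e, f)
    y = det2(a, e, c, f) / D   # replace the second column by (e, f)
    return x, y

# 2x + y = 5, x + 3y = 10  ->  x = 1, y = 3
assert cramer_2x2(2, 1, 1, 3, 5, 10) == (1.0, 3.0)
```

A zero determinant (two proportional equations) makes the sketch return None, matching the text's remark that uniqueness fails exactly then.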
 
==== Relation to the flattening of volumes ====
A [[:ca:Sistema_d'equacions_lineals|system of 3 linear equations]] in 3 unknowns can be put in the form of a [[:ca:Equació_lineal|linear equation]] ''u(X)=B'', where ''X=(x, y, z)'' is a vector whose components are the unknowns of the system, ''u'' a linear map of the space, and ''B'' a vector. The solution of the system can be formulated geometrically: is the vector ''B'' the image of some vector ''X'' under ''u''? Is that vector unique? The determinant of ''u'' gives the answer: existence and uniqueness hold if and only if it is nonzero.
 
Figure 5 allows an intuitive approach to this result. It suffices to consider a [[:ca:Pavimentació|tiling]] of space by the yellow cube and its images under translations in the three directions. A family of adjacent yellow cubes then fills the whole space.
 
* If the determinant is nonzero, then the image of this tiling is a tiling of green parallelepipeds, likewise filling the whole space. This means that every vector of the space is an image vector. In particular, the vector ''B'' is covered by one of the green volumes: it is the image of some vector.
* If the determinant is zero, on the other hand, the image of the tiling does not fill the whole space. In the example of the flattened red cube, it fills only a plane. Some vectors are never the image of any vector, while others are the image of several vectors at once.
 
More generally, for a system of n equations in n unknowns, the determinant indicates whether the images under ''u'' fill the whole space or only a subspace.
 
=== Determinant and reduction ===
Linear maps appear not only in elementary geometry but also in many advanced fields, such as certain methods for solving [[:ca:Equació_diferencial|differential equations]], the definition of fast algorithms, or the solution of theoretical problems. It is important to understand their behaviour.
 
A fruitful analysis tool consists in cataloguing the privileged axes along which the map behaves like a dilation, multiplying the lengths of vectors by a constant. This dilation ratio is called an [[:ca:Valor_propi|eigenvalue]], and the vectors to which it applies [[:ca:Vectors_propis|eigenvectors]].
 
The phenomenon of flattening of volumes can be measured by a determinant. It corresponds to the case where, along some direction, vectors are multiplied by a dilation ratio equal to 0 (a zero eigenvalue). More generally, all the eigenvalues can be obtained by computing a determinant depending on a parameter, called the [[:ca:Polinomi_característic|characteristic polynomial]].
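For a 2×2 matrix the characteristic polynomial is det(A − λI) = λ² − (a+d)λ + det A, and its roots are the eigenvalues. A minimal sketch (function name and example matrix are ours, for illustration):

```python
import math

def eigenvalues_2x2(a, b, c, d):
    """Real eigenvalues of [[a, b], [c, d]], as roots of
    det(A - lambda*I) = lambda^2 - trace*lambda + det."""
    trace, det = a + d, a * d - b * c
    disc = trace * trace - 4 * det
    if disc < 0:
        return ()  # complex eigenvalues, not handled in this sketch
    r = math.sqrt(disc)
    return ((trace - r) / 2, (trace + r) / 2)

# Symmetric example: eigenvalues 1 and 3.
assert eigenvalues_2x2(2, 1, 1, 2) == (1.0, 3.0)
```

A zero root of this polynomial is exactly the "flattening" case discussed above: the determinant of the map itself vanishes.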
 
=== Determinant and multiple integrals ===
[[Fitxer:Jacobien.jpg|pra=https://ca.wiki-indonesia.club/wiki/Fitxer:Jacobien.jpg|jmpl|Figure 6. Jacobian.]] As the intuitive approach shows, the determinant characterizes the change of volume of a parallelepiped under an endomorphism. The [[:ca:Integral_múltiple|multiple integral]] is a tool for determining volumes in the general case. It uses the notion of determinant in the context of a [[:ca:Canvi_de_variable|change of variables]]; the determinant then takes the name of [[:ca:Jacobià|Jacobian]]. It can be imagined as the ratio of elementary volumes before and after the change of variables, in the terminology of differential elements.
 
More precisely, the behaviour of a differentiable map in the neighbourhood of a point is, to first order, equivalent, as far as the change of volume is concerned, to a linear map whose determinant is the Jacobian.
 
=== Determinant and damping in differential equations ===
[[Fitxer:Wronskien.png|pra=https://ca.wiki-indonesia.club/wiki/Fitxer:Wronskien.png|kiri|bingkai|Figure 7. Example of a pendulum of variable length, without damping. In blue and red, two particular solutions are shown in phase space. The area formed by the two solutions remains constant over time.]] In physics, especially in point mechanics, the [[:ca:Equació_diferencial_lineal_d'ordre_dos|linear differential equation of order two]] occurs frequently. It takes the form <math>y''+ay'+by=c</math>, where a, b, c may be constant coefficients or, more generally, functions (for example of time). The term <math>a</math> is called the damping factor.
 
This differential equation is associated with a determinant, called the Wronskian. It is interpreted as an area in the plane (y, y′), called [[:ca:Espai_de_les_fases|phase space]] by physicists. This area remains constant over time if the damping term is zero, and decreases exponentially if it is strictly positive. Although it is not always possible to exhibit an explicit solution, the Wronskian can always be computed.
 
The Wronskian can be generalized to all [[:ca:Equacions_diferencials_lineals|linear differential equations]].
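The undamped case can be checked concretely: for y″ + y = 0, with the independent solutions cos t and sin t, the Wronskian W = y₁y₂′ − y₂y₁′ is constant in time, as stated above. A numerical sketch (the setup below is our own illustration, not from the article):

```python
import math

# Two independent solutions of y'' + y = 0 (no damping: a = 0)
y1, dy1 = math.cos, lambda t: -math.sin(t)
y2, dy2 = math.sin, lambda t: math.cos(t)

def wronskian(t):
    """W(t) = y1*y2' - y2*y1': the oriented area in phase space."""
    return y1(t) * dy2(t) - y2(t) * dy1(t)

# Constant over time: cos^2 t + sin^2 t = 1 at every t.
for t in (0.0, 0.7, 1.5, 3.1):
    assert math.isclose(wronskian(t), 1.0)
```

With a strictly positive damping term the same computation would show the area shrinking as an exponential in t instead of staying constant.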
 
== Definition of the determinant ==

=== Origin of the construction of the determinant ===
The notions of parallelogram and parallelepiped generalize to a [[:ca:Espai_vectorial|vector space]] ''E'' of finite dimension ''n'' over <math>\mathbb{R}</math>. To ''n'' vectors ''x<sub>1</sub>, ..., x<sub>n</sub>'' of ''E'' one associates a [[:ca:Paral·lelòtop|parallelotope]], defined as the part of ''E'' formed by the set of combinations of the ''x<sub>i</sub>'' with coefficients between 0 and 1:
 
: <math>P=\left\{x=\sum_{i=1}^n t_i x_i \Bigg|\, \forall i, 0\leq t_i \leq 1\right\}</math>
 
This parallelotope should be pictured as a kind of skewed block.
 
When the space is equipped with a [[:ca:Producte_escalar|scalar product]], it is possible to define the volume of this parallelotope, sometimes called its hypervolume to emphasize that the dimension of the space involved is not necessarily 3. It satisfies the following properties:
 
* the volumes of two blocks adjacent along a face add up;
* multiplying one of the vectors defining the block by a constant multiplies the volume by that constant;
* the volume of a block formed by repeating the same vector (a particular case of a flat block) is zero.
 
A change of scalar product on the space ''E'' modifies the measures of lengths, angles, and therefore of volumes. Nevertheless, the theory of determinants shows that, up to a multiplicative constant, there exists only one method for computing volumes in a vector space of dimension ''n''.
 
Returning to a vector space ''without any particular structure'', the notion of determinant aims to give an intrinsic meaning to the «volume» of the parallelotope, without reference to a scalar product for example, that is, to construct a function ''f'' which associates a real number to ''x<sub>1</sub>, ..., x<sub>n</sub>'' and satisfies the preceding properties. Such a map is called an alternating ''n''-linear form.
 
=== Alternating ''n''-linear forms ===
The notion of '''alternating n-linear form''' generalizes the preceding properties. It is defined as a map from ''E<sup>n</sup>'' to <math>\mathbb{R}</math> which is:
 
* [[:ca:Aplicació_lineal|linear]] in each variable. Thus, for vectors ''x<sub>1</sub>, ..., x<sub>n</sub>, x′<sub>i</sub>'' and two scalars ''a'' and ''b'':
 
: <math>f(x_1,\dots,x_{i-1}, ax_i+bx'_i,x_{i+1}, \dots, x_n)=a f(x_1, \dots, x_n) + bf(x_1, \dots,x'_i,\dots x_n) \;</math>
 
* alternating, meaning that it vanishes whenever it is evaluated on a tuple containing two identical vectors:
 
: <math>[\exists i\neq j, x_i=x_j] \Rightarrow f(x_1,\dots, x_n)=0</math>
 
The article [[:ca:Aplicació_multilineal|multilinear map]] undertakes the systematic study of alternating n-linear forms on a vector space of dimension ''n''.
 
The main result is the possibility of reducing the computation of the image of <math>(x_1,..., x_n)</math> to that of the images of the basis vectors, by n-linearity. Moreover, the alternating character allows the order of the vectors to be changed, so that it suffices to know the image <math>f(e_1, ..., e_n)</math> of the basis vectors, taken in order, to know ''f''. Putting the vectors back in order brings in the notion of [[:ca:Permutació|permutation]].
 
'''Theorem'''

The set ''A<sub>n</sub>(E)'' of alternating n-linear forms on a vector space of dimension ''n'' constitutes a vector space of dimension 1. Moreover, if <math>(e_{1},\dots,e_{n})</math> is a basis of ''E'', the image of a tuple of vectors can be expressed as
 
: <math>f(x_1,\dots,x_n)= \left(\sum_{\sigma\in \mathfrak{S}_n} \varepsilon(\sigma) \prod_{j=1}^n X_{\sigma(j),j} \right) f(e_{1},\dots,e_{n})</math>
 
where ''X<sub>ij</sub>'' is the ''i''-th component of ''x<sub>j</sub>'' and <math>\varepsilon(\sigma)</math> denotes the sign of the permutation (+1 for an even permutation, −1 for an odd one).
 
=== Determinant of a family of ''n'' vectors in a basis ===
'''Definition'''

Suppose ''E'' is equipped with a basis <math>B=(e_{1},\dots,e_{n})</math>. The '''determinant map in the basis ''B''''' is the unique alternating n-linear form on ''E'' satisfying <math>\det{}_B(e_1,..., e_n)=1</math>, abbreviated as <math> \det{}_B(B)=1</math>. This quantity should be pictured as a kind of block volume, relative to the basis ''B''.
 
'''Leibniz formula''' [[Fitxer:Gottfried_Wilhelm_von_Leibniz.jpg|pra=https://ca.wiki-indonesia.club/wiki/Fitxer:Gottfried_Wilhelm_von_Leibniz.jpg%7Ckiri%7Cjmpl%7C[[:ca:Gottfried_Leibniz|Gottfried Leibniz]] introduced the first determinants of dimension 3 and higher]] Let ''x<sub>1</sub>, ..., x<sub>n</sub>'' be vectors of ''E''. These ''n'' vectors can be represented by ''n'' column matrices, forming by juxtaposition a square matrix ''X''. The determinant of ''x<sub>1</sub>, ..., x<sub>n</sub>'' relative to the basis ''B'' is then
 
: <math>\det{}_B(x_1,\dots, x_n)=\sum_{\sigma\in \mathfrak{S}_n} \varepsilon(\sigma) \prod_{j=1}^n X_{\sigma(j),j} </math>
 
This formula sometimes bears the name of [[:ca:Gottfried_Wilhelm_von_Leibniz|Leibniz]]. It is of little interest for the practical computation of determinants, but it allows several theoretical results to be established.
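The Leibniz formula translates directly into code, summing ε(σ)·∏<sub>j</sub> X<sub>σ(j),j</sub> over all n! permutations; as the text notes, this is impractical beyond small n, but it is a faithful rendering of the definition. Function names below are our own:

```python
from itertools import permutations

def sign(sigma):
    """Sign of a permutation given as a tuple: (-1)^(number of inversions)."""
    inv = sum(1 for i in range(len(sigma))
                for j in range(i + 1, len(sigma)) if sigma[i] > sigma[j])
    return -1 if inv % 2 else 1

def det_leibniz(X):
    """det X = sum over all sigma of sign(sigma) * prod_j X[sigma(j)][j]."""
    n = len(X)
    total = 0
    for sigma in permutations(range(n)):
        prod = 1
        for j in range(n):
            prod *= X[sigma[j]][j]
        total += sign(sigma) * prod
    return total

assert det_leibniz([[1, 2], [3, 4]]) == -2
assert det_leibniz([[2, 0, 1], [1, 3, 0], [0, 1, 4]]) == 25
```

For n = 2 the sum has exactly two terms, xy′ − yx′, recovering the plane formula from the beginning of the article.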
 
In physics, the [[:ca:Gottfried_Wilhelm_von_Leibniz|Leibniz]] formula is often encountered expressed with the help of the [[:ca:Símbol_de_Levi-Civita|Levi-Civita symbol]], using the [[:ca:Conveni_de_sumació_d'Einstein|Einstein convention]] for summing over indices:
 
: <math>\det(A)=\epsilon^{i_1\cdots i_n}{A^{1}}_{i_1}\cdots {A^{n}}_{i_n}</math>
 
'''Change of basis formula'''

If ''B'' and ''B′'' are two bases of ''E'', the corresponding determinant maps are proportional (with a nonzero ratio of proportionality):
 
: <math>\det{}_{B'}(x_1,\dots, x_n)=\det{}_{B'}(B)\times \det{}_{B}(x_1,\dots, x_n)\,</math>
 
This result agrees with the interpretation in terms of relative volume.
 
=== Determinant of a matrix ===
Let ''A''=(a<sub>ij</sub>) be a square [[:ca:Matriu_(matemàtiques)|matrix]] of order ''n'' with real coefficients. The [[:ca:Vector_columna|column vectors]] of the matrix can be identified with elements of the vector space <math>\mathbb{R}^n</math>, which is equipped with its [[:ca:Base_canònica|canonical basis]].
 
It is then possible to define the '''determinant of the matrix ''A''''' as the determinant of the system of its column vectors relative to the canonical basis. It is written det(''A''), since there is no ambiguity about the reference basis.
 
Let <math>K</math> be a [[:ca:Cos_(matemàtiques)|field]]. The determinant can also be defined as a map <math>\det(-): M_{n\times n}(K) \longrightarrow K</math>, that is, a map assigning a number of the field <math>K</math> to each square matrix of order <math>n</math>, satisfying the following properties:
 
# For all <math>i=1,...,n</math>, writing <math>A=(C_i)\in M_n(K)</math> for the square matrix whose columns are the vectors <math>C_i</math>: <math>\det(C_1,...,C_i'+C_i'',...,C_n)=\det(C_1,...,C_i',...,C_n)+\det(C_1,...,C_i'',...,C_n)</math>.
# For all <math>i=1,...,n</math>, all <math>\lambda\in K</math>, and <math>A</math> defined as in point 1: <math>\det(C_1,...,\lambda C_i,...,C_n)=\lambda \det(C_1,...,C_i,...,C_n)</math>.
# If <math>A</math> has two equal columns, then <math>\det(A)=0</math>.
# <math>\det(I_n)=1</math>.
 
Properties (1) and (2) together say that the map <math>\det(-)</math> is linear in each column of the matrix <math>A</math>.
 
From the above properties one can derive the Leibniz formula:
 
: <math>\det(A)=\sum_{\sigma \in \mathfrak{S}_n}
\varepsilon(\sigma) \prod_{i=1}^n a_{ \sigma(i),i}</math>
 
This determinant is frequently written with vertical bars:
 
: <math>\det \begin{bmatrix} m_{1;1} & \cdots & m_{1;n} \\ \vdots & \ddots & \vdots \\ m_{n;1} & \cdots & m_{n;n} \end{bmatrix} = \begin{vmatrix} m_{1;1} & \cdots & m_{1;n} \\ \vdots & \ddots & \vdots \\ m_{n;1} & \cdots & m_{n;n} \end{vmatrix}</math>
 
The matrix presentation brings an essential property: a matrix has the same determinant as its [[:ca:Matriu_transposada|transpose]]:
 
: <math>\det A = \det \left(A^t\right)\,</math>
 
This means that the determinant of ''A'' can also be seen as the determinant of the system of its row vectors, relative to the canonical basis.{{caixa desplegable|align=left|títol=Formula for the determinant of the transpose - proof|contingut=Applying the Leibniz formula to the transpose:
:<math>\det(A^t)=\sum_{\sigma \in \mathfrak{S}_n}
\varepsilon(\sigma) \prod_{i=1}^n a_{i,\sigma(i)}</math>
A change of index is made by setting <math>j=\sigma(i)</math>. The bijectivity of <math>\sigma</math> leads to
:<math>\det(A^t)=\sum_{\sigma \in \mathfrak{S}_n}
\varepsilon(\sigma) \prod_{j=1}^n a_{\sigma^{-1}(j),j}</math>
A second reindexing: take <math>\tau = \sigma^{-1}</math>. The map sending a permutation to its inverse is a bijection of <math>\mathfrak{S}_n</math>, so this change of index can be carried out, and thus
:<math>\det(A^t)=\sum_{\tau \in \mathfrak{S}_n}
\varepsilon(\tau^{-1}) \prod_{j=1}^n a_{\tau(j),j} =\sum_{\tau \in \mathfrak{S}_n}
\varepsilon(\tau) \prod_{j=1}^n a_{\tau(j),j}=\det A </math>.

An equivalent way of proving this using elementary matrices (see the next section) is:

Recall that <math>A\in M_n(K)</math> is invertible if and only if its rank equals <math>n</math>. Therefore <math>A</math> is invertible if and only if <math>A^t</math> is invertible. Consequently, if <math>A</math> is not invertible, then <math>\det(A)=0=\det(A^t)</math>. If, on the other hand, <math>A</math> is invertible, then it is a product of elementary matrices and <math>\det(A)=\det(E_1...E_r)=\det(E_1)...\det(E_r)=\det(E_1^t)...\det(E_r^t)=\det(E_r^t)...\det(E_1^t)=\det(E_r^t...E_1^t)=\det((E_1...E_r)^t)=\det(A^t)</math>}}{{caixa desplegable|align=left|títol=Examples|contingut=1. Let <math>(a)</math> be a <math>1\times 1</math> matrix; by properties (2) and (4),
:<math>\det(a)=a\det(I_1)=a\cdot 1=a</math>

2. Let <math>A=\begin{pmatrix} x & y \\ z & v \end{pmatrix}\in M_2(K)</math>. Then, by properties (1) and (2),
:<math>\det\begin{pmatrix} x & y \\ z & v \end{pmatrix}=
\det\begin{pmatrix} x+0 & y \\ 0+z & v \end{pmatrix}=
\det\begin{pmatrix} x & y \\ 0 & v \end{pmatrix}+\det\begin{pmatrix} 0 & y \\ z & v \end{pmatrix}=
x\det\begin{pmatrix} 1 & y \\ 0 & v \end{pmatrix}+z\det\begin{pmatrix} 0 & y \\ 1 & v \end{pmatrix}</math>
:The same reasoning leads to the conclusion that
:<math>\det\begin{pmatrix} 1 & y \\ 0 & v \end{pmatrix}=y\det\begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}+v\det\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}=v</math>
:<math>\det\begin{pmatrix} 0 & y \\ 1 & v \end{pmatrix}=y\det\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}+v\det\begin{pmatrix} 0 & 0 \\ 1 & 1 \end{pmatrix}=-y\det\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}=-y</math>
:where, at the end, we applied properties (3) and (4), together with the property allowing two columns to be exchanged at the cost of a factor −1 (see Properties, alternating n-linear character). Putting all the results together, we conclude that:
:<math>\det\begin{pmatrix} x & y \\ z & v \end{pmatrix}=xv+z(-y)=xv-yz</math>

3. Let <math>P</math> be a non-invertible matrix; then the columns of <math>P</math> are linearly dependent. Concretely, if we suppose that <math>C_n=\sum_{i=1}^{n-1}\alpha_iC_i</math>, by properties (1) and (2),
:<math>\det(C_1,...,C_i,...,\sum_{i=1}^{n-1}\alpha_iC_i)= \sum_{i=1}^{n-1}\alpha_i\det(C_1,...,C_i,...,C_i)=0</math>
:If the column <math>C_n</math> is not the linearly dependent one, we can find a column <math>C_k</math> that is. Since exchanging two columns only affects the sign of the determinant, after the exchange <math>C_k \leftrightarrow C_n</math> we are in the previous case and the determinant is 0. In conclusion: if <math>P\notin GL_n(K)</math>, its determinant is 0.}}
 
==== Computing the determinant of elementary matrices ====
When computing determinants it is very useful to know the determinants of the [[:ca:Matrius_elementals|elementary matrices]].

Let <math>E_{ij}</math> be an elementary matrix of type 1. Since <math>E_{ij}</math> is obtained by exchanging columns <math>i,j</math> of the identity matrix, we have<blockquote><math>\det(E_{ij})=-1</math></blockquote>Let <math>E_{i}(\lambda)</math> be an elementary matrix of type 2. Since <math>E_i(\lambda)</math> is the result of multiplying column <math>C_i</math> of the identity matrix by <math>\lambda</math>, we have<blockquote><math>\det(E_i(\lambda))=\lambda</math></blockquote>Let <math>E_{ij}(\lambda)</math> be an elementary matrix of type 3. Since <math>E_{ij}(\lambda)=(C_1,...,C_i+\lambda C_j,...,C_n)</math>, with <math>C_k</math> the columns of the identity matrix, we have<blockquote><math>\det(E_{ij}(\lambda))=1</math></blockquote>Moreover, for <math>A\in M_n(K)</math> and elementary matrices <math>E, E_1,...,E_n</math> (of any type), the following hold:
 
# <math>\det(AE)=\det(A)\det(E)</math>
# <math>\det(AE_1...E_n)=\det(A)\det(E_1)...\det(E_n)</math>
# <math>\det(E)=\det(E^t)</math>
 
{{Caixa desplegable|títol=Demostració|contingut=1. -Sigui <math>E=E_{ij}</math> una matriu elemental de tipus 1.
:Per definició d'aquest tipus de matrius sabem que la matriu <math>AE_{ij}</math> és la matriu <math>A</math> amb les columnes <math>i,j</math> intercanviades, per tant, es compleix que <math>\det(AE_{ij})=-\det(A)=\det(E_{ij})\det(A)=\det(A)\det(E_{ij})</math> ja que <math>-1=\det(E_{ij}).</math>
:-Sigui <math>E=E_{i}(\lambda)</math> una matriu elemental de tipus 2. Per definició d'aquest tipus de matrius sabem que la matriu <math>AE_{i}(\lambda)</math> és la matriu <math>A</math> amb la columna <math>i</math> multiplicada per <math>\lambda</math>, per tant, es compleix que <math>\det(AE_{i}(\lambda))=\lambda\det(A)=\det(E_{i}(\lambda))\det(A)=\det(A)\det(E_{i}(\lambda))</math> ja que <math>\lambda=\det(E_{i}(\lambda)).</math>
:-Sigui <math>E=E_{ij}(\lambda)</math> una matriu elemental de tipus 3. Per definició d'aquest tipus de matrius sabem que la matriu <math>AE_{ij}(\lambda)</math> és la matriu <math>A</math> amb la columna <math>C_i=C_i+\lambda C_j</math>, per tant, es compleix que <math>\det(AE_{ij}(\lambda))=\det(C_1,...,C_i+\lambda C_j,...,C_n)=\det(C_1,...,C_i,...,C_n)+\lambda\det(C_1,...,C_j,...,C_j,...,C_n)=\det(A)=\det(A)\det(E_{ij}(\lambda))</math> ja que <math>1=\det(E_{ij}(\lambda)).</math>
 
2. Per inducció, l'enunciat per a <math>n=1</math> l'hem demostrat a 1. Suposem que és cert per a <math>n=m</math>, aleshores, per 1 sabem que
:<math>\det(AE_1,...,E_{m+1})=\det(AE_1,...,E_m)\det(E_{m+1})=\det(A)\det(E_1)...\det(E_m)\det(E_{m+1})</math>
 
3. Elementary matrices of types 1 and 2 are symmetric (a column swap of the identity, and a diagonal scaling), so <math>E=E^t</math> and clearly <math>\det(E)=\det(E^t)</math>.
:If <math>E=E_{ij}(\lambda)</math> is an elementary matrix of type 3, then its transpose is <math>E_{ji}(\lambda)</math>, again a matrix of type 3. Since the determinant of these matrices is always 1, <math>\det(E)=1=\det(E^t)</math>}}
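The three determinant values and the product rule <math>\det(AE)=\det(A)\det(E)</math> can be checked numerically. The sketch below is a pure-Python illustration (all helper names such as <code>det</code>, <code>swap_cols</code> are mine, not part of the article): it computes determinants via the Leibniz sum over permutations, builds one elementary matrix of each type, and verifies the stated identities on a sample matrix.

```python
from itertools import permutations

def det(m):
    """Determinant via the Leibniz sum over permutations (fine for small n)."""
    n = len(m)
    total = 0
    for perm in permutations(range(n)):
        # sign of the permutation = (-1)^(number of inversions)
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
        prod = 1
        for i in range(n):
            prod *= m[i][perm[i]]
        total += (-1) ** inv * prod
    return total

def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def swap_cols(m, i, j):          # type 1: E_ij
    out = [row[:] for row in m]
    for row in out:
        row[i], row[j] = row[j], row[i]
    return out

def scale_col(m, i, lam):        # type 2: E_i(lambda)
    out = [row[:] for row in m]
    for row in out:
        row[i] *= lam
    return out

def add_col(m, i, j, lam):       # type 3: E_ij(lambda), C_i <- C_i + lam*C_j
    out = [row[:] for row in m]
    for row in out:
        row[i] += lam * row[j]
    return out

E1 = swap_cols(identity(3), 0, 2)   # det = -1
E2 = scale_col(identity(3), 1, 5)   # det = 5
E3 = add_col(identity(3), 0, 2, 7)  # det = 1
assert (det(E1), det(E2), det(E3)) == (-1, 5, 1)

A = [[2, 1, 0], [1, 3, 4], [0, 1, 1]]
for E in (E1, E2, E3):
    assert det(matmul(A, E)) == det(A) * det(E)  # det(AE) = det(A) det(E)
```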
 
=== Determinant of an endomorphism ===
Let ''u'' be an [[:ca:Endomorfisme|endomorphism]] of a finite-dimensional vector space. All matrices representing ''u'' have the same determinant. This common value is called the determinant of ''u''. The determinant of ''u'' is the factor by which ''u'' multiplies the determinants of vectors
 
: <math>\det{}_B(u(x_1),\dots, u(x_n))=\det u \times \det{}_B(x_1,\dots, x_n)\,</math>
 
{{Caixa desplegable|align=left|títol=Proof of these two properties|contingut=Introduce the map <math>d_{u,B}</math> that associates to ''x<sub>1</sub>, ..., x<sub>n</sub>''
:<math>d_{u,B}(x_1,\dots,x_n) = \det{}_B(u(x_1), \dots, u(x_n))\,</math>
It is an alternating ''n''-linear form, and its value on the vectors of ''B'', written <math>d_{u,B}(B)</math>, is precisely the determinant of the matrix representing ''u'' in the basis ''B''.
The form <math>d_{u,B}</math> is therefore proportional to the determinant in basis ''B''; the proportionality factor is computed by taking the image of the vectors of ''B''
:<math>d_ {u,B}= d_{u,B}(B) \times \det{}_B\,</math>
which means, for an ''n''-tuple of vectors,
:<math>\det{}_B(u(x_1),\dots, u(x_n))=d_{u,B}(B) \times \det{}_B(x_1,\dots, x_n) \qquad (1)</math>

It remains to prove that if ''B' ''is another basis of ''E'', then ''d<sub>u, B</sub>(B)'' equals ''d<sub>u, B ' </sub>(B ')''. For this, apply the change-of-basis formula to both sides of (1).}}Endomorphisms of determinant 1 preserve the determinant of vectors. They form a subgroup of ''Gl(E)'', denoted ''Sl(E)'', called the [[:ca:Grup_especial_lineal|special linear group]]. In a real space of dimension two, they can be viewed as the linear maps that preserve oriented areas; in dimension three, oriented volumes.
 
It can be shown that this group is generated by the [[:ca:Transvections|transvections]], whose matrix in a suitable basis has the form
 
: <math>\begin{bmatrix}
1 & & & & \\
 & 1 & \lambda & & \\
 & & \ddots & & \\
 & & & 1 & \\
 & & & & 1
\end{bmatrix}=I_n+\lambda E_{ij}</math>
 
{| border="0"
|+Effect of a transvection in space (volume is preserved)
|-
|[[Fitxer:Transvection-1.jpg|jmpl|Figure 8. Cube before the transvection]]
|[[Fitxer:Transvection-2.jpg|jmpl|Cube after the transvection]]
|}
By the very construction of the determinant of endomorphisms, two [[:ca:Semblança_de_matrius|similar matrices]] have the same determinant.
 
== Properties ==
Up to the choice of a basis, these properties can be stated in the matrix setting.

=== Alternating ''n''-linear character ===
The determinant map on families of vectors is an alternating multilinear form. Using this property on a matrix requires expressing the system of column vectors, or of row vectors. For example, if the matrix ''A'' has columns ''C<sub>1</sub>, ..., C<sub>n</sub>'' with ''C<sub>i</sub>'' of the form ''C<sub>i</sub>=aC '<sub>i</sub>+C ' '<sub>i</sub>''

: <math>\det(C_1,C_2,\dots,aC'_i+C''_i,\dots,C_n)=a\cdot\det(C_1,\dots,C'_i,\dots, C_n)+\det(C_1,C_2,\dots,C''_i,\dots,C_n)\,</math>
 
An important property of the determinant is that swapping two columns multiplies the determinant by -1.{{Caixa desplegable|align=left|títol=Proof|contingut=Indeed, this is seen clearly by applying properties (1) and (3) of the definition of the determinant to the determinant
<math>\det(C_{1},...,C_{i}+C_{j},...,C_{i}+C_{j},...,C_{n})
</math>. This determinant is zero by property (3) (it has two equal columns); hence, applying property (1):

<math> 0=\det(C_{1},...,C_{i}+C_{j},...,C_{i}+C_{j},...,C_{n})</math>
<math> =\det(C_{1},...,C_{i},...,C_{i},...,C_{n})+\det(C_{1},...,C_{i},...,C_{j},...,C_{n})+\det(C_{1},...,C_{j},...,C_{i},...,C_{n})+\det(C_{1},...,C_{j},...,C_{j},...,C_{n}) </math>

Again by property (3):

<math>0=\det(C_{1},...,C_{i},...,C_{j},...,C_{n})+\det(C_{1},...,C_{j},...,C_{i},...,C_{n})\Longrightarrow \det(C_{1},...,C_{i},...,C_{j},...,C_{n})=-\det(C_{1},...,C_{j},...,C_{i},...,C_{n}) </math>}}Here is the effect of the [[:ca:Operació_elemental|elementary operations]] on the columns of the matrix
 
* multiplying a column by ''a'' multiplies the determinant by the same value
* swapping two columns multiplies the determinant by -1
* adding to a column a linear combination of the other columns does not change the determinant.

If every column is multiplied by ''a'', the result is a multiplication of the determinant by ''a<sup>n</sup>''
 
: <math>\det (a \times M) = a^n \times \det{M}</math>
 
By contrast, there is no simple formula for the determinant of the sum ''A+B'' of two matrices. Indeed, applying multilinearity with respect to the columns requires writing each column of the sum as ''A<sub>i</sub>+B<sub>i</sub>'' and then applying the linearity property ''n'' times. In the end, the determinant of ''A+B'' splits into a sum of ''2<sup>n</sup>'' hybrid determinants det(''A<sub>1</sub>, A<sub>2</sub>, B<sub>3</sub>, A<sub>4</sub>, ..., B<sub>n</sub>''), formed from some columns of ''A'' and some of ''B''. It is equally possible to perform elementary operations on the rows, which have the same properties as operations on the columns. Operating on the rows following the technique of [[:ca:Pivot_de_Gauss|Gaussian elimination]] gives a systematic method for computing determinants; as a rule, it is the most efficient method.
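The row-operation method just described can be sketched in a few lines of Python (the function name <code>det_gauss</code> is mine): each row swap flips the sign of the determinant, adding a multiple of one row to another changes nothing, and at the end the determinant is the signed product of the pivots.

```python
def det_gauss(m):
    """Determinant by Gaussian elimination with partial pivoting, O(n^3)."""
    a = [row[:] for row in m]  # work on a copy
    n = len(a)
    sign = 1
    for k in range(n):
        # choose the largest pivot in column k to limit rounding error
        p = max(range(k, n), key=lambda r: abs(a[r][k]))
        if a[p][k] == 0:
            return 0.0  # a zero pivot column means the matrix is singular
        if p != k:
            a[k], a[p] = a[p], a[k]
            sign = -sign  # a row swap flips the sign of the determinant
        for r in range(k + 1, n):
            f = a[r][k] / a[k][k]
            for c in range(k, n):
                a[r][c] -= f * a[k][c]  # row operation: leaves det unchanged
    prod = float(sign)
    for k in range(n):
        prod *= a[k][k]  # det = sign * product of the pivots
    return prod

A = [[2.0, 1.0, 0.0], [1.0, 3.0, 4.0], [0.0, 1.0, 1.0]]
assert abs(det_gauss(A) - (-3.0)) < 1e-9
```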
 
=== Morphism and vanishing properties ===
[[Fitxer:Augustin_Louis_Cauchy.JPG|jmpl|[[:ca:Augustin_Louis_Cauchy|Augustin Louis Cauchy]] proved that the determinant is a morphism]] Cases where the determinant vanishes:
 
* the determinant of a system of ''n'' vectors is zero if and only if the system is [[:ca:Dependència_lineal|linearly dependent]] (and this holds whatever the reference basis)
* the determinant of a matrix (or of an endomorphism) is zero if and only if the matrix (or endomorphism) is not invertible.

These properties explain the essential role determinants play in linear algebra. They are a fundamental tool for proving that a family of vectors is a basis.{{Caixa desplegable|títol=Proof of the vanishing case|contingut=1. If the system is linearly dependent, one column is a linear combination of the others. By an elementary operation, it can be transformed into a determinant with a zero column, so the determinant is zero. <br/>

2. When discussing the determinant of a matrix, we already saw that
:<math>A \notin GL_n(K) \Longrightarrow \det(A)=0</math>. We now show that <math>A \notin GL_n(K) \Longleftarrow \det(A)=0</math> or, equivalently, that <math>A \in GL_n(K) \Longrightarrow \det(A)\neq0</math>
:We have seen that the determinant of every elementary matrix is nonzero. Since every invertible matrix is a product of elementary matrices, if <math>A\in GL_n(K)</math> then <math>\det(A)=\det(E_1...E_r)=\det(E_1)...\det(E_r)\neq0</math>}}Morphism property:
 
* <math>\det (M \times N) = \det {M} \times \det{N}</math>
* hence if ''M'' is invertible then <math>\det {M^{-1}} = (\det{M})^{-1}\,</math>
* and the determinant is a [[:ca:Morfisme|morphism]] of monoids from <math>(M_n(K),\times)</math> to <math>(K,\times)</math>, which restricts to a morphism of groups from <math>GL_n(K)</math> to <math>(K\setminus\{0\},\times)</math>
 
{{Caixa desplegable|títol=Proof of the morphism property|contingut=1. By the properties of matrices, if <math>N</math> is not invertible, then neither is <math>MN</math>. In that case
:<math>\det(MN)=0=\det(M)\det(N)</math> since <math>\det(N)=0</math>.
:If the matrix <math>N</math> is invertible, then it is a product of elementary matrices, <math>N=E_1...E_r</math>, and it satisfies:
:<math>\det(MN)=\det(ME_1...E_r)=\det(M)\det(E_1)...\det(E_r)=\det(M)\det(E_1...E_r)=\det(M)\det(N)</math>

2. This follows from the above: take <math>N=M^{-1}</math>; then
:<math>1=\det(I_n)=\det(MM^{-1})=\det(M)\det(M^{-1})\Longrightarrow \det(M^{-1})=\frac{1}{\det(M)}=(\det(M))^{-1}</math>

3. A morphism is, by definition, a map between two structures satisfying <math>f(a*b)=f(a)\star f(b)</math>, where <math>*</math> is the operation of the first structure and <math>\star</math> that of the second. Since <math>\det(A\times B)=\det(A)\det(B)</math>, the determinant map is a morphism of monoids from the square matrices <math>M_n(K)</math> with the matrix product <math>\times</math> to the set <math>K</math> with its product; restricted to the invertible matrices, it is a morphism of groups from <math>GL_n(K)</math> to <math>K\setminus\{0\}</math>.}}There is a generalization of the product formula to the case of two rectangular matrices: the [[:ca:Fórmula_de_Binet-Cauchy|Binet-Cauchy formula]].
 
=== Cofactors and recurrence formula ===
{{article principal|Matriu d'adjunts}} Let ''A'' be a square matrix of dimension ''n'', and ''A(x)'' the matrix with the same coefficients as ''A'', except that the term of index ''i, j'' is ''a<sub>i, j</sub>+x'' (one coefficient of the matrix is modified; everything else is kept the same). By the linearity formula for the ''j''-th column, one can establish
 
: <math>\det A(x)=\det A + x(-1)^{i+j}\begin{vmatrix}a_{1,1} & \dots & a_{1,j-1}& a_{1,j+1}& \dots & a_{1,n} \\\vdots & & \vdots & \vdots& &\vdots\\
a_{i-1,1} & \dots & a_{i-1,j-1}& a_{i-1,j+1}& \dots & a_{i-1,n} \\
a_{i+1,1} & \dots & a_{i+1,j-1}& a_{i+1,j+1}& \dots & a_{i+1,n} \\
\vdots & & \vdots & \vdots &&\vdots\\
a_{n,1} & \dots & a_{n,j-1}& a_{n,j+1}& \dots & a_{n,n}\end{vmatrix} = \det A+x {\rm Cof}_{i,j}</math>
 
The term written Cof<sub>i, j</sub> is called the '''cofactor of index ''i, j'''''. It is computed as follows: writing M(i;j) for the determinant of the submatrix obtained from M by deleting row i and column j, ''the cofactor is'' (-1)<sup>i+j</sup> ''times M(i;j).'' It admits the following interpretations

* increasing the coefficient of index ''i, j'' of the matrix by x (keeping everything else the same) increases the determinant by x times the corresponding cofactor
* the cofactor is the derivative of the determinant of the matrix ''A(x)''
 
'''Laplace formulas''' [[Fitxer:Pierre-Simon Laplace.jpg|jmpl|Pierre-Simon Laplace]] If n>1 and ''A'' is a square matrix of dimension ''n'', then its determinant can be computed from the coefficients of a single column and the corresponding cofactors. This formula, known as the [[:ca:Pierre-Simon_Laplace|Laplace]] expansion, turns the computation of the determinant into ''n'' computations of determinants of dimension ''n-1''.

* Expansion formula along column ''j''

: <math>\det{A}=\sum_{i=1}^{n} a_{i;j} {\rm Cof}_{i,j}</math>

* An expansion formula along row ''i'' can likewise be given

: <math>\det{A}=\sum_{j=1}^{n} a_{i;j} {\rm Cof}_{i,j}</math>
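The two expansion formulas translate directly into a recursive procedure. A minimal sketch (the name <code>det_laplace</code> is mine), expanding along the first row:

```python
def det_laplace(m):
    """Determinant by Laplace expansion along the first row.

    Elegant but exponential in cost: n determinants of dimension n-1,
    each of which expands again, so use only for small matrices."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # minor: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        # cofactor of index (1, j+1) carries the sign (-1)^j
        total += (-1) ** j * m[0][j] * det_laplace(minor)
    return total

A = [[2, 1, 0], [1, 3, 4], [0, 1, 1]]
assert det_laplace(A) == -3
```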
 
'''Adjugate matrix and computation of the inverse'''

The [[:ca:Matriu_d'adjunts|cofactor matrix]] of ''A'', or '''comatrix of ''A''''', is the matrix made up of the cofactors of ''A''. It generalizes the expansion formulas of the determinant along rows or columns

: <math>A \times {}^t{{\rm com} A} = {}^t{{\rm com} A}\times A =\det{A} \times I_n</math>

The transpose of the cofactor matrix is called the '''complementary matrix''' of ''A''. If ''A'' is invertible, the inverse of ''A'' is a multiple of the complementary matrix. This approach gives a formula for the inverse matrix that requires nothing but determinant computations
 
: <math>A^{-1}=\frac1{\det A} \, {}^t{{\rm com} A}</math>
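As a sketch of this inverse formula (helper names are mine; exact arithmetic via <code>fractions.Fraction</code> avoids rounding), the comatrix is assembled cofactor by cofactor and transposed:

```python
from fractions import Fraction

def det(m):
    """Determinant by Laplace expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(n))

def cofactor(m, i, j):
    """Cof_{i,j} = (-1)^(i+j) times the minor obtained by deleting row i, column j."""
    minor = [r[:j] + r[j + 1:] for k, r in enumerate(m) if k != i]
    return (-1) ** (i + j) * det(minor)

def inverse(m):
    """A^{-1} = (1/det A) * transpose of the cofactor matrix."""
    n = len(m)
    d = det(m)
    # entry (i, j) of the inverse is Cof_{j,i} / det A  (note the transposition)
    return [[Fraction(cofactor(m, j, i), d) for j in range(n)] for i in range(n)]

A = [[2, 1], [5, 3]]                     # det A = 1
assert inverse(A) == [[3, -1], [-5, 2]]  # A^{-1}, exact
```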
 
=== Variations of the determinant function ===
The Leibniz formula shows that the determinant of a matrix ''A'' is expressed as sums and products of entries of ''A''. It is therefore not surprising that the determinant has good regularity properties.

==== Determinant depending on a parameter ====
If <math>t\mapsto A(t)</math> is a function of class <math>\mathcal C^k</math> with values in the square matrices of order ''n'', then <math>t\mapsto \det A(t)</math> is likewise of class <math>\mathcal C^k</math>.

The differentiation formula is obtained from the columns of ''A''
 
: <math>\frac{{\rm d}}{{\rm d}t} \left(\det (A_1(t),\dots, A_n(t)) \right)= \sum_{i=1}^n \det (A_1(t),\dots, A_{i-1}(t),A'_i(t),A_{i+1}(t),\dots, A_n(t))</math>
 
This formula is formally analogous to the [[:ca:Derivada|derivative]] of a product of ''n'' numerical functions.
 
==== The determinant map on the space of matrices ====

* The map sending a matrix ''A'' to its determinant is continuous.

This property has interesting topological consequences: thus the [[:ca:Grup_general_lineal|group GL<sub>n</sub>]] (<math>\mathbb{R}</math>) is an [[:ca:Obert_(matemàtiques)|open set]], and the subgroup SL<sub>n</sub>(<math>\mathbb{R}</math>) is [[:ca:Tancat_(matemàtiques)|closed]].

* This map is [[:ca:Derivada|differentiable]], and indeed <math>\mathcal C^\infty</math>.

The first-order Taylor expansion of the determinant around ''A'' reads
 
: <math>\det (A+H)=\det A + {\rm tr } ({}^t{\rm Com }(A).H)+o(\|H\|)</math>
 
That is, in M<sub>n</sub>(<math>\mathbb{R}</math>) equipped with its canonical scalar product, the [[:ca:Matriu_d'adjunts|cofactor matrix]] is interpreted as the [[:ca:Gradient_(matemàtiques)|gradient]] of the determinant map
 
: <math>\nabla \det (A) = {\rm Com }(A)</math>
 
For the case where ''A'' is the identity
 
: <math>\det (I+H)=1 + {\rm tr } (H)+o(\|H\|)\qquad \nabla \det (I) = I</math>
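The first-order expansion at the identity is easy to check numerically. In this sketch (<code>det3</code> is an ad-hoc 3×3 determinant of my own), the gap between det(I+H) and 1 + tr(H) is of second order in H:

```python
def det3(m):
    """Explicit 3x3 determinant (rule of Sarrus)."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h

eps = 1e-6
H = [[eps * x for x in row] for row in [[1.0, 2.0, 3.0],
                                        [4.0, 5.0, 6.0],
                                        [7.0, 8.0, 9.0]]]
IpH = [[(1.0 if i == j else 0.0) + H[i][j] for j in range(3)] for i in range(3)]
trace_H = H[0][0] + H[1][1] + H[2][2]

# det(I+H) - (1 + tr H) is O(||H||^2), here around 1e-11
assert abs(det3(IpH) - (1.0 + trace_H)) < 1e-10
```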
 
The differentiable character makes it possible to assert that GL<sub>n</sub>(<math>\mathbb{R}</math>) is a [[:ca:Grup_de_Lie|Lie group]].

* It is also the [[:ca:Polinomi|polynomial]] that makes GL<sub>n</sub>(<math>\mathbb{R}</math>) an [[:ca:Varietat_algebraica|algebraic variety]].

These formulas sometimes go by the name of [[:ca:Carl_Gustav_Jakob_Jacobi|Jacobi]] identities. They are explained in the article [[:ca:Matriu_d'adjunts|cofactor matrix]].
 
== Generalization to vector spaces over other fields and to modules ==
The various definitions and properties of the theory of determinants carry over verbatim to complex vector spaces and to matrices with complex coefficients. The same holds over any commutative [[:ca:Cos_(matemàtiques)|field]], except for the section «variations of the determinant function», which then has no meaning.

Almost all of the theory of determinants can be further extended to matrices with coefficients in a commutative [[:ca:Anell_(matemàtiques)|ring]] ''A'' and to finite-dimensional modules over ''A''. The only point of divergence is the characterization of the vanishing of determinants.

Thus a matrix with coefficients in a commutative ring ''A'' is invertible if and only if its determinant is invertible in ''A''.

The question of the algorithm for computing the determinant must be revisited. Indeed, Gaussian elimination requires divisions, which are not possible in the ring ''A'' itself. The Leibniz and Laplace formulas allow a division-free computation, but they remain very expensive in computing time. There are more reasonable algorithms, with running time of order ''n''<sup>4</sup>; in particular, the Gaussian elimination algorithm adapts to the case of a [[:ca:Anell_euclidià|Euclidean ring]]; this adaptation is described in the article on the [[:ca:Teorema_dels_factors_invariants|theorem of invariant factors]]. The website of the Free University of Berlin offers a [http://page.inf.fu-berlin.de/~rote/Papers/pdf/Division-free+algorithms.pdf reference document on the question of division-free algorithms] {{en}}.
 
== References ==
<references responsive="1" group=""></references>
 
== Bibliography ==

* {{ref-llibre|títol=Algèbre|editor=Dunod|nom=Serge|cognom=Lang|any=2004|pàgines=926|isbn=2100079808}}
* {{ref-llibre|títol=Algebra|editor=Prentice Hall Inc.|cognom=Artin|nom=Michael|lloc=Englewood Cliffs, NJ|data=1991|isbn=0-13-004763-5}}
* Henri Cartan. ''Cours de calcul différentiel'', Paris, Hermann, 1977
* {{Ref-llibre|títol=Matrices, géométrie, algèbre linéaire|editor=Cassini|cognom=Gabriel|nom=Pierre|data=2001|lloc=París|isbn=978-2-84225-018-8|citació=for a matrix-based introduction built on transvections}}
 
== See also ==

* [[:ca:Matriu_d'adjunts|Adjugate matrix]]
* [[:ca:Teorema_del_determinant_de_Sylvester|Sylvester's determinant theorem]]
* [[:ca:Polinomi_característic|Characteristic polynomial]]
* [[:ca:Regla_de_Cramer|Cramer's rule]]
* [[:ca:Wronskià|Wronskian]]
* [[:ca:Jacobià|Jacobian]]
* [[:ca:Producte_vectorial|Cross product]]
* [[:ca:Determinant_de_Slater|Slater determinant]]
 
== External links ==

* [http://xavier.hubaut.info/coursmath/mat/applin.htm Secondary-school mathematics, linear maps, by Xavier Hubaut]
* [http://www.bluebit.gr/matrix-calculator/ An online determinant calculator, in English] {{Webarchive|url=https://web.archive.org/web/20081212221215/http://www.bluebit.gr/matrix-calculator/|date=2008-12-12}}
* [http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/Matrices_and_determinants.html A website in English on the history of matrices and determinants] {{Webarchive|url=https://web.archive.org/web/20150308120526/http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/Matrices_and_determinants.html|date=2015-03-08}}
 
{{Article de qualitat}}{{Autoritat}}{{ORDENA:Determinant (matematiques)}}