LSH is a cryptographic hash function designed in 2014 by South Korea to provide integrity in general-purpose software environments such as PCs and smart devices. LSH is one of the cryptographic algorithms approved by the Korean Cryptographic Module Validation Program (KCMVP), and it is the national standard of South Korea (KS X 3262).
== Specification ==
The overall structure of the hash function LSH is shown in the following figure.
The hash function LSH has the wide-pipe Merkle-Damgård structure with one-zeros padding.
The message hashing process of LSH consists of the following three stages.
Initialization:
One-zeros padding of a given bit string message.
Conversion to 32-word array message blocks from the padded bit string message.
Initialization of a chaining variable with the initialization vector.
Compression:
Updating of chaining variables by iteration of a compression function with message blocks.
Finalization:
Generation of an n-bit hash value from the final chaining variable.
The specifications of the hash function LSH are as follows.
=== Initialization ===
Let m be a given bit string message. The given m is padded by one-zeros, i.e., the bit '1' is appended to the end of m, and '0' bits are appended until the bit length of the padded message is 32wt, where t = ⌈(|m|+1)/32w⌉ and ⌈x⌉ is the smallest integer not less than x.
Let m_p = m_0 ‖ m_1 ‖ … ‖ m_(32wt−1) be the one-zeros-padded 32wt-bit string of m.
Then m_p is considered as a 4wt-byte array m_a = (m[0], …, m[4wt−1]), where m[k] = m_8k ‖ m_(8k+1) ‖ … ‖ m_(8k+7) for all 0 ≤ k ≤ 4wt−1.
The 4wt-byte array m_a is converted into a 32t-word array M = (M[0], …, M[32t−1]) as follows.
M[s] ← m[ws/8 + (w/8 − 1)] ‖ … ‖ m[ws/8 + 1] ‖ m[ws/8]  (0 ≤ s ≤ 32t − 1)
From the word array M, we define the t 32-word array message blocks {M^(i)}_{i=0}^{t−1} as follows.
M^(i) ← (M[32i], M[32i+1], …, M[32i+31])  (0 ≤ i ≤ t − 1)
The 16-word array chaining variable CV^(0) is initialized to the initialization vector IV.
CV^(0)[l] ← IV[l]  (0 ≤ l ≤ 15)
The initialization vector IV is as follows.
In the following tables, all values are expressed in hexadecimal form.
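The padding and word-array conversion above can be sketched in Python (a bit-level illustration; the function names are ours, and real implementations operate on bytes directly). Here w is the word size in bits: 32 for LSH-256, 64 for LSH-512.

```python
def one_zeros_pad(msg_bits, w):
    """One-zeros padding: append '1', then '0's up to a multiple of 32w bits.

    msg_bits is a list of 0/1 values; w is the word size in bits.
    """
    t = (len(msg_bits) + 1 + 32 * w - 1) // (32 * w)   # t = ceil((|m|+1)/32w)
    padded = msg_bits + [1] + [0] * (32 * w * t - len(msg_bits) - 1)
    return padded, t

def to_word_array(padded_bits, w):
    """Group bits into bytes, then assemble words with the least significant
    byte first, following the M[s] formula above."""
    nbytes = len(padded_bits) // 8
    m = [int("".join(map(str, padded_bits[8*k:8*k+8])), 2) for k in range(nbytes)]
    words = []
    for s in range(len(padded_bits) // w):
        word = 0
        for b in range(w // 8):        # m[ws/8] is the least significant byte
            word |= m[(w * s) // 8 + b] << (8 * b)
        words.append(word)
    return words
```

For the empty message with w = 32, padding yields a single 1024-bit block whose first word is 0x80 and whose remaining words are zero.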
=== Compression ===
In this stage, the t 32-word array message blocks {M^(i)}_{i=0}^{t−1}, which are generated from a message m in the initialization stage, are compressed by iteration of compression functions.
The compression function CF : W^16 × W^32 → W^16 takes two inputs: the i-th 16-word chaining variable CV^(i) and the i-th 32-word message block M^(i). It returns the (i+1)-th 16-word chaining variable CV^(i+1). Here and subsequently, W^t denotes the set of all t-word arrays for t ≥ 1.
The following four functions are used in a compression function:
Message expansion function MsgExp : W^32 → W^{16(N_s+1)}
Message addition function MsgAdd : W^16 × W^16 → W^16
Mix function Mix_j : W^16 → W^16
Word-permutation function WordPerm : W^16 → W^16
The overall structure of the compression function is shown in the following figure.
In a compression function, the message expansion function MsgExp generates (N_s + 1) 16-word array sub-messages {M_j^(i)}_{j=0}^{N_s} from the given M^(i).
Let T = (T[0], …, T[15]) be a temporary 16-word array set to the i-th chaining variable CV^(i).
The j-th step function Step_j, having two inputs T and M_j^(i), updates T, i.e., T ← Step_j(T, M_j^(i)).
All step functions are performed in order j = 0, …, N_s − 1.
Then one more MsgAdd operation with M_{N_s}^(i) is performed, and the (i+1)-th chaining variable CV^(i+1) is set to T.
The detailed process of a compression function is as follows.
Here the j-th step function Step_j : W^16 × W^16 → W^16 is defined as follows.
Step_j := WordPerm ∘ Mix_j ∘ MsgAdd  (0 ≤ j ≤ N_s − 1)
The following figure shows the j-th step function Step_j of a compression function.
==== Message Expansion Function MsgExp ====
Let M^(i) = (M^(i)[0], …, M^(i)[31]) be the i-th 32-word array message block.
The message expansion function MsgExp generates (N_s + 1) 16-word array sub-messages {M_j^(i)}_{j=0}^{N_s} from a message block M^(i).
The first two sub-messages M_0^(i) = (M_0^(i)[0], …, M_0^(i)[15]) and M_1^(i) = (M_1^(i)[0], …, M_1^(i)[15]) are defined as follows.
M_0^(i) ← (M^(i)[0], M^(i)[1], …, M^(i)[15])
M_1^(i) ← (M^(i)[16], M^(i)[17], …, M^(i)[31])
The next sub-messages {M_j^(i) = (M_j^(i)[0], …, M_j^(i)[15])}_{j=2}^{N_s} are generated as follows.
M_j^(i)[l] ← M_{j−1}^(i)[l] ⊞ M_{j−2}^(i)[τ(l)]  (0 ≤ l ≤ 15, 2 ≤ j ≤ N_s)
Here τ is the permutation over Z_16 defined as follows.
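The expansion recurrence can be sketched as below. The permutation τ comes from a table in the specification (not reproduced here), so this sketch takes it as a parameter; for illustration an identity permutation works. ⊞ is addition modulo 2^w, expressed here by masking.

```python
def msg_exp(M, Ns, tau, mask):
    """Sketch of MsgExp: from a 32-word message block M, produce Ns+1
    sub-messages of 16 words each. `tau` is a permutation of 0..15
    (the spec's table values in a real implementation); `mask` is
    2**w - 1 for word size w."""
    subs = [list(M[0:16]), list(M[16:32])]          # M_0 and M_1
    for j in range(2, Ns + 1):
        prev, prev2 = subs[j - 1], subs[j - 2]
        # M_j[l] = M_{j-1}[l] + M_{j-2}[tau(l)]  (modular word addition)
        subs.append([(prev[l] + prev2[tau[l]]) & mask for l in range(16)])
    return subs
```

With an identity τ and the block (0, 1, …, 31), the third sub-message is simply the element-wise sum of the first two.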
==== Message Addition Function MsgAdd ====
For two 16-word arrays X = (X[0], …, X[15]) and Y = (Y[0], …, Y[15]), the message addition function MsgAdd : W^16 × W^16 → W^16 is defined as follows.
MsgAdd(X, Y) := (X[0] ⊕ Y[0], …, X[15] ⊕ Y[15])
==== Mix Function Mix ====
The j-th mix function Mix_j : W^16 → W^16 updates the 16-word array T = (T[0], …, T[15]) by mixing every two-word pair T[l] and T[l+8] for 0 ≤ l < 8.
For 0 ≤ j < N_s, the mix function Mix_j proceeds as follows.
(T[l], T[l+8]) ← Mix_{j,l}(T[l], T[l+8])  (0 ≤ l < 8)
Here Mix_{j,l} is a two-word mix function. Let X and Y be words. The two-word mix function Mix_{j,l} : W^2 → W^2 is defined as follows.
The two-word mix function Mix_{j,l} is shown in the following figure.
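As an illustration of this style of design, the sketch below shows a generic ARX (add-rotate-XOR) two-word mix. This is not the exact LSH Mix_{j,l}: the precise operation order, rotation amounts α_j, β_j, γ_l and constants SC_j are given by the spec's figure and tables, and the values used here are placeholders.

```python
MASK32 = 2**32 - 1

def rotl(x, r, w=32):
    """Rotate a w-bit word left by r bits."""
    return ((x << r) | (x >> (w - r))) & (2**w - 1)

def mix_pair(X, Y, alpha, beta, gamma, sc):
    """Illustrative ARX two-word mix: modular additions, rotations, and an
    XOR with a step constant sc. All parameters are placeholders."""
    X = (X + Y) & MASK32
    X = rotl(X, alpha) ^ sc
    Y = (Y + X) & MASK32
    Y = rotl(Y, beta)
    X = (X + Y) & MASK32
    Y = rotl(Y, gamma)
    return X, Y
```

Every operation here is invertible, so the pair map is a permutation of word pairs, which is the property a mix layer needs.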
The bit rotation amounts α_j, β_j, γ_l used in Mix_{j,l} are shown in the following table.
The j-th 8-word array constant SC_j = (SC_j[0], …, SC_j[7]) used in Mix_{j,l} for 0 ≤ l < 8 is defined as follows.
The initial 8-word array constant SC_0 = (SC_0[0], …, SC_0[7]) is defined in the following table.
For 1 ≤ j < N_s, the j-th constant SC_j = (SC_j[0], …, SC_j[7]) is generated by SC_j[l] ← SC_{j−1}[l] ⊞ (SC_{j−1}[l] ⋘ 8) for 0 ≤ l < 8.
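The constant recurrence can be sketched as follows (word size w = 32 assumed, as in LSH-256; the real SC_0 values come from the spec's table, while arbitrary values are used below for illustration).

```python
def next_constant(sc_prev, w=32):
    """SC_j[l] = SC_{j-1}[l] + (SC_{j-1}[l] rotated left by 8), per the
    recurrence above, with addition modulo 2**w."""
    mask = 2**w - 1
    def rotl8(x):
        return ((x << 8) | (x >> (w - 8))) & mask
    return [(x + rotl8(x)) & mask for x in sc_prev]

# Illustrative (not the spec's) SC_0 values:
sc1 = next_constant([1, 2, 0x80000000, 0])
assert sc1 == [0x101, 0x202, 0x80000080, 0]
```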
==== Word-Permutation Function WordPerm ====
Let X = (X[0], …, X[15]) be a 16-word array.
The word-permutation function WordPerm : W^16 → W^16 is defined as follows.
WordPerm(X) = (X[σ(0)], …, X[σ(15)])
Here σ is the permutation over Z_16 defined by the following table.
=== Finalization ===
The finalization function FIN_n : W^16 → {0,1}^n returns the n-bit hash value h from the final chaining variable CV^(t) = (CV^(t)[0], …, CV^(t)[15]).
When H = (H[0], …, H[7]) is an 8-word variable and h_b = (h_b[0], …, h_b[w−1]) is a w-byte variable, the finalization function FIN_n performs the following procedure.
H[l] ← CV^(t)[l] ⊕ CV^(t)[l+8]  (0 ≤ l ≤ 7)
h_b[s] ← (H[⌊8s/w⌋]^{⋙(8s mod w)})_{[7:0]}  (0 ≤ s ≤ w − 1)
h ← (h_b[0] ‖ … ‖ h_b[w−1])_{[0:n−1]}
Here, X_{[i:j]} denotes x_i ‖ x_{i−1} ‖ … ‖ x_j, the sub-bit string of a word X, for i ≥ j.
And x_{[i:j]} denotes x_i ‖ x_{i+1} ‖ … ‖ x_j, the sub-bit string of an l-bit string x = x_0 ‖ x_1 ‖ … ‖ x_{l−1}, for i ≤ j.
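Putting the three formulas together, the finalization can be sketched as below (w = 32 assumed; n is a multiple of 8 for the standard output lengths, so the final bit-truncation reduces to a byte slice).

```python
def finalize(cv, n, w=32):
    """FIN_n sketch following the formulas above: XOR-fold the 16-word
    chaining variable into 8 words, extract bytes, truncate to n bits.
    Since (8s mod w) is a multiple of 8, the rotate-then-take-low-byte
    step is a plain right shift here."""
    H = [cv[l] ^ cv[l + 8] for l in range(8)]
    hb = bytes((H[(8 * s) // w] >> ((8 * s) % w)) & 0xFF for s in range(w))
    return hb[: n // 8]
```

For cv = (0, 1, …, 15), every folded word H[l] equals 8, so the 256-bit output is the byte pattern 08 00 00 00 repeated eight times.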
== Security ==
To date, LSH is secure against all known attacks on hash functions.
LSH is collision-resistant for q < 2^{n/2} and preimage-resistant and second-preimage-resistant for q < 2^n in the ideal cipher model, where q is the number of queries to the LSH structure.
LSH-256 is secure against all the existing hash function attacks when the number of steps is 13 or more, while LSH-512 is secure if the number of steps is 14 or more.
Note that the steps serving as a security margin amount to 50% of the compression function.
== Performance ==
LSH outperforms SHA-2/3 on various software platforms.
The following table shows the speed of LSH when hashing a 1 MB message on several platforms.
The next table compares performance on Haswell-based platforms: LSH is measured on an Intel Core i7-4770K @ 3.5 GHz quad-core platform, while the others are measured on an Intel Core i5-4570S @ 2.9 GHz quad-core platform.
The following table is measured on Samsung Exynos 5250 ARM Cortex-A15 @ 1.7 GHz dual core platform.
== Test vectors ==
Test vectors for LSH for each digest length are as follows.
All values are expressed in hexadecimal form.
LSH-256-224("abc") = F7 C5 3B A4 03 4E 70 8E 74 FB A4 2E 55 99 7C A5 12 6B B7 62 36 88 F8 53 42 F7 37 32
LSH-256-256("abc") = 5F BF 36 5D AE A5 44 6A 70 53 C5 2B 57 40 4D 77 A0 7A 5F 48 A1 F7 C1 96 3A 08 98 BA 1B 71 47 41
LSH-512-224("abc") = D1 68 32 34 51 3E C5 69 83 94 57 1E AD 12 8A 8C D5 37 3E 97 66 1B A2 0D CF 89 E4 89
LSH-512-256("abc") = CD 89 23 10 53 26 02 33 2B 61 3F 1E C1 1A 69 62 FC A6 1E A0 9E CF FC D4 BC F7 58 58 D8 02 ED EC
LSH-512-384("abc") = 5F 34 4E FA A0 E4 3C CD 2E 5E 19 4D 60 39 79 4B 4F B4 31 F1 0F B4 B6 5F D4 5E 9D A4 EC DE 0F 27 B6 6E 8D BD FA 47 25 2E 0D 0B 74 1B FD 91 F9 FE
LSH-512-512("abc") = A3 D9 3C FE 60 DC 1A AC DD 3B D4 BE F0 A6 98 53 81 A3 96 C7 D4 9D 9F D1 77 79 56 97 C3 53 52 08 B5 C5 72 24 BE F2 10 84 D4 20 83 E9 5A 4B D8 EB 33 E8 69 81 2B 65 03 1C 42 88 19 A1 E7 CE 59 6D
== Implementations ==
LSH is free for any use, public or private, commercial or non-commercial.
Source code implementing LSH in C, Java, and Python can be downloaded from KISA's cryptography use activation webpage.
== KCMVP ==
LSH is one of the cryptographic algorithms approved by the Korean Cryptographic Module Validation Program (KCMVP).
== Standardization ==
LSH is included in the following standard.
KS X 3262, Hash function LSH (in Korean)
== References ==
Source: Wikipedia/LSH_(hash_function)
The Digital Signature Algorithm (DSA) is a public-key cryptosystem and Federal Information Processing Standard for digital signatures, based on the mathematical concept of modular exponentiation and the discrete logarithm problem. In a digital signature system, there is a keypair involved, consisting of a private and a public key. In this system, a signing entity that has declared their public key can generate a signature using their private key, and a verifier can confirm the source of a message by checking its signature against the declared public key. DSA is a variant of the Schnorr and ElGamal signature schemes.
The National Institute of Standards and Technology (NIST) proposed DSA for use in their Digital Signature Standard (DSS) in 1991, and adopted it as FIPS 186 in 1994. Five revisions to the initial specification have been released; the newest is FIPS 186-5 from February 2023. DSA is patented, but NIST has made this patent available worldwide royalty-free. FIPS 186-5 indicates that DSA is no longer approved for digital signature generation, but it may be used to verify signatures generated prior to the implementation date of that standard.
== Overview ==
The DSA works in the framework of public-key cryptosystems and is based on the algebraic properties of modular exponentiation, together with the discrete logarithm problem, which is considered to be computationally intractable. The algorithm uses a key pair consisting of a public key and a private key. The private key is used to generate a digital signature for a message, and such a signature can be verified by using the signer's corresponding public key. The digital signature provides message authentication (the receiver can verify the origin of the message), integrity (the receiver can verify that the message has not been modified since it was signed) and non-repudiation (the sender cannot falsely claim that they have not signed the message).
== History ==
In 1982, the U.S. government solicited proposals for a public key signature standard. In August 1991 the National Institute of Standards and Technology (NIST) proposed DSA for use in their Digital Signature Standard (DSS). Initially there was significant criticism, especially from software companies that had already invested effort in developing digital signature software based on the RSA cryptosystem. Nevertheless, NIST adopted DSA as a Federal standard (FIPS 186) in 1994. Five revisions to the initial specification have been released: FIPS 186-1 in 1998, FIPS 186-2 in 2000, FIPS 186-3 in 2009, FIPS 186-4 in 2013, and FIPS 186-5 in 2023. FIPS 186-5 forbids signing with DSA, while allowing verification of signatures generated prior to the standard's implementation date. DSA is to be replaced by newer signature schemes such as EdDSA.
DSA is covered by U.S. patent 5,231,668, filed July 26, 1991 and now expired, and attributed to David W. Kravitz, a former NSA employee. This patent was given to "The United States of America as represented by the Secretary of Commerce, Washington, D.C.", and NIST has made this patent available worldwide royalty-free. Claus P. Schnorr claims that his U.S. patent 4,995,082 (also now expired) covered DSA; this claim is disputed.
In 1993, Dave Banisar obtained confirmation, via a FOIA request, that the DSA algorithm had been designed not by NIST but by the NSA.
OpenSSH announced that DSA support was going to be removed in 2025; support was entirely dropped in version 10.0.
== Operation ==
The DSA algorithm involves four operations: key generation (which creates the key pair), key distribution, signing and signature verification.
=== 1. Key generation ===
Key generation has two phases. The first phase is a choice of algorithm parameters which may be shared between different users of the system, while the second phase computes a single key pair for one user.
==== Parameter generation ====
Choose an approved cryptographic hash function H with output length |H| bits. In the original DSS, H was always SHA-1, but the stronger SHA-2 hash functions are approved for use in the current DSS. If |H| is greater than the modulus length N, only the leftmost N bits of the hash output are used.
Choose a key length L. The original DSS constrained L to be a multiple of 64 between 512 and 1024 inclusive. NIST 800-57 recommends lengths of 2048 (or 3072) for keys with security lifetimes extending beyond 2010 (or 2030).
Choose the modulus length N such that N < L and N ≤ |H|. FIPS 186-4 specifies L and N to have one of the values (1024, 160), (2048, 224), (2048, 256), or (3072, 256).
Choose an N-bit prime q.
Choose an L-bit prime p such that p − 1 is a multiple of q.
Choose an integer h randomly from {2 … p − 2}.
Compute g := h^{(p−1)/q} mod p. In the rare case that g = 1, try again with a different h. Commonly h = 2 is used. This modular exponentiation can be computed efficiently even if the values are large.
The algorithm parameters are (p, q, g). These may be shared between different users of the system.
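The parameter-generation steps can be illustrated with deliberately tiny numbers (real deployments use the bit lengths above; the primes below exist only to make the arithmetic visible).

```python
# Toy DSA parameter generation. q = 101 and p = 607 are primes with
# p - 1 = 6q, so q divides p - 1 as required.
q = 101
p = 607
assert (p - 1) % q == 0

h = 2                        # commonly h = 2 is used
g = pow(h, (p - 1) // q, p)  # g = h^((p-1)/q) mod p
assert g != 1                # otherwise retry with a different h

# g now generates the order-q subgroup of the multiplicative group mod p:
assert pow(g, q, p) == 1
```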
==== Per-user keys ====
Given a set of parameters, the second phase computes the key pair for a single user:
Choose an integer x randomly from {1 … q − 1}.
Compute y := g^x mod p.
x is the private key and y is the public key.
=== 2. Key distribution ===
The signer should publish the public key y. That is, they should send the key to the receiver via a reliable, but not necessarily secret, mechanism. The signer should keep the private key x secret.
=== 3. Signing ===
A message m is signed as follows:
Choose an integer k randomly from {1 … q − 1}.
Compute r := (g^k mod p) mod q. In the unlikely case that r = 0, start again with a different random k.
Compute s := (k^{−1}(H(m) + xr)) mod q. In the unlikely case that s = 0, start again with a different random k.
The signature is (r, s).
The calculation of k and r amounts to creating a new per-message key. The modular exponentiation in computing r is the most computationally expensive part of the signing operation, but it may be computed before the message is known. Calculating the modular inverse k^{−1} mod q is the second most expensive part, and it may also be computed before the message is known. It may be computed using the extended Euclidean algorithm or, using Fermat's little theorem, as k^{q−2} mod q.
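The Fermat-based inverse looks like this with a toy modulus (Python 3.8+ can also compute the inverse directly as pow(k, -1, q)):

```python
q = 101                    # a small prime modulus, for illustration only
k = 57
k_inv = pow(k, q - 2, q)   # Fermat: k^(q-2) is the inverse of k mod prime q
assert (k * k_inv) % q == 1
```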
=== 4. Signature Verification ===
One can verify that a signature (r, s) is a valid signature for a message m as follows:
Verify that 0 < r < q and 0 < s < q.
Compute w := s^{−1} mod q.
Compute u1 := H(m) · w mod q.
Compute u2 := r · w mod q.
Compute v := (g^{u1} y^{u2} mod p) mod q.
The signature is valid if and only if v = r.
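The sign-then-verify round trip can be sketched end to end with toy parameters p = 607, q = 101 (illustration only: the hash is reduced mod q here, whereas the standard takes the leftmost N bits of the digest, and a secure random k would be used in practice; pow(k, -1, q) needs Python 3.8+).

```python
import hashlib

p, q = 607, 101
g = pow(2, (p - 1) // q, p)   # = 64, generator of the order-q subgroup
x = 57                        # private key (toy value)
y = pow(g, x, p)              # public key

def H(m):
    # Toy hash-to-integer: reduce the SHA-256 digest mod q.
    return int.from_bytes(hashlib.sha256(m).digest(), "big") % q

def sign(m, k):
    r = pow(g, k, p) % q
    s = (pow(k, -1, q) * (H(m) + x * r)) % q
    return (r, s) if r and s else None       # None: retry with another k

def verify(m, r, s):
    if not (0 < r < q and 0 < s < q):
        return False
    w = pow(s, -1, q)
    u1, u2 = (H(m) * w) % q, (r * w) % q
    v = (pow(g, u1, p) * pow(y, u2, p) % p) % q
    return v == r

msg = b"example"
k, sig = 2, None
while sig is None:            # skip the rare k giving r = 0 or s = 0
    sig = sign(msg, k)
    k += 1
r, s = sig
assert verify(msg, r, s)
```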
== Correctness of the algorithm ==
The signature scheme is correct in the sense that the verifier will always accept genuine signatures. This can be shown as follows:
First, since g = h^{(p−1)/q} mod p, it follows that g^q ≡ h^{p−1} ≡ 1 (mod p) by Fermat's little theorem. Since g ≠ 1 (which parameter generation ensures) and q is prime, g must have order q.
The signer computes s = k^{−1}(H(m) + xr) mod q.
Thus k ≡ H(m)s^{−1} + xrs^{−1} ≡ H(m)w + xrw (mod q).
Since g has order q, we have g^k ≡ g^{H(m)w} g^{xrw} ≡ g^{H(m)w} y^{rw} ≡ g^{u1} y^{u2} (mod p).
Finally, the correctness of DSA follows from r = (g^k mod p) mod q = (g^{u1} y^{u2} mod p) mod q = v.
== Sensitivity ==
With DSA, the entropy, secrecy, and uniqueness of the random signature value k are critical: violating any one of those three requirements can reveal the entire private key to an attacker. Using the same value twice (even while keeping k secret), using a predictable value, or leaking even a few bits of k in each of several signatures is enough to reveal the private key x.
This issue affects both DSA and the Elliptic Curve Digital Signature Algorithm (ECDSA). In December 2010, the group fail0verflow announced the recovery of the ECDSA private key used by Sony to sign software for the PlayStation 3 game console. The attack was made possible because Sony failed to generate a new random k for each signature.
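The recovery is simple modular arithmetic: subtracting the two s equations eliminates x and yields k, and then x falls out of either equation. The sketch below demonstrates this with toy parameters (small primes q = 101, p = 607, generator g = 64, and stand-in hash values; illustration only).

```python
p, q, g = 607, 101, 64
x = 57                                  # the secret to be recovered
y = pow(g, x, p)

k = 10                                  # the SAME k used for both signatures
h1, h2 = 11, 77                         # stand-ins for H(m1), H(m2) mod q

r = pow(g, k, p) % q                    # identical r exposes the reuse
s1 = (pow(k, -1, q) * (h1 + x * r)) % q
s2 = (pow(k, -1, q) * (h2 + x * r)) % q

# s1 - s2 = k^{-1}(h1 - h2) mod q, so first k and then x fall out:
k_rec = ((h1 - h2) * pow(s1 - s2, -1, q)) % q
x_rec = ((s1 * k_rec - h1) * pow(r, -1, q)) % q
assert (k_rec, x_rec) == (k, x)
```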
This issue can be prevented by deriving k deterministically from the private key and the message hash, as described by RFC 6979. This ensures that k is different for each H(m) and unpredictable for attackers who do not know the private key x.
In addition, malicious implementations of DSA and ECDSA can be created where k is chosen in order to subliminally leak information via signatures. For example, an offline private key could be leaked from a perfect offline device that only released innocent-looking signatures.
== Implementations ==
Below is a list of cryptographic libraries that provide support for DSA:
Botan
Bouncy Castle
cryptlib
Crypto++
libgcrypt
Nettle
OpenSSL
wolfCrypt
GnuTLS
== See also ==
Modular arithmetic
RSA (cryptosystem)
ECDSA
== References ==
== External links ==
FIPS PUB 186-4: Digital Signature Standard (DSS), the fourth (and current) revision of the official DSA specification.
Recommendation for Key Management -- Part 1: General, NIST Special Publication 800-57, pp. 62–63
Source: Wikipedia/Digital_Signature_Algorithm
In cryptography, a nonce is an arbitrary number that can be used just once in a cryptographic communication. It is often a random or pseudo-random number issued in an authentication protocol to ensure that each communication session is unique, and therefore that old communications cannot be reused in replay attacks. Nonces can also be useful as initialization vectors and in cryptographic hash functions.
== Definition ==
A nonce is an arbitrary number used only once in a cryptographic communication, in the spirit of a nonce word. They are often random or pseudo-random numbers. Many nonces also include a timestamp to ensure exact timeliness, though this requires clock synchronisation between organisations. The addition of a client nonce ("cnonce") helps to improve the security in some ways as implemented in digest access authentication. To ensure that a nonce is used only once, it should be time-variant (including a suitably fine-grained timestamp in its value), or generated with enough random bits to ensure an insignificantly low chance of repeating a previously generated value. Some authors define pseudo-randomness (or unpredictability) as a requirement for a nonce.
Nonce is a word dating back to Middle English for something only used once or temporarily (often with the construction "for the nonce"). It descends from the construction "then anes" ("the one [purpose]"). A false etymology claiming it to stand for "number used once" or similar is incorrect.
== Usage ==
=== Authentication ===
Authentication protocols may use nonces to ensure that old communications cannot be reused in replay attacks. For instance, nonces are used in HTTP digest access authentication to calculate an MD5 digest of the password. The nonces are different each time the 401 authentication challenge response code is presented, thus making replay attacks virtually impossible. The scenario of ordering products over the Internet can provide an example of the usefulness of nonces in replay attacks. An attacker could take the encrypted information and—without needing to decrypt—could continue to send a particular order to the supplier, thereby ordering products over and over again under the same name and purchase information. The nonce is used to give 'originality' to a given message so that if the company receives any other orders from the same person with the same nonce, it will discard those as invalid orders.
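A minimal sketch of this server-side bookkeeping is shown below (illustrative only; real protocols such as HTTP digest access authentication also bind the nonce into the authenticated response and expire it after a timeout).

```python
import secrets

issued, used = set(), set()

def issue_nonce():
    n = secrets.token_hex(16)      # 128 random bits: repeats are negligible
    issued.add(n)
    return n

def accept(order, nonce):
    """Process an order only if its nonce was issued and never seen before."""
    if nonce not in issued or nonce in used:
        return False               # unknown or replayed nonce: reject
    used.add(nonce)
    return True                    # process `order` exactly once

n = issue_nonce()
assert accept("1x widget", n) is True
assert accept("1x widget", n) is False   # the replayed order is discarded
```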
A nonce may be used to ensure security for a stream cipher. Where the same key is used for more than one message, a different nonce is used to ensure that the keystream is different for different messages encrypted with that key; often the message number is used.
Secret nonce values are used by the Lamport signature scheme as a signer-side secret which can be selectively revealed for comparison to public hashes for signature creation and verification.
=== Initialization vectors ===
Initialization vectors may be referred to as nonces, as they are typically random or pseudo-random.
=== Hashing ===
Nonces are used in proof-of-work systems to vary the input to a cryptographic hash function so as to obtain a hash for a certain input that fulfils certain arbitrary conditions. In doing so, it becomes far more difficult to create a "desirable" hash than to verify it, shifting the burden of work onto one side of a transaction or system. For example, proof of work, using hash functions, was considered as a means to combat email spam by forcing email senders to find a hash value for the email (which included a timestamp to prevent pre-computation of useful hashes for later use) that had an arbitrary number of leading zeroes, by hashing the same input with a large number of values until a "desirable" hash was obtained.
Similarly, the Bitcoin blockchain hashing algorithm can be tuned to an arbitrary difficulty by changing the required minimum/maximum value of the hash so that the number of bitcoins awarded for new blocks does not increase linearly with increased network computation power as new users join. This is likewise achieved by forcing Bitcoin miners to add nonce values to the value being hashed to change the hash algorithm output. As cryptographic hash algorithms cannot easily be predicted based on their inputs, this makes the act of blockchain hashing and the possibility of being awarded bitcoins something of a lottery, where the first "miner" to find a nonce that delivers a desirable hash is awarded bitcoins.
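A simplified nonce-grinding loop illustrates the idea (leading zero hex digits stand in for Bitcoin's numeric difficulty target):

```python
import hashlib

def grind(data: bytes, difficulty: int) -> int:
    """Find a nonce such that SHA-256(data || nonce) starts with
    `difficulty` zero hex digits."""
    nonce = 0
    while True:
        digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

nonce = grind(b"block header", 2)
digest = hashlib.sha256(b"block header" + str(nonce).encode()).hexdigest()
assert digest.startswith("00")
```

Finding the nonce takes on average 16^difficulty attempts, while checking a claimed nonce takes a single hash, which is the asymmetry proof-of-work relies on.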
== See also ==
Key stretching
Salt (cryptography)
Nonce word
== References ==
== External links ==
RFC 2617 – HTTP Authentication: Basic and Digest Access Authentication
RFC 3540 – Robust Explicit Congestion Notification (ECN) Signaling with Nonces
RFC 4418 – UMAC: Message Authentication Code using Universal Hashing
Web Services Security | Wikipedia/Cryptographic_nonce |
In cryptography, GMR is a digital signature algorithm named after its inventors Shafi Goldwasser, Silvio Micali and Ron Rivest.
As with RSA, the security of the system is related to the difficulty of factoring very large numbers. But, in contrast to RSA, GMR is secure against adaptive chosen-message attacks, which is the currently accepted security definition for signature schemes: even when an attacker receives signatures for messages of their choice, this does not allow them to forge a signature for a single additional message.
== External links ==
Goldwasser, Shafi; Micali, Silvio; Rivest, Ronald L. (April 1988). "A Digital Signature Scheme Secure Against Adaptive Chosen-Message Attacks" (PDF). SIAM Journal on Computing. 17 (2): 281–308. doi:10.1137/0217017. S2CID 1715998. Retrieved 26 October 2022. | Wikipedia/GMR_(cryptography) |
The Password Hashing Competition was an open competition announced in 2013 to select one or more password hash functions that can be recognized as a recommended standard. It was modeled after the successful Advanced Encryption Standard process and NIST hash function competition, but directly organized by cryptographers and security practitioners. On 20 July 2015, Argon2 was selected as the final PHC winner, with special recognition given to four other password hashing schemes: Catena, Lyra2, yescrypt and Makwa.
One goal of the Password Hashing Competition was to raise awareness of the need for strong password hash algorithms, hopefully avoiding a repeat of previous password breaches involving weak or no hashing, such as the ones involving
RockYou (2009),
JIRA,
Gawker (2010),
PlayStation Network outage,
Battlefield Heroes (2011),
eHarmony,
LinkedIn,
Adobe,
ASUS,
South Carolina Department of Revenue (2012),
Evernote,
Ubuntu Forums (2013),
etc.
The organizers were in contact with NIST, expecting an impact on its recommendations.
== See also ==
crypt (C)
Password hashing
List of computer science awards
CAESAR Competition
== References ==
== External links ==
The Password Hashing Competition web site
Source code and descriptions of the first round submissions
PHC string format | Wikipedia/Makwa_(cryptography) |
In cryptography, a brute-force attack or exhaustive key search is a cryptanalytic attack that consists of an attacker submitting many possible keys or passwords with the hope of eventually guessing correctly. This strategy can theoretically be used to break any form of encryption that is not information-theoretically secure. However, in a properly designed cryptosystem the chance of successfully guessing the key is negligible.
When cracking passwords, this method is very fast when used to check all short passwords, but for longer passwords other methods such as the dictionary attack are used because a brute-force search takes too long. Longer passwords, passphrases and keys have more possible values, making them exponentially more difficult to crack than shorter ones due to diversity of characters.
Brute-force attacks can be made less effective by obfuscating the data to be encoded making it more difficult for an attacker to recognize when the code has been cracked or by making the attacker do more work to test each guess. One of the measures of the strength of an encryption system is how long it would theoretically take an attacker to mount a successful brute-force attack against it.
Brute-force attacks are an application of brute-force search, the general problem-solving technique of enumerating all candidates and checking each one. The word 'hammering' is sometimes used to describe a brute-force attack, with 'anti-hammering' for countermeasures.
== Basic concept ==
Brute-force attacks work by calculating every possible combination that could make up a password and testing it to see if it is the correct password. As the password's length increases, the amount of time, on average, to find the correct password increases exponentially.
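A minimal sketch of that exhaustive enumeration (toy alphabet and target; real attacks use optimized tooling rather than pure Python):

```python
from itertools import product

def brute_force(target: str, alphabet: str, max_len: int):
    """Enumerate every candidate password in order of length;
    return the match and the number of guesses it took."""
    guesses = 0
    for length in range(1, max_len + 1):
        for candidate in product(alphabet, repeat=length):
            guesses += 1
            if "".join(candidate) == target:
                return "".join(candidate), guesses
    return None, guesses

# A 4-symbol alphabet gives 4**L candidates of length L, so the
# worst case grows exponentially: 4 + 16 + 64 + 256 for max_len=4.
found, tries = brute_force("dba", "abcd", max_len=4)
```

Each extra character multiplies the search space by the alphabet size, which is the exponential growth the paragraph above refers to.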
== Theoretical limits ==
The resources required for a brute-force attack grow exponentially with increasing key size, not linearly. Although U.S. export regulations historically restricted key lengths to 56-bit symmetric keys (e.g. Data Encryption Standard), these restrictions are no longer in place, so modern symmetric algorithms typically use computationally stronger 128- to 256-bit keys.
There is a physical argument that a 128-bit symmetric key is computationally secure against brute-force attack. The Landauer limit implied by the laws of physics sets a lower limit on the energy required to perform a computation of kT · ln 2 per bit erased in a computation, where T is the temperature of the computing device in kelvins, k is the Boltzmann constant, and the natural logarithm of 2 is about 0.693 (0.6931471805599453). No irreversible computing device can use less energy than this, even in principle. Thus, simply flipping through the possible values for a 128-bit symmetric key (ignoring doing the actual computing to check it) would, theoretically, require 2^128 − 1 bit flips on a conventional processor. If it is assumed that the calculation occurs near room temperature (≈300 K), the Von Neumann–Landauer limit can be applied to estimate the energy required as ≈10^18 joules, which is equivalent to consuming 30 gigawatts of power for one year. This is equal to 30×10^9 W × 365×24×3600 s = 9.46×10^17 J or 262.7 TWh (about 0.1% of the yearly world energy production). The full actual computation – checking each key to see if a solution has been found – would consume many times this amount. Furthermore, this is simply the energy requirement for cycling through the key space; the actual time it takes to flip each bit is not considered, which is certainly greater than 0 (see Bremermann's limit).
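The energy estimate can be reproduced with a few lines of arithmetic, using the constants stated above (the 300 K temperature is the room-temperature assumption):

```python
import math

k = 1.380649e-23             # Boltzmann constant, J/K
T = 300.0                    # assumed room temperature, kelvins
e_bit = k * T * math.log(2)  # Landauer limit per bit operation, ~2.87e-21 J

total = (2**128 - 1) * e_bit            # energy just to cycle a 128-bit counter
year_joules = 30e9 * 365 * 24 * 3600    # a 30 GW supply running for one year

# total comes out on the order of 1e18 J, matching the 30-GW-for-a-year figure.
```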
However, this argument assumes that the register values are changed using conventional set and clear operations, which inevitably generate entropy. It has been shown that computational hardware can be designed not to encounter this theoretical obstruction (see reversible computing), though no such computers are known to have been constructed.
As commercial successors of governmental ASIC solutions have become available, also known as custom hardware attacks, two emerging technologies have proven their capability in the brute-force attack of certain ciphers. One is modern graphics processing unit (GPU) technology, the other is field-programmable gate array (FPGA) technology. GPUs benefit from their wide availability and price-performance benefit, FPGAs from their energy efficiency per cryptographic operation. Both technologies try to transport the benefits of parallel processing to brute-force attacks. In the case of GPUs some hundreds, in the case of FPGAs some thousand processing units are available, making them much better suited to cracking passwords than conventional processors. For instance, in 2022, 8 Nvidia RTX 4090 GPUs were linked together to test password strength using the software Hashcat, with results showing that 200 billion eight-character NTLM password combinations could be cycled through in 48 minutes.
Various publications in the fields of cryptographic analysis have proved the energy efficiency of today's FPGA technology, for example, the COPACOBANA FPGA Cluster computer consumes the same energy as a single PC (600 W), but performs like 2,500 PCs for certain algorithms. A number of firms provide hardware-based FPGA cryptographic analysis solutions from a single FPGA PCI Express card up to dedicated FPGA computers. WPA and WPA2 encryption have successfully been brute-force attacked by reducing the workload by a factor of 50 in comparison to conventional CPUs and some hundred in case of FPGAs.
Advanced Encryption Standard (AES) permits the use of 256-bit keys. Breaking a symmetric 256-bit key by brute force requires 2^128 times more computational power than a 128-bit key. One of the fastest supercomputers in 2019 had a speed of 100 petaFLOPS, which could theoretically check 100 trillion (10^14) AES keys per second (assuming 1000 operations per check), but would still require 3.67×10^55 years to exhaust the 256-bit key space.
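The quoted figure can be checked directly (the 1000-operations-per-check assumption is the one stated above):

```python
checks_per_second = 100e15 / 1000   # 100 petaFLOPS at 1000 operations per key check
seconds = 2**256 / checks_per_second
years = seconds / (365 * 24 * 3600)
# years works out to roughly 3.67e55, matching the figure quoted above.
```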
An underlying assumption of a brute-force attack is that the complete key space was used to generate keys, something that relies on an effective random number generator, and that there are no defects in the algorithm or its implementation. For example, a number of systems that were originally thought to be impossible to crack by brute-force have nevertheless been cracked because the key space to search through was found to be much smaller than originally thought, because of a lack of entropy in their pseudorandom number generators. These include Netscape's implementation of Secure Sockets Layer (SSL) (cracked by Ian Goldberg and David Wagner in 1995) and a Debian/Ubuntu edition of OpenSSL discovered in 2008 to be flawed. A similar lack of implemented entropy led to the breaking of Enigma's code.
== Credential recycling ==
Credential recycling is the hacking practice of re-using username and password combinations gathered in previous brute-force attacks. A special form of credential recycling is pass the hash, where unsalted hashed credentials are stolen and re-used without first being brute-forced.
== Unbreakable codes ==
Certain types of encryption, by their mathematical properties, cannot be defeated by brute-force. An example of this is one-time pad cryptography, where every cleartext bit has a corresponding key from a truly random sequence of key bits. A 140 character one-time-pad-encoded string subjected to a brute-force attack would eventually reveal every 140 character string possible, including the correct answer – but of all the answers given, there would be no way of knowing which was the correct one. Defeating such a system, as was done by the Venona project, generally relies not on pure cryptography, but upon mistakes in its implementation, such as the key pads not being truly random, intercepted keypads, or operators making mistakes.
== Countermeasures ==
In case of an offline attack where the attacker has gained access to the encrypted material, one can try key combinations without the risk of discovery or interference. In case of online attacks, database and directory administrators can deploy countermeasures such as limiting the number of attempts that a password can be tried, introducing time delays between successive attempts, increasing the answer's complexity (e.g., requiring a CAPTCHA answer or employing multi-factor authentication), and/or locking accounts out after unsuccessful login attempts. Website administrators may prevent a particular IP address from trying more than a predetermined number of password attempts against any account on the site. Additionally, the MITRE D3FEND framework provides structured recommendations for defending against brute-force attacks by implementing strategies such as network traffic filtering, deploying decoy credentials, and invalidating authentication caches.
== Reverse brute-force attack ==
In a reverse brute-force attack (also called password spraying), a single (usually common) password is tested against multiple usernames or encrypted files. The process may be repeated for a select few passwords. In such a strategy, the attacker is not targeting a specific user.
== See also ==
Bitcoin mining
Cryptographic key length
Distributed.net
Hail Mary Cloud
Key derivation function
MD5CRK
Metasploit Express
Side-channel attack
TWINKLE and TWIRL
Unicity distance
RSA Factoring Challenge
Secure Shell
== Notes ==
== References ==
== External links ==
RSA-sponsored DES-III cracking contest
Demonstration of a brute-force device designed to guess the passcode of locked iPhones running iOS 10.3.3
How We Cracked the Code Book Ciphers – Essay by the winning team of the challenge in The Code Book | Wikipedia/Brute-force_attack |
Algebraic Eraser (AE) is an anonymous key agreement protocol that allows two parties, each having an AE public–private key pair, to establish a shared secret over an insecure channel. This shared secret may be directly used as a key, or to derive another key that can then be used to encrypt subsequent communications using a symmetric key cipher. Algebraic Eraser was developed by Iris Anshel, Michael Anshel, Dorian Goldfeld and Stephane Lemieux. SecureRF owns patents covering the protocol and unsuccessfully attempted (as of July 2019) to standardize the protocol as part of ISO/IEC 29167-20, a standard for securing radio-frequency identification devices and wireless sensor networks.
== Keyset parameters ==
Before two parties can establish a key they must first agree on a set of parameters, called the keyset parameters. These parameters comprise:
{\displaystyle N}, the number of strands in the braid,
{\displaystyle q}, the size of the finite field {\displaystyle \mathbb {F} _{q}},
{\displaystyle M_{*}}, the initial N×N seed matrix over {\displaystyle \mathbb {F} _{q}},
{\displaystyle \mathrm {T} }, a set of {\displaystyle N} elements in the finite field {\displaystyle \mathbb {F} _{q}} (also called the T-values), and
{\displaystyle A,B}, a set of conjugates in the braid group designed to commute with each other.
== E-multiplication ==
The fundamental operation of the Algebraic Eraser is a one-way function called E-multiplication. Given a matrix, a permutation, an Artin generator {\displaystyle \beta } in the braid group, and T-values, one applies E-multiplication by converting the generator to a colored Burau matrix and braid permutation, {\displaystyle (CB(\beta ),\sigma _{\beta })}, applying the permutation and T-values, and then multiplying the matrices and permutations. The output of E-multiplication is itself a matrix and permutation pair: {\displaystyle (M',\sigma ')=(M,\sigma _{0})*(CB(\beta ),\sigma _{\beta })}.
== Key establishment protocol ==
The following example illustrates how to make a key establishment. Suppose Alice wants to establish a shared key with Bob, but the only channel available may be eavesdropped by a third party. Initially, Alice and Bob must agree on the keyset parameters they will use.
Each party must have a key pair derived from the keyset, consisting of a private key (e.g., in the case of Alice) {\displaystyle (m_{A},\mathrm {B} _{a})}, where {\displaystyle m_{A}} is a randomly selected polynomial of the seed matrix, {\displaystyle m_{A}=\sum _{i=0}^{N-1}a_{i}M_{*}^{i}}, and a braid, which is a randomly selected set of conjugates and inverses chosen from the keyset parameters (A for Alice and B for Bob), where (for Alice) {\displaystyle \mathrm {B} _{a}=\prod _{i=1}^{k}A_{i}^{\pm 1}}.
From their private key material Alice and Bob each compute their public key {\displaystyle (Pub_{A},\sigma _{a})} and {\displaystyle (Pub_{B},\sigma _{b})}, where, e.g., {\displaystyle (Pub_{A},\sigma _{a})=(m_{A},id)*\mathrm {B} _{a}}, that is, the result of E-multiplication of the private matrix and the identity permutation with the private braid.
Each party must know the other party's public key prior to execution of the protocol.
To compute the shared secret, Alice computes {\displaystyle (S_{ab},\sigma _{ab})=(m_{A}Pub_{B},\sigma _{b})*\mathrm {B} _{a}} and Bob computes {\displaystyle (S_{ba},\sigma _{ba})=(m_{B}Pub_{A},\sigma _{a})*\mathrm {B} _{b}}. The shared secret is the matrix/permutation pair {\displaystyle (S_{ab},\sigma _{ab})}, which equals {\displaystyle (S_{ba},\sigma _{ba})}. The shared secrets are equal because the conjugate sets {\displaystyle A} and {\displaystyle B} are chosen to commute and both Alice and Bob use the same seed matrix {\displaystyle M_{*}} and T-values {\displaystyle \mathrm {T} }.
The only information about her private key that Alice initially exposes is her public key. So, no party other than Alice can determine Alice's private key, unless that party can solve the Braid Group Simultaneous Conjugacy Separation Search problem. Bob's private key is similarly secure. No party other than Alice or Bob can compute the shared secret, unless that party can solve the Diffie–Hellman problem.
The public keys are either static (and trusted, say via a certificate) or ephemeral. Ephemeral keys are temporary and not necessarily authenticated, so if authentication is desired, authenticity assurances must be obtained by other means. Authentication is necessary to avoid man-in-the-middle attacks. If one of Alice's or Bob's public keys is static, then man-in-the-middle attacks are thwarted. Static public keys provide neither forward secrecy nor key-compromise impersonation resilience, among other advanced security properties. Holders of static private keys should validate the other public key, and should apply a secure key derivation function to the raw Diffie–Hellman shared secret to avoid leaking information about the static private key.
== Security ==
The security of AE is based on the Generalized Simultaneous Conjugacy Search Problem (GSCSP) within the braid group. This is a distinct and different hard problem than the Conjugacy Search Problem (CSP), which has been the central hard problem in what is called braid group cryptography. Even if CSP is uniformly broken (which has not been done to date), it is not known how this would facilitate a break of GSCSP.
== Known attacks ==
The first attack, by Kalka, Teicher and Tsaban, shows a class of weak keys when {\displaystyle M_{*}} or {\displaystyle m_{A}} are chosen randomly. The authors of Algebraic Eraser followed up with a preprint on how to choose parameters that aren't prone to the attack. Ben-Zvi, Blackburn, and Tsaban improved the first attack into one the authors claim can break the publicized security parameters (claimed to provide 128-bit security) using less than 8 CPU hours and less than 64 MB of memory. Anshel, Atkins and Goldfeld responded to this attack in January 2016.
A second attack by Myasnikov and Ushakov, published as a preprint, shows that conjugates chosen with a too-short conjugator braid can be separated, breaking the system. This attack was refuted by Gunnells, by showing that properly sized conjugator braids cannot be separated.
In 2016, Simon R. Blackburn and Matthew J. B. Robshaw published a range of practical attacks against the January 2016 draft of the ISO/IEC 29167-20 over-the-air protocol, including impersonation of a target tag with a negligible amount of time and memory, and full private key recovery requiring 2^49 time and 2^48 memory. Atkins and Goldfeld responded that adding a hash or message authentication code to the draft protocol defeats these attacks.
== See also ==
Anshel–Anshel–Goldfeld key exchange
Group-based cryptography
Non-commutative cryptography
== Notes ==
== References ==
== External links ==
SecureRF home page | Wikipedia/Algebraic_Eraser |
In computer science and cryptography, Whirlpool (sometimes styled WHIRLPOOL) is a cryptographic hash function. It was designed by Vincent Rijmen (co-creator of the Advanced Encryption Standard) and Paulo S. L. M. Barreto, who first described it in 2000.
The hash has been recommended by the NESSIE project. It has also been adopted by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) as part of the joint ISO/IEC 10118-3 international standard.
== Design features ==
Whirlpool is a hash designed after the Square block cipher, and is considered to be in that family of block cipher functions.
Whirlpool is a Miyaguchi-Preneel construction based on a substantially modified Advanced Encryption Standard (AES).
Whirlpool takes a message of any length less than 2^256 bits and returns a 512-bit message digest.
The authors have declared that
"WHIRLPOOL is not (and will never be) patented. It may be used free of charge for any purpose."
=== Version changes ===
The original Whirlpool will be called Whirlpool-0, the first revision of Whirlpool will be called Whirlpool-T and the latest version will be called Whirlpool in the following test vectors.
In the first revision in 2001, the S-box was changed from a randomly generated one with good cryptographic properties to one which has better cryptographic properties and is easier to implement in hardware.
In the second revision (2003), a flaw in the diffusion matrix was found that lowered the estimated security of the algorithm below its potential. Changing the 8x8 rotating matrix constants from (1, 1, 3, 1, 5, 8, 9, 5) to (1, 1, 4, 1, 8, 5, 2, 9) solved this issue.
== Internal structure ==
The Whirlpool hash function is a Merkle–Damgård construction based on an AES-like block cipher W in Miyaguchi–Preneel mode.
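The Miyaguchi–Preneel chaining rule is H_i = E(H_{i-1}, m_i) ⊕ m_i ⊕ H_{i-1}, where the previous chaining value serves as the cipher key. A sketch of just the chaining, with a stand-in keyed function in place of Whirlpool's real 512-bit cipher W:

```python
import hashlib

def toy_e(key: bytes, block: bytes) -> bytes:
    """Stand-in for the 512-bit block cipher W (hypothetical: NOT
    Whirlpool's real cipher, just some keyed 64-byte primitive)."""
    return hashlib.sha512(key + block).digest()

def miyaguchi_preneel(blocks, iv: bytes) -> bytes:
    """Chain 64-byte message blocks in Miyaguchi-Preneel mode:
    H_i = E(H_{i-1}, m_i) XOR m_i XOR H_{i-1}."""
    h = iv
    for m in blocks:
        e = toy_e(h, m)                   # previous chaining value acts as the key
        h = bytes(a ^ b ^ c for a, b, c in zip(e, m, h))
    return h

digest = miyaguchi_preneel([b"a" * 64, b"b" * 64], bytes(64))
```

Feeding both the message block and the old chaining value back into the output is what makes the compression function hard to invert even if the underlying cipher were known to the attacker.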
The block cipher W consists of an 8×8 state matrix {\displaystyle S} of bytes, for a total of 512 bits.
The encryption process consists of updating the state with four round functions over 10 rounds. The four round functions are SubBytes (SB), ShiftColumns (SC), MixRows (MR) and AddRoundKey (AK). During each round the new state is computed as {\displaystyle S=AK\circ MR\circ SC\circ SB(S)}.
=== SubBytes ===
The SubBytes operation applies a non-linear permutation (the S-box) to each byte of the state independently. The 8-bit S-box is composed of 3 smaller 4-bit S-boxes.
=== ShiftColumns ===
The ShiftColumns operation cyclically shifts each byte in each column of the state. Column j has its bytes shifted downwards by j positions.
=== MixRows ===
The MixRows operation is a right-multiplication of each row by an 8×8 matrix over {\displaystyle GF(2^{8})}. The matrix is chosen such that the branch number (an important property when looking at resistance to differential cryptanalysis) is 9, which is maximal.
=== AddRoundKey ===
The AddRoundKey operation uses bitwise xor to add a key calculated by the key schedule to the current state. The key schedule is identical to the encryption itself, except the AddRoundKey function is replaced by an AddRoundConstant function that adds a predetermined constant in each round.
== Whirlpool hashes ==
The Whirlpool algorithm has undergone two revisions since its original 2000 specification.
People incorporating Whirlpool will most likely use the most recent revision of Whirlpool; while there are no known security weaknesses in earlier versions of Whirlpool, the most recent revision has better hardware implementation efficiency characteristics, and is also likely to be more secure. As mentioned earlier, it is also the version adopted in the ISO/IEC 10118-3 international standard.
The 512-bit (64-byte) Whirlpool hashes (also termed message digests) are typically represented as 128-digit hexadecimal numbers.
The following demonstrates a 43-byte ASCII input (not including quotes) and the corresponding Whirlpool hashes:
== Implementations ==
The authors provide reference implementations of the Whirlpool algorithm, including a version written in C and a version written in Java. These reference implementations have been released into the public domain.
Research on the security analysis of the Whirlpool function however, has revealed that on average, the introduction of 8 random faults is sufficient to compromise the 512-bit Whirlpool hash message being processed and the secret key of HMAC-Whirlpool within the context of Cloud of Things (CoTs). This emphasizes the need for increased security measures in its implementation.
== Adoption ==
Two of the first widely used mainstream cryptographic programs that started using Whirlpool were FreeOTFE, followed by TrueCrypt in 2005.
VeraCrypt (a fork of TrueCrypt) included Whirlpool (the final version) as one of its supported hash algorithms.
== See also ==
Digital timestamping
== References ==
== External links ==
The WHIRLPOOL Hash Function at the Wayback Machine (archived 2017-11-29)
Jacksum on SourceForge, a Java implementation of all three revisions of Whirlpool
whirlpool on GitHub – An open source Go implementation of the latest revision of Whirlpool
A Matlab Implementation of the Whirlpool Hashing Function
RHash, an open source command-line tool, which can calculate and verify Whirlpool hash.
Perl Whirlpool module at CPAN
Digest module implementing the Whirlpool hashing algorithm in Ruby
Ironclad a Common Lisp cryptography package containing a Whirlpool implementation
The ISO/IEC 10118-3:2004 standard
Test vectors for the Whirlpool hash from the NESSIE project
Managed C# implementation
Python Whirlpool module | Wikipedia/Whirlpool_(hash_function) |
The GOST hash function, defined in the standards GOST R 34.11-94 and GOST 34.311-95 is a 256-bit cryptographic hash function. It was initially defined in the Russian national standard GOST R 34.11-94 Information Technology – Cryptographic Information Security – Hash Function. The equivalent standard used by other member-states of the CIS is GOST 34.311-95.
This function must not be confused with a different Streebog hash function, which is defined in the new revision of the standard GOST R 34.11-2012.
The GOST hash function is based on the GOST block cipher.
== Algorithm ==
GOST processes a variable-length message into a fixed-length output of 256 bits. The input message is broken up into 256-bit blocks (eight 32-bit little-endian integers); the final block is padded by appending as many zeros as are required to bring its length up to 256 bits. The hash additionally incorporates a 256-bit integer arithmetic sum of all previously hashed blocks and then a 256-bit integer representing the length of the original message, in bits.
=== Basic notation ===
The algorithm description uses the following notation:
{\displaystyle \{0\}^{j}} — j-bit block filled with zeroes.
{\displaystyle |M|} — length of the block M in bits modulo 2^256.
{\displaystyle \|} — concatenation of two blocks.
{\displaystyle +} — arithmetic sum of two blocks modulo 2^256.
{\displaystyle \oplus } — logical xor of two blocks.
Further we consider that the low-order bit is located at the left of a block, and the high-order bit at the right.
=== Description ===
The input message {\displaystyle M} is split into 256-bit blocks {\displaystyle m_{n},\,m_{n-1},\,m_{n-2},\,\ldots ,\,m_{1}}. In case the last block {\displaystyle m_{n}} contains fewer than 256 bits, it is padded on the left with zero bits to achieve the desired length.
Each block is processed by the step hash function {\displaystyle H_{\text{out}}=f(H_{\text{in}},\,m)}, where {\displaystyle H_{\text{out}}}, {\displaystyle H_{\text{in}}} and {\displaystyle m} are 256-bit blocks.
Each message block, starting with the first one, is processed by the step hash function {\displaystyle f} to calculate an intermediate hash value: {\displaystyle H_{i+1}=f(H_{i},\,m_{i})}. The value {\displaystyle H_{1}} can be arbitrarily chosen, and is usually {\displaystyle 0^{256}}.
After {\displaystyle H_{n+1}} is calculated, the final hash value is obtained in the following way:
{\displaystyle H_{n+2}=f(H_{n+1},\,L)}, where L is the length of the message M in bits modulo {\displaystyle 2^{256}};
{\displaystyle h=f(H_{n+2},\,K)}, where K is the 256-bit control sum of M: {\displaystyle K=m_{1}+m_{2}+m_{3}+\ldots +m_{n}}.
Then {\displaystyle h} is the desired value of the hash function of the message M.
So, the algorithm works as follows.
Initialization:
{\displaystyle h:={\text{initial}}} — initial 256-bit value of the hash function, determined by the user.
{\displaystyle \Sigma :=0} — control sum.
{\displaystyle L:=0} — message length.
Compression function of internal iterations: for i = 1 … n − 1 do the following (while {\displaystyle |M|>256}):
{\displaystyle h:=f(h,\,m_{i})} — apply the step hash function.
{\displaystyle L:=L+256} — recalculate the message length.
{\displaystyle \Sigma :=\Sigma +m_{i}} — calculate the control sum.
Compression function of the final iteration:
{\displaystyle L:=L+|m_{n}|} — calculate the full message length in bits.
{\displaystyle m_{n}:=\{0\}^{256-|m_{n}|}\,\|\,m_{n}} — pad the last message block with zeroes.
{\displaystyle \Sigma :=\Sigma +m_{n}} — update the control sum.
{\displaystyle h:=f(h,\,m_{n})} — process the last message block.
{\displaystyle h:=f(h,\,L)} — MD-strengthen by hashing the message length.
{\displaystyle h:=f(h,\,\Sigma )} — hash the control sum.
The output value is {\displaystyle h}.
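The outer structure above (iterate f over the blocks, then hash the bit length and the modular checksum) can be sketched with the step function left abstract. The placeholder f passed in below is NOT the real GOST compression function, and the little-endian block interpretation is an assumption for illustration:

```python
MOD = 2**256

def gost_outer(message: bytes, f) -> int:
    """Sketch of the GOST R 34.11-94 outer loop: iterate the step
    function f over 256-bit blocks, then hash the bit length L and
    the modular control sum Sigma. The caller supplies f."""
    h, sigma = 0, 0
    length = 8 * len(message)                     # |M| in bits
    blocks = [message[i:i + 32] for i in range(0, len(message), 32)] or [b""]
    for blk in blocks:
        m = int.from_bytes(blk.rjust(32, b"\x00"), "little")  # zero-pad last block
        h = f(h, m)                               # step hash function
        sigma = (sigma + m) % MOD                 # control sum
    h = f(h, length % MOD)                        # MD-strengthening with the length
    h = f(h, sigma)                               # hash the control sum
    return h

# Placeholder step function, only to exercise the loop (NOT the real f):
step = lambda h, m: (h * 31 + m + 1) % MOD
digest = gost_outer(b"message digest", step)
```

Plugging in the real step hash function f described below would turn this skeleton into the full algorithm.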
=== Step hash function ===
The step hash function {\displaystyle f} maps two 256-bit blocks into one: {\displaystyle H_{\text{out}}=f(H_{\text{in}},\,m)}.
It consists of three parts:
Generation of the keys {\displaystyle K_{1},\,K_{2},\,K_{3},\,K_{4}}.
Enciphering transformation of {\displaystyle H_{\text{in}}} using the keys {\displaystyle K_{1},\,K_{2},\,K_{3},\,K_{4}}.
Shuffle transformation.
==== Key generation ====
The key generation algorithm uses:
Two transformations of 256-bit blocks:
Transformation {\displaystyle A(Y)=A(y_{4}\,\|\,y_{3}\,\|\,y_{2}\,\|\,y_{1})=(y_{1}\oplus y_{2})\,\|\,y_{4}\,\|\,y_{3}\,\|\,y_{2}}, where {\displaystyle y_{1},\,y_{2},\,y_{3},\,y_{4}} are 64-bit sub-blocks of Y.
Transformation {\displaystyle P(Y)=P(y_{32}\|y_{31}\|\dots \|y_{1})=y_{\varphi (32)}\|y_{\varphi (31)}\|\dots \|y_{\varphi (1)}}, where {\displaystyle \varphi (i+1+4(k-1))=8i+k,\quad i=0,\,\dots ,\,3,\quad k=1,\,\dots ,\,8}, and {\displaystyle y_{32},\,y_{31},\,\dots ,\,y_{1}} are 8-bit sub-blocks of Y.
Three constants:
C2 = 0
C3 = 0xff00ffff000000ffff0000ff00ffff0000ff00ff00ff00ffff00ff00ff00ff00
C4 = 0
The algorithm:
{\displaystyle U:=H_{\text{in}},\quad V:=m,\quad W:=U\oplus V,\quad K_{1}=P(W)}
For j = 2, 3, 4 do the following:
{\displaystyle U:=A(U)\oplus C_{j},\quad V:=A(A(V)),\quad W:=U\oplus V,\quad K_{j}=P(W)}
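A sketch of the two transformations, treating a 256-bit block as a Python integer with sub-block y1 in the least-significant position (that positional convention is an assumption of this sketch):

```python
def transform_a(y: int) -> int:
    """A(Y): split the 256-bit block into 64-bit words y4..y1 and
    output (y1 XOR y2) || y4 || y3 || y2."""
    mask = (1 << 64) - 1
    y1, y2, y3, y4 = y & mask, (y >> 64) & mask, (y >> 128) & mask, (y >> 192) & mask
    return ((y1 ^ y2) << 192) | (y4 << 128) | (y3 << 64) | y2

def transform_p(y: int) -> int:
    """P(Y): permute the 32 bytes of the block so that byte number
    phi(i + 1 + 4*(k-1)) = 8*i + k lands at position i + 1 + 4*(k-1)."""
    src = [(y >> (8 * j)) & 0xFF for j in range(32)]   # src[j] = y_{j+1}
    out = 0
    for i in range(4):
        for k in range(1, 9):
            # byte y_{8i+k} goes to (1-based) position i + 1 + 4*(k-1)
            out |= src[8 * i + k - 1] << (8 * (i + 4 * (k - 1)))
    return out
```

Both functions only rearrange and xor sub-blocks, so each is a cheap bijection on 256-bit values, which is why the key schedule can apply them repeatedly.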
==== Enciphering transformation ====
After the keys generation, the enciphering of
H
in
{\displaystyle H_{\text{in}}}
is done using GOST 28147-89 in the mode of simple substitution on keys
K
1
,
K
2
,
K
3
,
K
4
{\displaystyle K_{1},\,K_{2},\,K_{3},\,K_{4}}
.
Let's denote the enciphering transformation as E (enciphering 64-bit data using a 256-bit key). For enciphering, H_in is split into four 64-bit blocks: H_in = h4 ∥ h3 ∥ h2 ∥ h1, and each of these blocks is enciphered as:
s1 = E(h1, K1)
s2 = E(h2, K2)
s3 = E(h3, K3)
s4 = E(h4, K4)
After this, the result blocks are concatenated into one 256-bit block:
S = s4 ∥ s3 ∥ s2 ∥ s1.
==== Shuffle transformation ====
In the last step, the shuffle transformation is applied to H_in, S and m using a linear-feedback shift register. As a result, the intermediate hash value H_out is obtained.
First we define the ψ function, which performs one LFSR step on a 256-bit block:
ψ(Y) = ψ(y16 ∥ y15 ∥ … ∥ y2 ∥ y1) = (y1 ⊕ y2 ⊕ y3 ⊕ y4 ⊕ y13 ⊕ y16) ∥ y16 ∥ y15 ∥ … ∥ y3 ∥ y2,
where y16, y15, …, y2, y1 are 16-bit sub-blocks of Y.
The shuffle transformation is H_out = ψ^61(H_in ⊕ ψ(m ⊕ ψ^12(S))), where ψ^i denotes the i-th power of the ψ function.
=== Initial values ===
There are two commonly used sets of initial parameters for GOST R 34.11-94.
The starting vector for both sets is H1 = 0x00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000.
The GOST R 34.11-94 standard itself doesn't specify the algorithm's initial value H1 or the S-box of the enciphering transformation E, but uses the following "test parameters" in its samples sections.
==== "Test parameters" S-box ====
RFC 5831 specifies only these parameters, but RFC 4357 names them as "test parameters" and does not recommend them for use in production applications.
==== CryptoPro S-box ====
The CryptoPro S-box comes from the "production ready" parameter set developed by the CryptoPro company; it is also specified as part of RFC 4357, section 11.2.
== Cryptanalysis ==
In 2008, an attack was published that breaks the full-round GOST hash function. The paper presents a collision attack in 2^105 time, and first and second preimage attacks in 2^192 time (2^n time refers to the approximate number of times the algorithm was computed in the attack).
== GOST hash test vectors ==
=== Hashes for "test parameters" ===
The 256-bit (32-byte) GOST hashes are typically represented as 64-digit hexadecimal numbers.
Here are test vectors for the GOST hash with "test parameters":
GOST("The quick brown fox jumps over the lazy dog") =
77b7fa410c9ac58a25f49bca7d0468c9296529315eaca76bd1a10f376d1f4294
Even a small change in the message will, with overwhelming probability, result in a completely different hash due to the avalanche effect. For example, changing d to c:
GOST("The quick brown fox jumps over the lazy cog") =
a3ebc4daaab78b0be131dab5737a7f67e602670d543521319150d2e14eeec445
Two samples coming from the GOST R 34.11-94 standard:
GOST("This is message, length=32 bytes") =
b1c466d37519b82e8319819ff32595e047a28cb6f83eff1c6916a815a637fffa
GOST("Suppose the original message has length = 50 bytes") =
471aba57a60a770d3a76130635c1fbea4ef14de51f78b4ae57dd893b62f55208
More test vectors:
GOST("") =
ce85b99cc46752fffee35cab9a7b0278abb4c2d2055cff685af4912c49490f8d
GOST("a") =
d42c539e367c66e9c88a801f6649349c21871b4344c6a573f849fdce62f314dd
GOST("message digest") =
ad4434ecb18f2c99b60cbe59ec3d2469582b65273f48de72db2fde16a4889a4d
GOST( 128 characters of 'U' ) =
53a3a3ed25180cef0c1d85a074273e551c25660a87062a52d926a9e8fe5733a4
GOST( 1000000 characters of 'a' ) =
5c00ccc2734cdd3332d3d4749576e3c1a7dbaf0e7ea74e9fa602413c90a129fa
=== Hashes for CryptoPro parameters ===
The GOST algorithm with the CryptoPro S-box generates a different set of hash values.
GOST("") = 981e5f3ca30c841487830f84fb433e13ac1101569b9c13584ac483234cd656c0
GOST("a") = e74c52dd282183bf37af0079c9f78055715a103f17e3133ceff1aacf2f403011
GOST("abc") = b285056dbf18d7392d7677369524dd14747459ed8143997e163b2986f92fd42c
GOST("message digest") =
bc6041dd2aa401ebfa6e9886734174febdb4729aa972d60f549ac39b29721ba0
GOST("The quick brown fox jumps over the lazy dog") =
9004294a361a508c586fe53d1f1b02746765e71b765472786e4770d565830a76
GOST("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789") =
73b70a39497de53a6e08c67b6d4db853540f03e9389299d9b0156ef7e85d0f61
GOST("12345678901234567890123456789012345678901234567890123456789012345678901234567890") =
6bc7b38989b28cf93ae8842bf9d752905910a7528a61e5bce0782de43e610c90
GOST("This is message, length=32 bytes") =
2cefc2f7b7bdc514e18ea57fa74ff357e7fa17d652c75f69cb1be7893ede48eb
GOST("Suppose the original message has length = 50 bytes") =
c3730c5cbccacf915ac292676f21e8bd4ef75331d9405e5f1a61dc3130a65011
GOST(128 of "U") = 1c4ac7614691bbf427fa2316216be8f10d92edfd37cd1027514c1008f649c4e8
GOST(1000000 of "a") = 8693287aa62f9478f7cb312ec0866b6c4e4a0f11160441e8f4ffcd2715dd554f
== See also ==
Kupyna
Hash function security summary
GOST standards
List of hash functions
== References ==
=== Further reading ===
Dolmatov, V. (March 2010). Dolmatov, V (ed.). "GOST R 34.11-94: Hash Function Algorithm". IETF. doi:10.17487/RFC5831. {{cite journal}}: Cite journal requires |journal= (help)
"Information technology. Cryptographic data security. Hashing function". 2010-02-20. The full text of the GOST R 34.11-94 standard (in Russian).
== External links ==
C implementation and test vectors for GOST hash function from Markku-Juhani Saarinen, also contains draft translations into English of the GOST 28147-89 and GOST R 34.11-94 standards. Bugfixed version, see [1].
C++ implementation with STL streams.
RHash, an open source command-line tool, which can calculate and verify GOST hash (supports both parameter sets).
Implementation of the GOST R 34.11-94 in JavaScript (CryptoPro parameters)
The GOST Hash Function Ecrypt page
Online GOST Calculator Archived 2014-11-06 at the Wayback Machine | Wikipedia/GOST_(hash_function) |
Post-Quantum Cryptography Standardization is a program and competition by NIST to update their standards to include post-quantum cryptography. It was announced at PQCrypto 2016. 23 signature schemes and 59 encryption/KEM schemes were submitted by the initial submission deadline at the end of 2017, of which 69 total were deemed complete and proper and participated in the first round. Seven of these, of which 3 are signature schemes, have advanced to the third round, which was announced on July 22, 2020.
On August 13, 2024, NIST released final versions of the first three Post Quantum Crypto Standards: FIPS 203, FIPS 204, and FIPS 205.
== Background ==
Academic research on the potential impact of quantum computing dates back to at least 2001. A NIST published report from April 2016 cites experts that acknowledge the possibility of quantum technology to render the commonly used RSA algorithm insecure by 2030. As a result, a need to standardize quantum-secure cryptographic primitives was pursued. Since most symmetric primitives are relatively easy to modify in a way that makes them quantum resistant, efforts have focused on public-key cryptography, namely digital signatures and key encapsulation mechanisms. In December 2016 NIST initiated a standardization process by announcing a call for proposals.
The competition is now in its third round out of an expected four, where in each round some algorithms are discarded and others are studied more closely. NIST hopes to publish the standardization documents by 2024, but may speed up the process if major breakthroughs in quantum computing are made.
It is currently undecided whether the future standards will be published as FIPS or as NIST Special Publication (SP).
== Round one ==
Under consideration were:
(strikethrough means it had been withdrawn)
=== Round one submissions published attacks ===
Guess Again by Lorenz Panny
RVB by Lorenz Panny
RaCoSS by Daniel J. Bernstein, Andreas Hülsing, Tanja Lange and Lorenz Panny
HK17 by Daniel J. Bernstein and Tanja Lange
SRTPI by Bo-Yin Yang
WalnutDSA
by Ward Beullens and Simon R. Blackburn
by Matvei Kotov, Anton Menshov and Alexander Ushakov
DRS by Yang Yu and Léo Ducas
DAGS by Elise Barelli and Alain Couvreur
Edon-K by Matthieu Lequesne and Jean-Pierre Tillich
RLCE by Alain Couvreur, Matthieu Lequesne, and Jean-Pierre Tillich
Hila5 by Daniel J. Bernstein, Leon Groot Bruinderink, Tanja Lange and Lorenz Panny
Giophantus by Ward Beullens, Wouter Castryck and Frederik Vercauteren
RankSign by Thomas Debris-Alazard and Jean-Pierre Tillich
McNie by Philippe Gaborit; Terry Shue Chien Lau and Chik How Tan
== Round two ==
Candidates moving on to the second round were announced on January 30, 2019. They are:
== Round three ==
On July 22, 2020, NIST announced seven finalists ("first track"), as well as eight alternate algorithms ("second track"). The first track contains the algorithms which appear to have the most promise, and will be considered for standardization at the end of the third round. Algorithms in the second track could still become part of the standard, after the third round ends. NIST expects some of the alternate candidates to be considered in a fourth round. NIST also suggests it may re-open the signature category for new schemes proposals in the future.
On June 7–9, 2021, NIST conducted the third PQC standardization conference, virtually. The conference included candidates' updates and discussions on implementations, on performances, and on security issues of the candidates. A small amount of focus was spent on intellectual property concerns.
=== Finalists ===
=== Alternate candidates ===
=== Intellectual property concerns ===
After NIST's announcement regarding the finalists and the alternate candidates, various intellectual property concerns were voiced, notably surrounding lattice-based schemes such as Kyber and NewHope. NIST holds signed statements from submitting groups clearing any legal claims, but there is still a concern that third parties could raise claims. NIST claims that they will take such considerations into account while picking the winning algorithms.
=== Round three submissions published attacks ===
Rainbow: by Ward Beullens on a classical computer
=== Adaptations ===
During this round, some candidates were shown to be vulnerable to certain attack vectors, forcing them to adapt accordingly:
CRYSTALS-Kyber and SABER
may change the nested hashes used in their proposals in order for their security claims to hold.
FALCON
a side-channel attack using electromagnetic measurements can extract the secret signing keys. Masking may be added in order to resist the attack; this adaptation affects performance and should be considered whilst standardizing.
=== Selected Algorithms 2022 ===
On July 5, 2022, NIST announced the first group of winners from its six-year competition.
== Round four ==
On July 5, 2022, NIST announced four candidates for PQC Standardization Round 4.
=== Round four submissions published attacks ===
SIKE: by Wouter Castryck and Thomas Decru on a classical computer
=== Selected Algorithm 2025 ===
On March 11, 2025, NIST announced the selection of a backup algorithm for KEM.
== First release ==
On August 13, 2024, NIST released final versions of its first three Post Quantum Crypto Standards. According to the release announcement:
While there have been no substantive changes made to the standards since the draft versions, NIST has changed the algorithms’ names to specify the versions that appear in the three finalized standards, which are:
Federal Information Processing Standard (FIPS) 203, intended as the primary standard for general encryption. Among its advantages are comparatively small encryption keys that two parties can exchange easily, as well as its speed of operation. The standard is based on the CRYSTALS-Kyber algorithm, which has been renamed ML-KEM, short for Module-Lattice-Based Key-Encapsulation Mechanism.
FIPS 204, intended as the primary standard for protecting digital signatures. The standard uses the CRYSTALS-Dilithium algorithm, which has been renamed ML-DSA, short for Module-Lattice-Based Digital Signature Algorithm.
FIPS 205, also designed for digital signatures. The standard employs the Sphincs+ algorithm, which has been renamed SLH-DSA, short for Stateless Hash-Based Digital Signature Algorithm. The standard is based on a different math approach than ML-DSA, and it is intended as a backup method in case ML-DSA proves vulnerable.
Similarly, when the draft FIPS 206 standard built around FALCON is released, the algorithm will be dubbed FN-DSA, short for FFT (fast-Fourier transform) over NTRU-Lattice-Based Digital Signature Algorithm.
On March 11, 2025, NIST released HQC as the fifth algorithm for post-quantum asymmetric encryption as used for key encapsulation / exchange. The new algorithm serves as a backup for ML-KEM, the main algorithm for general encryption. HQC is based on different math than ML-KEM, mitigating the risk should a weakness be found in ML-KEM. The draft standard incorporating the HQC algorithm is expected in early 2026, with the final version in 2027.
== Additional Digital Signature Schemes ==
=== Round One ===
NIST received 50 submissions and deemed 40 to be complete and proper according to the submission requirements. Under consideration are:
(strikethrough means it has been withdrawn)
=== Round one submissions published attacks ===
3WISE by Daniel Smith-Tone
EagleSign by Mehdi Tibouchi
KAZ-SIGN by Daniel J. Bernstein; Scott Fluhrer
Xifrat1-Sign.I by Lorenz Panny
eMLE-Sig 2.0 by Mehdi Tibouchi (implementation by Lorenz Panny)
HPPC by Ward Beullens; Pierre Briaud, Maxime Bros, and Ray Perlner
ALTEQ by Markku-Juhani O. Saarinen (implementation only?)
Biscuit by Charles Bouillaguet
MEDS by Markku-Juhani O. Saarinen and Ward Beullens (implementation only)
FuLeeca by Felicitas Hörmann and Wessel van Woerden
LESS by the LESS team (implementation only)
DME-Sign by Markku-Juhani O. Saarinen (implementation only?); Pierre Briaud, Maxime Bros, Ray Perlner, and Daniel Smith-Tone
EHTv3 by Eamonn Postlethwaite and Wessel van Woerden; Keegan Ryan and Adam Suhl
Enhanced pqsigRM by Thomas Debris-Alazard, Pierre Loisel and Valentin Vasseur; Pierre Briaud, Maxime Bros, Ray Perlner and Daniel Smith-Tone
HAETAE by Markku-Juhani O. Saarinen (implementation only?)
HuFu by Markku-Juhani O. Saarinen
SDitH by Kevin Carrier and Jean-Pierre Tillich; Kevin Carrier, Valérian Hatey, and Jean-Pierre Tillich
VOX by Hiroki Furue and Yasuhiko Ikematsu
AIMer by Fukang Liu, Mohammad Mahzoun, Morten Øygarden, Willi Meier
SNOVA by Yasuhiko Ikematsu and Rika Akiyama
PROV by Ludovic Perret, and River Moreira Ferreira (implementation only)
=== Round Two ===
NIST deemed 14 submissions to pass to the second round.
== See also ==
Advanced Encryption Standard process
CAESAR Competition – Competition to design authenticated encryption schemes
Lattice-based cryptography
NIST hash function competition
== References ==
== External links ==
NIST's official Website on the standardization process
Post-quantum cryptography website by djb | Wikipedia/NIST_Post-Quantum_Cryptography_Standardization |
In cryptography, truncated differential cryptanalysis is a generalization of differential cryptanalysis, an attack against block ciphers. Lars Knudsen developed the technique in 1994. Whereas ordinary differential cryptanalysis analyzes the full difference between two texts, the truncated variant considers differences that are only partially determined. That is, the attack makes predictions of only some of the bits instead of the full block. This technique has been applied to SAFER, IDEA, Skipjack, E2, Twofish, Camellia, CRYPTON, and even the stream cipher Salsa20.
== References ==
Lars Knudsen (1994). Truncated and Higher Order Differentials (PDF/PostScript). 2nd International Workshop on Fast Software Encryption (FSE 1994). Leuven: Springer-Verlag. pp. 196–211. Retrieved 14 February 2007.
Lars Knudsen, Thomas Berson (1996). Truncated Differentials of SAFER (PDF/PostScript). 3rd International Workshop on Fast Software Encryption (FSE 1996). Cambridge: Springer-Verlag. pp. 15–26. Retrieved 27 February 2007.
Johan Borst, Lars R. Knudsen, Vincent Rijmen (May 1997). Two Attacks on Reduced IDEA. Advances in Cryptology – EUROCRYPT '97. Konstanz: Springer-Verlag. pp. 1–13. Archived from the original (gzipped PostScript) on 15 August 2000. Retrieved 8 March 2007.{{cite conference}}: CS1 maint: multiple names: authors list (link)
Lars Knudsen, M.J.B. Robshaw, David Wagner (1999). Truncated Differentials and Skipjack. Advances in Cryptology – CRYPTO '99. Santa Barbara, California: Springer-Verlag. pp. 165–180. Archived from the original (PostScript) on 28 September 2007. Retrieved 27 February 2007.{{cite conference}}: CS1 maint: multiple names: authors list (link)
M. Matsui, T. Tokita (1999). Cryptanalysis of a Reduced Version of the Block Cipher E2. 6th International Workshop on Fast Software Encryption (FSE 1999). Rome: Springer-Verlag. pp. 71–80. Archived from the original (PDF) on 2007-05-25. Retrieved 27 February 2007.
Shiho Moriai; Yiqun Lisa Yin (2000). "Cryptanalysis of Twofish (II)" (PDF). Retrieved 27 February 2007. {{cite journal}}: Cite journal requires |journal= (help)
Crowley, Paul (2006). "Truncated differential cryptanalysis of five rounds of Salsa20". Retrieved 27 February 2007. | Wikipedia/Truncated_differential_cryptanalysis |
In cryptography, the Tiny Encryption Algorithm (TEA) is a block cipher notable for its simplicity of description and implementation, typically a few lines of code. It was designed by David Wheeler and Roger Needham of the Cambridge Computer Laboratory; it was first presented at the Fast Software Encryption workshop in Leuven in 1994, and first published in the proceedings of that workshop.
The cipher is not subject to any patents.
== Properties ==
TEA operates on two 32-bit unsigned integers (which could be derived from a 64-bit data block) and uses a 128-bit key. It has a Feistel structure with a suggested 64 rounds, typically implemented in pairs termed cycles. It has an extremely simple key schedule, mixing all of the key material in exactly the same way for each cycle. Different multiples of a magic constant are used to prevent simple attacks based on the symmetry of the rounds. The magic constant, 2654435769 or 0x9E3779B9, is chosen to be ⌊2^32/𝜙⌋, where 𝜙 is the golden ratio (as a nothing-up-my-sleeve number).
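The cycle structure can be sketched in Python (an illustrative adaptation; the original reference code is in C and operates on unsigned 32-bit words):

```python
DELTA = 0x9E3779B9          # floor(2**32 / golden ratio)
MASK = 0xFFFFFFFF           # keep values in the unsigned 32-bit range

def tea_encrypt(v0, v1, k):
    """Encrypt one 64-bit block, given as two 32-bit halves, under key k[0..3].
    32 cycles, each cycle being a pair of Feistel rounds."""
    s = 0
    for _ in range(32):
        s = (s + DELTA) & MASK
        v0 = (v0 + ((((v1 << 4) & MASK) + k[0]) ^ (v1 + s) ^ ((v1 >> 5) + k[1]))) & MASK
        v1 = (v1 + ((((v0 << 4) & MASK) + k[2]) ^ (v0 + s) ^ ((v0 >> 5) + k[3]))) & MASK
    return v0, v1

def tea_decrypt(v0, v1, k):
    """Run the 32 cycles in reverse, starting from s = 32 * DELTA mod 2**32."""
    s = (DELTA * 32) & MASK  # 0xC6EF3720
    for _ in range(32):
        v1 = (v1 - ((((v0 << 4) & MASK) + k[2]) ^ (v0 + s) ^ ((v0 >> 5) + k[3]))) & MASK
        v0 = (v0 - ((((v1 << 4) & MASK) + k[0]) ^ (v1 + s) ^ ((v1 >> 5) + k[1]))) & MASK
        s = (s - DELTA) & MASK
    return v0, v1
```

Each round mixes one half into the other via a shifted copy, the round sum s, and two key words, which is why all four key words are consumed every cycle.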
TEA has a few weaknesses. Most notably, it suffers from equivalent keys—each key is equivalent to three others, which means that the effective key size is only 126 bits. As a result, TEA is especially bad as a cryptographic hash function. This weakness led to a method for hacking Microsoft's Xbox game console, where the cipher was used as a hash function. TEA is also susceptible to a related-key attack which requires 2^23 chosen plaintexts under a related-key pair, with 2^32 time complexity. Because of these weaknesses, the XTEA cipher was designed.
== Versions ==
The first published version of TEA was supplemented by a second version that incorporated extensions to make it more secure. Block TEA (which was specified along with XTEA) operates on arbitrary-size blocks in place of the 64-bit blocks of the original.
A third version (XXTEA), published in 1998, described further improvements for enhancing the security of the Block TEA algorithm.
== Reference code ==
Following is an adaptation of the reference encryption and decryption routines in C, released into the public domain by David Wheeler and Roger Needham:
Note that the reference implementation acts on multi-byte numeric values. The original paper does not specify how to derive the numbers it acts on from binary or other content.
== See also ==
RC4 – A stream cipher that, just like TEA, is designed to be very simple to implement.
XTEA – First version of Block TEA's successor.
XXTEA – Corrected Block TEA's successor.
Treyfer – A simple and compact encryption algorithm with 64-bit key size and block size.
== Notes ==
== References ==
Andem, Vikram Reddy (2003). "A Cryptanalysis of the Tiny Encryption Algorithm, Masters thesis" (PDF). Tuscaloosa: The University of Alabama.
Hernández, Julio César; Isasi, Pedro; Ribagorda, Arturo (2002). "An application of genetic algorithms to the cryptoanalysis of one round TEA". Proceedings of the 2002 Symposium on Artificial Intelligence and Its Application.
Hernández, Julio César; Sierra, José María; Isasi, Pedro; Ribargorda, Arturo (2003). "Finding efficient distinguishers for cryptographic mappings, with an application to the block cipher TEA". The 2003 Congress on Evolutionary Computation, 2003. CEC '03. Vol. 3. pp. 2189–2193. doi:10.1109/CEC.2003.1299943. hdl:10016/3944. ISBN 978-0-7803-7804-9. S2CID 62216777.
Hernández, Julio César; Sierra, José María; Ribagorda, Arturo; Ramos, Benjamín; Mex-Perera, J. C. (2001). "Distinguishing TEA from a Random Permutation: Reduced Round Versions of TEA do Not Have the SAC or do Not Generate Random Numbers". Cryptography and Coding (PDF). Lecture Notes in Computer Science. Vol. 2260. pp. 374–377. doi:10.1007/3-540-45325-3_34. ISBN 978-3-540-43026-1. Archived from the original (PDF) on 26 April 2012.
Moon, Dukjae; Hwang, Kyungdeok; Lee, Wonil; Lee, Sangjin; Lim, Jongin (2002). "Impossible Differential Cryptanalysis of Reduced Round XTEA and TEA". Fast Software Encryption (PDF). Lecture Notes in Computer Science. Vol. 2365. pp. 49–60. doi:10.1007/3-540-45661-9_4. ISBN 978-3-540-44009-3.
Hong, Seokhie; Hong, Deukjo; Ko, Youngdai; Chang, Donghoon; Lee, Wonil; Lee, Sangjin (2004). "Differential Cryptanalysis of TEA and XTEA". Information Security and Cryptology - ICISC 2003. Lecture Notes in Computer Science. Vol. 2971. pp. 402–417. doi:10.1007/978-3-540-24691-6_30. ISBN 978-3-540-21376-5.
== External links ==
Test vectors for TEA
JavaScript implementation of XXTEA with Base64 Archived 28 April 2006 at the Wayback Machine
PHP implementation of XTEA (German language)
JavaScript implementation of XXTEA
JavaScript and PHP implementations of XTEA (Dutch text)
AVR ASM implementation
SEA Scalable Encryption Algorithm for Small Embedded Applications (Standaert, Piret, Gershenfeld, Quisquater - July 2005 UCL Belgium & MIT USA) | Wikipedia/Tiny_Encryption_Algorithm |
Identity-based cryptography is a type of public-key cryptography in which a publicly known string representing an individual or organization is used as a public key. The public string could include an email address, domain name, or a physical IP address.
The first implementation of identity-based signatures and an email-address based public-key infrastructure (PKI) was developed by Adi Shamir in 1984, which allowed users to verify digital signatures using only public information such as the user's identifier. Under Shamir's scheme, a trusted third party would deliver the private key to the user after verification of the user's identity, with verification essentially the same as that required for issuing a certificate in a typical PKI.
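Shamir's construction is commonly presented along the following lines; the sketch below is an illustrative Python reconstruction with toy-sized RSA parameters (far too small for real use), not production code. Verification here uses only the master public values (n, e) and the identifier, matching the property described above.

```python
import hashlib

# Toy master RSA key held by the trusted third party (the PKG).
p, q = 1000003, 1000033            # illustrative small primes
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))  # master secret, known only to the PKG

def H(data: bytes) -> int:
    """Hash to an integer mod n."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def extract(identity: str) -> int:
    """PKG issues the user's private key g = H(ID)^d mod n after verifying ID."""
    return pow(H(identity.encode()), d, n)

def sign(g: int, message: bytes, r: int):
    """Sign with the identity-derived key g; r must be fresh and random per signature."""
    t = pow(r, e, n)
    c = H(t.to_bytes(8, "big") + message)
    return t, (g * pow(r, c, n)) % n

def verify(identity: str, message: bytes, t: int, s: int) -> bool:
    """Check s^e == H(ID) * t^c mod n using only public information."""
    c = H(t.to_bytes(8, "big") + message)
    return pow(s, e, n) == (H(identity.encode()) * pow(t, c, n)) % n
```

Correctness follows from s^e = g^e · r^(ec) = H(ID) · (r^e)^c = H(ID) · t^c mod n.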
Shamir similarly proposed identity-based encryption, which appeared particularly attractive since there was no need to acquire an identity's public key prior to encryption. However, he was unable to come up with a concrete solution, and identity-based encryption remained an open problem for many years. The first practical implementations were finally devised by Sakai in 2000, and Boneh and Franklin in 2001. These solutions were based on bilinear pairings. Also in 2001, a solution was developed independently by Clifford Cocks.
Closely related to various identity-based encryption schemes are identity-based key agreement schemes. One of the first identity-based key agreement algorithms was published in 1986, just two years after Shamir's identity-based signature. The author was E. Okamoto. Identity-based key agreement schemes also allow for "escrow free" identity-based cryptography. A notable example of such an escrow-free identity-based key agreement is McCullagh–Barreto's "Authenticated Key Agreement without Escrow", found in section 4 of their 2004 paper, "A New Two-Party Identity-Based Authenticated Key Agreement". A variant of this escrow-free key exchange is standardized as the identity-based key agreement in the Chinese identity-based standard SM9.
== Usage ==
Identity-based systems allow any party to generate a public key from a known identity value, such as an ASCII string. A trusted third party, called the private key generator (PKG), generates the corresponding private keys. To operate, the PKG first publishes a master public key, and retains the corresponding master private key (referred to as master key). Given the master public key, any party can compute a public key corresponding to the identity ID by combining the master public key with the identity value. To obtain a corresponding private key, the party authorized to use the identity ID contacts the PKG, which uses the master private key to generate the private key for the identity ID.
== Limitation ==
Identity-based systems have a characteristic problem in operation. Suppose Alice and Bob are users of such a system. Since the information needed to find Alice's public key is completely determined by Alice's ID and the master public key, it is not possible to revoke Alice's credentials and issue new credentials without either (a) changing Alice's ID (usually a phone number or an email address which will appear in a corporate directory); or (b) changing the master public key and re-issuing private keys to all users, including Bob.
This limitation may be overcome by including a time component (e.g. the current month) in the identity.
== See also ==
Identity-based encryption
Identity-based conditional proxy re-encryption
SM9 - Chinese National Identity Based Cryptography Standard
Sakai–Kasahara Identity Based Encryption
Boneh–Franklin
== References == | Wikipedia/Identity-based_cryptography |
Integrated Encryption Scheme (IES) is a hybrid encryption scheme which provides semantic security against an adversary who is able to use chosen-plaintext or chosen-ciphertext attacks. The security of the scheme is based on the computational Diffie–Hellman problem.
Two variants of IES are specified: Discrete Logarithm Integrated Encryption Scheme (DLIES) and Elliptic Curve Integrated Encryption Scheme (ECIES), which is also known as the Elliptic Curve Augmented Encryption Scheme or simply the Elliptic Curve Encryption Scheme. These two variants are identical up to the change of an underlying group.
== Informal description of DLIES ==
As a brief and informal overview of how IES works, we describe a Discrete Logarithm Integrated Encryption Scheme (DLIES), focusing on ease of understanding rather than precise technical details.
Alice learns Bob's public key g^x through a public key infrastructure or some other distribution method. Bob knows his own private key x.
Alice generates a fresh, ephemeral value y, and its associated public value g^y.
Alice then computes a symmetric key k using this information and a key derivation function (KDF) as follows: k = KDF(g^(xy))
Alice computes her ciphertext c from her actual message m (by symmetric encryption of m) encrypted with the key k (using an authenticated encryption scheme) as follows: c = E(k; m)
Alice transmits (in a single message) both the public ephemeral g^y and the ciphertext c.
Bob, knowing x and g^y, can now compute k = KDF(g^(xy)) and decrypt m from c.
Note that the scheme does not provide Bob with any assurance as to who really sent the message: This scheme does nothing to stop anyone from pretending to be Alice.
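The steps above can be sketched in Python. This is a toy illustration: the group (prime 2^127 − 1, generator 3) is far too small and not of the form used in practice, and the XOR-keystream-plus-HMAC construction is only a stand-in for a proper authenticated encryption scheme. All names here are illustrative.

```python
import hashlib, hmac, secrets

P = 2**127 - 1   # toy prime modulus (illustrative only)
G = 3            # toy generator

def kdf(shared: int) -> bytes:
    """Derive a 32-byte symmetric key from the shared group element."""
    return hashlib.sha256(shared.to_bytes(16, "big")).digest()

def encrypt(key: bytes, msg: bytes):
    """Stand-in authenticated encryption: XOR with a hash-derived keystream
    (messages up to 32 bytes), then an HMAC tag (encrypt-then-MAC)."""
    stream = hashlib.sha256(key + b"stream").digest()
    c = bytes(a ^ b for a, b in zip(msg, stream))
    return c, hmac.new(key, c, hashlib.sha256).digest()

def decrypt(key: bytes, c: bytes, tag: bytes) -> bytes:
    if not hmac.compare_digest(tag, hmac.new(key, c, hashlib.sha256).digest()):
        raise ValueError("bad tag")
    stream = hashlib.sha256(key + b"stream").digest()
    return bytes(a ^ b for a, b in zip(c, stream))

# Bob's long-term key pair: private x, public g^x.
x = secrets.randbelow(P - 2) + 1
bob_pub = pow(G, x, P)

# Alice: fresh ephemeral y, symmetric key k = KDF(g^(x*y)), then the ciphertext.
y = secrets.randbelow(P - 2) + 1
eph_pub = pow(G, y, P)
k = kdf(pow(bob_pub, y, P))
c, tag = encrypt(k, b"attack at dawn")

# Bob recomputes the same k from the ephemeral g^y and his private x.
k_bob = kdf(pow(eph_pub, x, P))
```

Both parties arrive at the same k because pow(bob_pub, y, P) and pow(eph_pub, x, P) are both g^(xy) mod P.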
== Formal description of ECIES ==
=== Required information ===
To send an encrypted message to Bob using ECIES, Alice needs the following information:
The cryptography suite to be used, including a key derivation function (e.g., ANSI-X9.63-KDF with SHA-1 option), a message authentication code system (e.g., HMAC-SHA-1-160 with 160-bit keys or HMAC-SHA-1-80 with 80-bit keys) and a symmetric encryption scheme (e.g., TDEA in CBC mode or XOR encryption scheme) — noted E.
The elliptic curve domain parameters: (p, a, b, G, n, h) for a curve over a prime field or (m, f(x), a, b, G, n, h) for a curve over a binary field.
Bob's public key K_B, which Bob generates as follows: K_B = k_B G, where k_B ∈ [1, n − 1] is the private key he chooses at random.
Some optional shared information: S1 and S2.
O denotes the point at infinity.
=== Encryption ===
To encrypt a message m, Alice does the following:
generates a random number r ∈ [1, n − 1] and calculates R = rG
derives a shared secret: S = P_x, where P = (P_x, P_y) = rK_B (and P ≠ O)
uses a KDF to derive symmetric encryption keys and MAC keys: k_E ∥ k_M = KDF(S ∥ S1)
encrypts the message: c = E(k_E; m)
computes the tag of the encrypted message and S2: d = MAC(k_M; c ∥ S2)
outputs R ∥ c ∥ d
=== Decryption ===
To decrypt the ciphertext R ∥ c ∥ d, Bob does the following:
derives the shared secret: S = P_x, where P = (P_x, P_y) = k_B R (it is the same as the one Alice derived because P = k_B R = k_B rG = rk_B G = rK_B), or outputs failed if P = O
derives keys the same way as Alice did: k_E ∥ k_M = KDF(S ∥ S1)
uses MAC to check the tag and outputs failed if d ≠ MAC(k_M; c ∥ S2)
uses the symmetric encryption scheme to decrypt the message: m = E⁻¹(k_E; c)
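The whole scheme can be exercised end to end on a toy curve. The sketch below uses the textbook curve y² = x³ + 2x + 2 over GF(17), with group order n = 19 and generator G = (5, 1); the single-byte KDF input, the XOR "encryption scheme" and the HMAC-SHA-256 MAC are illustrative stand-ins for the suite negotiated in practice, and real ECIES uses curves with roughly 256-bit order.

```python
import hashlib, hmac, secrets

p, a, n = 17, 2, 19      # toy field prime, curve parameter a, group order
G = (5, 1)               # generator; the point at infinity O is None here

def point_add(P1, P2):
    """Affine point addition on y^2 = x^3 + a*x + b over GF(p)."""
    if P1 is None: return P2
    if P2 is None: return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                   # P + (-P) = O
    if P1 == P2:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return x3, (lam * (x1 - x3) - y1) % p

def scalar_mul(k, P1):
    """Double-and-add scalar multiplication."""
    R = None
    while k:
        if k & 1:
            R = point_add(R, P1)
        P1 = point_add(P1, P1)
        k >>= 1
    return R

def kdf(S, S1=b""):
    out = hashlib.sha256(bytes([S]) + S1).digest()
    return out[:16], out[16:]                         # k_E || k_M

def xor_cipher(kE, data):
    """Stand-in symmetric scheme E (messages up to 32 bytes)."""
    stream = hashlib.sha256(kE).digest()
    return bytes(x ^ y for x, y in zip(data, stream))

def encrypt(KB, m, S2=b""):
    r = secrets.randbelow(n - 1) + 1
    R, P_shared = scalar_mul(r, G), scalar_mul(r, KB)
    # With prime order n and valid keys, P_shared is never O.
    kE, kM = kdf(P_shared[0])                         # S = x-coordinate of P
    c = xor_cipher(kE, m)
    d = hmac.new(kM, c + S2, hashlib.sha256).digest()
    return R, c, d

def decrypt(kB, R, c, d, S2=b""):
    P_shared = scalar_mul(kB, R)
    if P_shared is None:
        raise ValueError("failed")
    kE, kM = kdf(P_shared[0])
    if not hmac.compare_digest(d, hmac.new(kM, c + S2, hashlib.sha256).digest()):
        raise ValueError("failed")
    return xor_cipher(kE, c)
```

With kB as Bob's private key and KB = scalar_mul(kB, G) his public key, decrypt(kB, *encrypt(KB, m)) recovers m, and any tampering with c or d makes the MAC check fail.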
== References ==
SECG, Standards for efficient cryptography, SEC 1: Elliptic Curve Cryptography, Version 2.0, May 21, 2009.
Gayoso Martínez, Hernández Encinas, Sánchez Ávila: A Survey of the Elliptic Curve Integrated Encryption Scheme, Journal of Computer Science and Engineering, 2, 2 (2010), 7–13.
Ladar Levison: Code for using ECIES to protect data (ECC + AES + SHA), openssl-devel mailing list, August 6, 2010.
IEEE 1363a (non-public standard) specifies DLIES and ECIES
ANSI X9.63 (non-public standard)
ISO/IEC 18033-2 (non-public standard)
Victor Shoup, A proposal for an ISO standard for public key encryption, Version 2.1, December 20, 2001.
Abdalla, Michel and Bellare, Mihir and Rogaway, Phillip: DHIES: An Encryption Scheme Based on the Diffie–Hellman Problem, IACR Cryptology ePrint Archive, 1999. | Wikipedia/Integrated_Encryption_Scheme |
Fugue is a cryptographic hash function submitted by IBM to the NIST hash function competition. It was designed by Shai Halevi, William E. Hall, and Charanjit S. Jutla. Fugue takes an arbitrary-length message and compresses it down to a fixed bit-length (either 224, 256, 384 or 512 bits). The hash functions for the different output lengths are called Fugue-224, Fugue-256, Fugue-384 and Fugue-512. The authors also describe a parametrized version of Fugue. A weak version of Fugue-256 is also described using this parameterized version.
The selling point of Fugue is the authors' claimed proof that a wide range of current attack strategies based on differential cryptanalysis cannot be efficient against Fugue. It is also claimed to be competitive with the NIST hash function SHA-256 in both software and hardware efficiency, achieving up to 36.2 cycles per byte on an Intel Family 6 Model 15 Xeon 5150, and up to 25 cycles per byte on an Intel Core 2 processor T7700. On 45 nm Core2 processors, e.g. T9400, Fugue-256 runs at 16 cycles per byte using SSE4.1 instructions. On the newer Westmere architectures (32 nm), e.g. Core i5, Fugue-256 runs at 14 cycles/byte.
Fugue's design starts from the hash function Grindahl, and like Grindahl uses the S-box from AES, but it replaces the 4×4 column mixing matrix with a 16×16 "super-mix" operation which greatly improves diffusion. The "super-mix" operation is, however, only slightly more computationally expensive to implement than the AES mixing strategy.
== SuperMix ==
The 224- and 256-bit variants of Fugue work with a state that can be represented as a 4×30 matrix of unsigned bytes, whereas the 384- and 512-bit variants work with a 4×36 byte matrix. Operations can be performed in-place on this state.
The core of the algorithm, known as the "SuperMix transformation", takes a 4×4 matrix as input and returns a new 4×4 matrix. The input to SuperMix is simply the first four columns of the current 30-column state, and the output replaces this same state area (i.e., SuperMix affects only the 4×4 matrix at the head of the state).
The SuperMix function can be defined as:
$${\text{SuperMix}}(U)={\text{ROL}}\left(M\cdot U+{\begin{pmatrix}\sum _{j\neq 0}U_{j}^{i}&0&0&0\\0&\sum _{j\neq 1}U_{j}^{i}&0&0\\0&0&\sum _{j\neq 2}U_{j}^{i}&0\\0&0&0&\sum _{j\neq 3}U_{j}^{i}\end{pmatrix}}\cdot M^{T}\right)$$
where $M={\begin{pmatrix}1&4&7&1\\1&1&4&7\\7&1&1&4\\4&7&1&1\end{pmatrix}}$; $U$ is a 4×4 matrix of bytes (i.e., the matrix after the S-box substitution of the input); and $M^{T}$ is the transpose of $M$.
The transformation ${\text{ROL}}$ takes a 4×4 matrix and rotates the $i$-th row to the left by $i$ bytes, i.e. ${\text{ROL}}(W)_{j}^{i}=W_{j-i{\pmod {4}}}^{i}$
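Assuming row-major indexing (row first, column second), the row-rotation step described in the prose can be sketched as follows; this is only the rotation, not the GF(2^8) matrix arithmetic of SuperMix itself.

```python
def rol(w):
    # Rotate the i-th row of a 4x4 matrix left by i positions, following the
    # prose description of Fugue's ROL transformation: row 0 is unchanged,
    # row 1 shifts left by 1, and so on.
    return [[w[i][(j + i) % 4] for j in range(4)] for i in range(4)]
```

For example, the row `[10, 11, 12, 13]` at index 1 becomes `[11, 12, 13, 10]`.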
== Fugue 2.0 ==
Fugue 2.0 is a tweak of original Fugue, which runs at about twice the speed of Fugue for 256-bit output. The designers claim advanced proofs of resistance to differential collision attacks for this improved version.
A complete specification can be found at the link below.
== External links ==
The Hash Function Fugue | Wikipedia/Fugue_(hash_function) |
In theoretical computer science and cryptography, a trapdoor function is a function that is easy to compute in one direction, yet difficult to compute in the opposite direction (finding its inverse) without special information, called the "trapdoor". Trapdoor functions are a special case of one-way functions and are widely used in public-key cryptography.
In mathematical terms, if f is a trapdoor function, then there exists some secret information t, such that given f(x) and t, it is easy to compute x. Consider a padlock and its key. It is trivial to change the padlock from open to closed without using the key, by pushing the shackle into the lock mechanism. Opening the padlock easily, however, requires the key to be used. Here the key t is the trapdoor and the padlock is the trapdoor function.
An example of a simple mathematical trapdoor is "6895601 is the product of two prime numbers. What are those numbers?" A typical "brute-force" solution would be to try dividing 6895601 by many prime numbers until finding the answer. However, if one is told that 1931 is one of the numbers, one can find the answer by entering "6895601 ÷ 1931" into any calculator. This example is not a sturdy trapdoor function – modern computers can guess all of the possible answers within a second – but this sample problem could be improved by using the product of two much larger primes.
Trapdoor functions came to prominence in cryptography in the mid-1970s with the publication of asymmetric (or public-key) encryption techniques by Diffie, Hellman, and Merkle. Indeed, Diffie & Hellman (1976) coined the term. Several function classes had been proposed, and it soon became obvious that trapdoor functions are harder to find than was initially thought. For example, an early suggestion was to use schemes based on the subset sum problem. This turned out rather quickly to be unsuitable.
As of 2004, the best known trapdoor function (family) candidates are the RSA and Rabin families of functions. Both are written as exponentiation modulo a composite number, and both are related to the problem of prime factorization.
Functions related to the hardness of the discrete logarithm problem (either modulo a prime or in a group defined over an elliptic curve) are not known to be trapdoor functions, because there is no known "trapdoor" information about the group that enables the efficient computation of discrete logarithms.
A trapdoor in cryptography has the very specific aforementioned meaning and is not to be confused with a backdoor (these are frequently used interchangeably, which is incorrect). A backdoor is a deliberate mechanism that is added to a cryptographic algorithm (e.g., a key pair generation algorithm, digital signing algorithm, etc.) or operating system, for example, that permits one or more unauthorized parties to bypass or subvert the security of the system in some fashion.
== Definition ==
A trapdoor function is a collection of one-way functions { fk : Dk → Rk } (k ∈ K), in which all of K, Dk, Rk are subsets of binary strings {0, 1}*, satisfying the following conditions:
There exists a probabilistic polynomial time (PPT) sampling algorithm Gen s.t. Gen(1^n) = (k, tk) with k ∈ K ∩ {0, 1}^n and tk ∈ {0, 1}* satisfying | tk | < p(n), in which p is some polynomial. Each tk is called the trapdoor corresponding to k. Each trapdoor can be efficiently sampled.
Given input k, there also exists a PPT algorithm that outputs x ∈ Dk. That is, each Dk can be efficiently sampled.
For any k ∈ K, there exists a PPT algorithm that correctly computes fk.
For any k ∈ K, there exists a PPT algorithm A s.t. for any x ∈ Dk, let y = A ( k, fk(x), tk ), and then we have fk(y) = fk(x). That is, given trapdoor, it is easy to invert.
For any k ∈ K, without trapdoor tk, for any PPT algorithm, the probability to correctly invert fk (i.e., given fk(x), find a pre-image x' such that fk(x' ) = fk(x)) is negligible.
If each function in the collection above is a one-way permutation, then the collection is also called a trapdoor permutation.
== Examples ==
In the following two examples, we always assume that it is difficult to factorize a large composite number (see Integer factorization).
=== RSA assumption ===
In this example, the inverse $d$ of $e$ modulo $\phi(n)$ (Euler's totient function of $n$) is the trapdoor: $f(x)=x^{e}\bmod n.$
If the factorization of $n=pq$ is known, then $\phi(n)=(p-1)(q-1)$ can be computed. With this, the inverse $d$ of $e$ can be computed as $d=e^{-1}\bmod {\phi(n)}$, and then, given $y=f(x)$, we can find $x=y^{d}\bmod n=x^{ed}\bmod n=x\bmod n$.
Its hardness follows from the RSA assumption.
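A toy numeric example can illustrate the trapdoor; the parameters below are the classic textbook RSA numbers and are far too small to be secure.

```python
# Toy parameters (insecure; illustration only).
p, q = 61, 53
n = p * q                # 3233, public modulus
e = 17                   # public exponent
phi = (p - 1) * (q - 1)  # 3120, computable only with the factorization of n
d = pow(e, -1, phi)      # 2753, the trapdoor

x = 65
y = pow(x, e, n)           # easy direction: f(x) = x^e mod n
assert pow(y, d, n) == x   # with the trapdoor d, inversion is easy
```

Without knowing $p$ and $q$ (and hence $\phi(n)$ and $d$), recovering $x$ from $y$ is believed to be hard for properly sized moduli.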
=== Rabin's quadratic residue assumption ===
Let $n=pq$ be a large composite number, where $p$ and $q$ are large primes such that $p\equiv 3{\pmod {4}}$ and $q\equiv 3{\pmod {4}}$, kept confidential from the adversary. The problem is to compute $z$ given $a$ such that $a\equiv z^{2}{\pmod {n}}$. The trapdoor is the factorization of $n$. With the trapdoor, the solutions for $z$ can be given as $cx+dy,\ cx-dy,\ -cx+dy,\ -cx-dy$, where $a\equiv x^{2}{\pmod {p}}$, $a\equiv y^{2}{\pmod {q}}$, $c\equiv 1{\pmod {p}}$, $c\equiv 0{\pmod {q}}$, $d\equiv 0{\pmod {p}}$, $d\equiv 1{\pmod {q}}$. See the Chinese remainder theorem for more details. Note that, given the primes $p$ and $q$, we can find $x\equiv a^{\frac {p+1}{4}}{\pmod {p}}$ and $y\equiv a^{\frac {q+1}{4}}{\pmod {q}}$. Here the conditions $p\equiv 3{\pmod {4}}$ and $q\equiv 3{\pmod {4}}$ guarantee that the solutions $x$ and $y$ are well defined.
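The square-root recovery with the trapdoor can be sketched as follows, using toy primes (both ≡ 3 mod 4) chosen purely for illustration.

```python
def rabin_sqrt(a, p, q):
    # Recover the four square roots of a mod n = p*q, given the trapdoor (p, q),
    # where p ≡ q ≡ 3 (mod 4).
    n = p * q
    x = pow(a, (p + 1) // 4, p)   # square root of a mod p
    y = pow(a, (q + 1) // 4, q)   # square root of a mod q
    # CRT coefficients: c ≡ 1 (mod p), c ≡ 0 (mod q); d ≡ 0 (mod p), d ≡ 1 (mod q)
    c = q * pow(q, -1, p)
    d = p * pow(p, -1, q)
    return sorted({(s * x * c + t * y * d) % n for s in (1, -1) for t in (1, -1)})

p, q = 7, 11                    # toy primes, both ≡ 3 (mod 4); n = 77
roots = rabin_sqrt(23, p, q)    # 23 ≡ 10^2 (mod 77)
assert all(r * r % 77 == 23 for r in roots)
assert 10 in roots
```

Without the factorization, extracting square roots modulo $n$ is as hard as factoring $n$, which is what makes this a trapdoor function.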
== See also ==
One-way function
== Notes ==
== References ==
Diffie, W.; Hellman, M. (1976), "New directions in cryptography" (PDF), IEEE Transactions on Information Theory, 22 (6): 644–654, CiteSeerX 10.1.1.37.9720, doi:10.1109/TIT.1976.1055638
Pass, Rafael, A Course in Cryptography (PDF), retrieved 27 November 2015
Goldwasser, Shafi, Lecture Notes on Cryptography (PDF), retrieved 25 November 2015
Ostrovsky, Rafail, Foundations of Cryptography (PDF), retrieved 27 November 2015
Dodis, Yevgeniy, Introduction to Cryptography Lecture Notes (Fall 2008), retrieved 17 December 2015
Lindell, Yehuda, Foundations of Cryptography (PDF), retrieved 17 December 2015 | Wikipedia/Trapdoor_function |
In cryptography, the Elliptic Curve Digital Signature Algorithm (ECDSA) offers a variant of the Digital Signature Algorithm (DSA) which uses elliptic-curve cryptography.
== Key and signature sizes ==
As with elliptic-curve cryptography in general, the bit size of the private key believed to be needed for ECDSA is about twice the size of the security level, in bits. For example, at a security level of 80 bits—meaning an attacker requires a maximum of about $2^{80}$ operations to find the private key—the size of an ECDSA private key would be 160 bits. On the other hand, the signature size is the same for both DSA and ECDSA: approximately $4t$ bits, where $t$ is the exponent in the formula $2^{t}$, that is, about 320 bits for a security level of 80 bits, which is equivalent to $2^{80}$ operations.
== Signature generation algorithm ==
Suppose Alice wants to send a signed message to Bob. Initially, they must agree on the curve parameters $({\textrm{CURVE}},G,n)$. In addition to the field and equation of the curve, we need $G$, a base point of prime order on the curve; $n$ is the additive order of the point $G$.

The order $n$ of the base point $G$ must be prime. Indeed, we assume that every nonzero element of the ring $\mathbb{Z}/n\mathbb{Z}$ is invertible, so that $\mathbb{Z}/n\mathbb{Z}$ must be a field. It implies that $n$ must be prime (cf. Bézout's identity).

Alice creates a key pair, consisting of a private key integer $d_{A}$, randomly selected in the interval $[1,n-1]$; and a public key curve point $Q_{A}=d_{A}\times G$. We use $\times$ to denote elliptic curve point multiplication by a scalar.
For Alice to sign a message $m$, she follows these steps:

1. Calculate $e={\textrm{HASH}}(m)$. (Here HASH is a cryptographic hash function, such as SHA-2, with the output converted to an integer.)
2. Let $z$ be the $L_{n}$ leftmost bits of $e$, where $L_{n}$ is the bit length of the group order $n$. (Note that $z$ can be greater than $n$ but not longer.)
3. Select a cryptographically secure random integer $k$ from $[1,n-1]$.
4. Calculate the curve point $(x_{1},y_{1})=k\times G$.
5. Calculate $r=x_{1}\bmod n$. If $r=0$, go back to step 3.
6. Calculate $s=k^{-1}(z+rd_{A})\bmod n$. If $s=0$, go back to step 3.
7. The signature is the pair $(r,s)$. (And $(r,-s\bmod n)$ is also a valid signature.)
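The steps above can be sketched end-to-end on a tiny textbook curve ($y^{2}=x^{3}+2x+2$ over GF(17), with base point $G=(5,1)$ of prime order 19). This is an educational sketch only: the truncated hash $z$ is supplied directly as a small integer rather than computed from a message, and real ECDSA uses standardized curves such as P-256.

```python
# Toy curve y^2 = x^3 + 2x + 2 over GF(17); G = (5, 1) has prime order n = 19.
P_FIELD, A, G, N = 17, 2, (5, 1), 19

def add(Pt, Qt):
    # Elliptic-curve point addition (None represents the point at infinity O).
    if Pt is None: return Qt
    if Qt is None: return Pt
    (x1, y1), (x2, y2) = Pt, Qt
    if x1 == x2 and (y1 + y2) % P_FIELD == 0:
        return None
    lam = ((3 * x1 * x1 + A) * pow(2 * y1, -1, P_FIELD) if Pt == Qt
           else (y2 - y1) * pow(x2 - x1, -1, P_FIELD)) % P_FIELD
    x3 = (lam * lam - x1 - x2) % P_FIELD
    return (x3, (lam * (x1 - x3) - y1) % P_FIELD)

def mul(k, Pt):
    # Double-and-add scalar multiplication k × Pt.
    R = None
    while k:
        if k & 1:
            R = add(R, Pt)
        Pt = add(Pt, Pt)
        k >>= 1
    return R

def sign(d_a, z, k):
    # Steps 4-6 of signature generation (z and k supplied by the caller).
    x1, _ = mul(k, G)
    r = x1 % N
    s = pow(k, -1, N) * (z + r * d_a) % N
    assert r != 0 and s != 0, "retry with a fresh k"
    return r, s

def verify(Q_a, z, sig):
    # Verification steps 1 and 4-6.
    r, s = sig
    if not (1 <= r <= N - 1 and 1 <= s <= N - 1):
        return False
    u1, u2 = z * pow(s, -1, N) % N, r * pow(s, -1, N) % N
    pt = add(mul(u1, G), mul(u2, Q_a))
    return pt is not None and pt[0] % N == r

d_A = 7                       # private key
Q_A = mul(d_A, G)             # public key Q_A = d_A × G
sig = sign(d_A, z=5, k=10)    # z stands in for the truncated hash
assert verify(Q_A, 5, sig)
```

Note that the same `verify` function is used in the next section; it rejects a signature if the hash value is changed.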
As the standard notes, it is not only required for $k$ to be secret, but it is also crucial to select different $k$ for different signatures. Otherwise, the equation in step 6 can be solved for $d_{A}$, the private key: given two signatures $(r,s)$ and $(r,s')$, employing the same unknown $k$ for different known messages $m$ and $m'$, an attacker can calculate $z$ and $z'$, and since $s-s'=k^{-1}(z-z')$ (all operations in this paragraph are done modulo $n$) the attacker can find $k={\frac {z-z'}{s-s'}}$. Since $s=k^{-1}(z+rd_{A})$, the attacker can now calculate the private key $d_{A}={\frac {sk-z}{r}}$.
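The recovery arithmetic above can be demonstrated with plain modular arithmetic: a small prime stands in for the group order, and the value of $r$ is taken as given (it would normally be the x-coordinate of the repeated point $k\times G$).

```python
# Working modulo a small prime n standing in for the group order (illustration only).
n = 101
d_A, k, r = 17, 5, 20    # private key, reused nonce k, shared r value
z, z2 = 33, 67           # truncated hashes of two different messages

# The two leaked signatures share the same k (and hence the same r):
s  = pow(k, -1, n) * (z  + r * d_A) % n
s2 = pow(k, -1, n) * (z2 + r * d_A) % n

# Attacker's computation from the two signatures alone:
k_rec = (z - z2) * pow(s - s2, -1, n) % n        # k = (z - z') / (s - s')
d_rec = (s * k_rec - z) * pow(r, -1, n) % n      # d_A = (s k - z) / r
assert (k_rec, d_rec) == (k, d_A)
```

The attacker never needs any elliptic-curve operations: nonce reuse reduces key recovery to two modular divisions.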
This implementation failure was used, for example, to extract the signing key used for the PlayStation 3 gaming-console.
Another way ECDSA signatures may leak private keys is when $k$ is generated by a faulty random number generator. Such a failure in random number generation caused users of Android Bitcoin Wallet to lose their funds in August 2013.
To ensure that $k$ is unique for each message, one may bypass random number generation completely and generate deterministic signatures by deriving $k$ from both the message and the private key.
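A deliberately simplified sketch of this idea might look as follows; note that the actual standard for deterministic ECDSA, RFC 6979, specifies a more involved HMAC-based DRBG construction, so this is an illustration of the principle, not the standardized derivation.

```python
import hashlib
import hmac

def deterministic_k(private_key: int, msg: bytes, n: int) -> int:
    # Derive k from the private key and the message hash (simplified sketch;
    # not the RFC 6979 construction).
    key = private_key.to_bytes(32, "big")
    digest = hmac.new(key, hashlib.sha256(msg).digest(), hashlib.sha256).digest()
    return int.from_bytes(digest, "big") % (n - 1) + 1  # force k into [1, n-1]
```

The same (key, message) pair always yields the same $k$, so the nonce-reuse attack above requires two *different* messages to share a nonce, which this derivation prevents.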
== Signature verification algorithm ==
For Bob to authenticate Alice's signature $(r,s)$ on a message $m$, he must have a copy of her public-key curve point $Q_{A}$. Bob can verify $Q_{A}$ is a valid curve point as follows:

1. Check that $Q_{A}$ is not equal to the identity element $O$, and its coordinates are otherwise valid.
2. Check that $Q_{A}$ lies on the curve.
3. Check that $n\times Q_{A}=O$.
After that, Bob follows these steps:

1. Verify that $r$ and $s$ are integers in $[1,n-1]$. If not, the signature is invalid.
2. Calculate $e={\textrm{HASH}}(m)$, where HASH is the same function used in the signature generation.
3. Let $z$ be the $L_{n}$ leftmost bits of $e$.
4. Calculate $u_{1}=zs^{-1}\bmod n$ and $u_{2}=rs^{-1}\bmod n$.
5. Calculate the curve point $(x_{1},y_{1})=u_{1}\times G+u_{2}\times Q_{A}$. If $(x_{1},y_{1})=O$ then the signature is invalid.
6. The signature is valid if $r\equiv x_{1}{\pmod {n}}$, invalid otherwise.
Note that an efficient implementation would compute the inverse $s^{-1}\bmod n$ only once. Also, using Shamir's trick, a sum of two scalar multiplications $u_{1}\times G+u_{2}\times Q_{A}$ can be calculated faster than two scalar multiplications done independently.
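The interleaving idea behind Shamir's trick can be sketched abstractly. In this hedged illustration the additive group of integers modulo $m$ stands in for the curve group, with the group operations passed in as functions; the single pass over the bits, with one shared doubling per bit, is what saves work compared with two independent scalar multiplications.

```python
def shamir_trick(u1, u2, G, Q, add, double, identity):
    # Interleaved double-and-add: computes u1·G + u2·Q with one pass over the
    # bits and a single precomputed G + Q, instead of two full scalar
    # multiplications followed by an addition.
    GQ = add(G, Q)  # precomputed once
    R = identity
    for bit in range(max(u1.bit_length(), u2.bit_length()) - 1, -1, -1):
        R = double(R)
        b1, b2 = (u1 >> bit) & 1, (u2 >> bit) & 1
        if b1 and b2:
            R = add(R, GQ)
        elif b1:
            R = add(R, G)
        elif b2:
            R = add(R, Q)
    return R

# Demo with the additive group of integers mod m standing in for the curve group.
m = 10007
add_mod = lambda a, b: (a + b) % m
double_mod = lambda a: 2 * a % m
u1, u2, G, Q = 123, 456, 789, 321
assert shamir_trick(u1, u2, G, Q, add_mod, double_mod, 0) == (u1 * G + u2 * Q) % m
```

In a real ECDSA verifier, `add` and `double` would be elliptic-curve point addition and doubling.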
=== Correctness of the algorithm ===
It is not immediately obvious why verification even functions correctly. To see why, denote as $C$ the curve point computed in step 5 of verification,

$$C=u_{1}\times G+u_{2}\times Q_{A}$$

From the definition of the public key as $Q_{A}=d_{A}\times G$,

$$C=u_{1}\times G+u_{2}d_{A}\times G$$

Because elliptic curve scalar multiplication distributes over addition,

$$C=(u_{1}+u_{2}d_{A})\times G$$

Expanding the definition of $u_{1}$ and $u_{2}$ from verification step 4,

$$C=(zs^{-1}+rd_{A}s^{-1})\times G$$

Collecting the common term $s^{-1}$,

$$C=(z+rd_{A})s^{-1}\times G$$

Expanding the definition of $s$ from signature step 6,

$$C=(z+rd_{A})(z+rd_{A})^{-1}(k^{-1})^{-1}\times G$$

Since the inverse of an inverse is the original element, and the product of an element's inverse and the element is the identity, we are left with

$$C=k\times G$$

From the definition of $r$, this is verification step 6.
This shows only that a correctly signed message will verify correctly; other properties such as incorrectly signed messages failing to verify correctly and resistance to cryptanalytic attacks are required for a secure signature algorithm.
== Public key recovery ==
Given a message $m$ and Alice's signature $(r,s)$ on that message, Bob can (potentially) recover Alice's public key:

1. Verify that $r$ and $s$ are integers in $[1,n-1]$. If not, the signature is invalid.
2. Calculate a curve point $R=(x_{1},y_{1})$ where $x_{1}$ is one of $r$, $r+n$, $r+2n$, etc. (provided $x_{1}$ is not too large for the field of the curve) and $y_{1}$ is a value such that the curve equation is satisfied. Note that there may be several curve points satisfying these conditions, and each different $R$ value results in a distinct recovered key.
3. Calculate $e={\textrm{HASH}}(m)$, where HASH is the same function used in the signature generation.
4. Let $z$ be the $L_{n}$ leftmost bits of $e$.
5. Calculate $u_{1}=-zr^{-1}\bmod n$ and $u_{2}=sr^{-1}\bmod n$.
6. Calculate the curve point $Q_{A}=(x_{A},y_{A})=u_{1}\times G+u_{2}\times R$.

The signature is valid if $Q_{A}$ matches Alice's public key. The signature is invalid if all the possible $R$ points have been tried and none match Alice's public key.

Note that an invalid signature, or a signature from a different message, will result in the recovery of an incorrect public key. The recovery algorithm can only be used to check the validity of a signature if the signer's public key (or its hash) is known beforehand.
=== Correctness of the recovery algorithm ===
Start with the definition of $Q_{A}$ from recovery step 6,

$$Q_{A}=(x_{A},y_{A})=u_{1}\times G+u_{2}\times R$$

From the definition $R=(x_{1},y_{1})=k\times G$ from signing step 4,

$$Q_{A}=u_{1}\times G+u_{2}k\times G$$

Because elliptic curve scalar multiplication distributes over addition,

$$Q_{A}=(u_{1}+u_{2}k)\times G$$

Expanding the definition of $u_{1}$ and $u_{2}$ from recovery step 5,

$$Q_{A}=(-zr^{-1}+skr^{-1})\times G$$

Expanding the definition of $s$ from signature step 6,

$$Q_{A}=(-zr^{-1}+k^{-1}(z+rd_{A})kr^{-1})\times G$$

Since the product of an element's inverse and the element is the identity, we are left with

$$Q_{A}=(-zr^{-1}+(zr^{-1}+d_{A}))\times G$$

The first and second terms cancel each other out,

$$Q_{A}=d_{A}\times G$$

From the definition of $Q_{A}=d_{A}\times G$, this is Alice's public key.

This shows that a correctly signed message will recover the correct public key, provided additional information was shared to uniquely calculate the curve point $R=(x_{1},y_{1})$ from the signature value $r$.
== Security ==
In December 2010, a group calling itself fail0verflow announced the recovery of the ECDSA private key used by Sony to sign software for the PlayStation 3 game console. However, this attack only worked because Sony did not properly implement the algorithm: $k$ was static instead of random. As pointed out in the Signature generation algorithm section above, this makes $d_{A}$ solvable, rendering the entire algorithm useless.
On March 29, 2011, two researchers published an IACR paper demonstrating that it is possible to retrieve a TLS private key of a server using OpenSSL that authenticates with Elliptic Curves DSA over a binary field via a timing attack. The vulnerability was fixed in OpenSSL 1.0.0e.
In August 2013, it was revealed that bugs in some implementations of the Java class SecureRandom sometimes generated collisions in the $k$ value. This allowed hackers to recover private keys, giving them the same control over bitcoin transactions as the legitimate keys' owners had, using the same exploit that was used to reveal the PS3 signing key, on some Android app implementations, which use Java and rely on ECDSA to authenticate transactions.
This issue can be prevented by deterministic generation of k, as described by RFC 6979.
=== Concerns ===
Some concerns expressed about ECDSA:
Political concerns: the trustworthiness of NIST-produced curves being questioned after revelations were made that the NSA willingly inserts backdoors into software, hardware components and published standards; well-known cryptographers have expressed doubts about how the NIST curves were designed, and voluntary tainting has already been proved in the past. (See also the libssh curve25519 introduction.) Nevertheless, a proof that the named NIST curves exploit a rare weakness is still missing.
Technical concerns: the difficulty of properly implementing the standard, its slowness, and design flaws which reduce security in insufficiently defensive implementations.
== Implementations ==
Below is a list of cryptographic libraries that provide support for ECDSA:
Botan
Bouncy Castle
cryptlib
Crypto++
Crypto API (Linux)
GnuTLS
libgcrypt
LibreSSL
mbed TLS
Microsoft CryptoAPI
OpenSSL
wolfCrypt
== See also ==
EdDSA
RSA (cryptosystem)
== References ==
== Further reading ==
Accredited Standards Committee X9, ASC X9 Issues New Standard for Public Key Cryptography/ECDSA, Oct. 6, 2020. Source
Accredited Standards Committee X9, American National Standard X9.62-2005, Public Key Cryptography for the Financial Services Industry, The Elliptic Curve Digital Signature Algorithm (ECDSA), November 16, 2005.
Certicom Research, Standards for efficient cryptography, SEC 1: Elliptic Curve Cryptography, Version 2.0, May 21, 2009.
López, J. and Dahab, R. An Overview of Elliptic Curve Cryptography, Technical Report IC-00-10, State University of Campinas, 2000.
Daniel J. Bernstein, Pippenger's exponentiation algorithm, 2002.
Daniel R. L. Brown, Generic Groups, Collision Resistance, and ECDSA, Designs, Codes and Cryptography, 35, 119–152, 2005. ePrint version
Ian F. Blake, Gadiel Seroussi, and Nigel Smart, editors, Advances in Elliptic Curve Cryptography, London Mathematical Society Lecture Note Series 317, Cambridge University Press, 2005.
Hankerson, D.; Vanstone, S.; Menezes, A. (2004). Guide to Elliptic Curve Cryptography. Springer Professional Computing. New York: Springer. doi:10.1007/b97644. ISBN 0-387-95273-X. S2CID 720546.
== External links ==
Digital Signature Standard; includes info on ECDSA
The Elliptic Curve Digital Signature Algorithm (ECDSA); provides an in-depth guide on ECDSA. Wayback link | Wikipedia/Elliptic_Curve_Digital_Signature_Algorithm |
The Password Hashing Competition was an open competition announced in 2013 to select one or more password hash functions that can be recognized as a recommended standard. It was modeled after the successful Advanced Encryption Standard process and NIST hash function competition, but directly organized by cryptographers and security practitioners. On 20 July 2015, Argon2 was selected as the final PHC winner, with special recognition given to four other password hashing schemes: Catena, Lyra2, yescrypt and Makwa.
One goal of the Password Hashing Competition was to raise awareness of the need for strong password hash algorithms, hopefully avoiding a repeat of previous password breaches involving weak or no hashing, such as the ones involving
RockYou (2009),
JIRA,
Gawker (2010),
PlayStation Network outage,
Battlefield Heroes (2011),
eHarmony,
LinkedIn,
Adobe,
ASUS,
South Carolina Department of Revenue (2012),
Evernote,
Ubuntu Forums (2013),
etc.
The organizers were in contact with NIST, expecting an impact on its recommendations.
== See also ==
crypt (C)
Password hashing
List of computer science awards
CAESAR Competition
== References ==
== External links ==
The Password Hashing Competition web site
Source code and descriptions of the first round submissions
PHC string format | Wikipedia/Catena_(cryptography) |
In cryptanalysis, the piling-up lemma is a principle used in linear cryptanalysis to construct linear approximations to the action of block ciphers. It was introduced by Mitsuru Matsui (1993) as an analytical tool for linear cryptanalysis. The lemma states that the bias (deviation of the expected value from 1/2) of a linear Boolean function (XOR-clause) of independent binary random variables is related to the product of the input biases:
$$\epsilon(X_{1}\oplus X_{2}\oplus \cdots \oplus X_{n})=2^{n-1}\prod _{i=1}^{n}\epsilon(X_{i})$$

or

$$I(X_{1}\oplus X_{2}\oplus \cdots \oplus X_{n})=\prod _{i=1}^{n}I(X_{i})$$

where $\epsilon \in [-{\tfrac {1}{2}},{\tfrac {1}{2}}]$ is the bias (towards zero) and $I\in [-1,1]$ the imbalance:

$$\epsilon(X)=P(X=0)-{\tfrac {1}{2}}$$

$$I(X)=P(X=0)-P(X=1)=2\epsilon(X).$$
Conversely, if the lemma does not hold, then the input variables are not independent.
== Interpretation ==
The lemma implies that XOR-ing independent binary variables always reduces the bias (or at least does not increase it); moreover, the output is unbiased if and only if there is at least one unbiased input variable.
Note that for two variables the quantity $I(X\oplus Y)$ is a correlation measure of $X$ and $Y$, equal to $P(X=Y)-P(X\neq Y)$; $I(X)$ can be interpreted as the correlation of $X$ with $0$.
== Expected value formulation ==
The piling-up lemma can be expressed more naturally when the random variables take values in $\{-1,1\}$. If we introduce variables $\chi _{i}=1-2X_{i}=(-1)^{X_{i}}$ (mapping 0 to 1 and 1 to −1) then, by inspection, the XOR-operation transforms to a product:

$$\chi _{1}\chi _{2}\cdots \chi _{n}=1-2(X_{1}\oplus X_{2}\oplus \cdots \oplus X_{n})=(-1)^{X_{1}\oplus X_{2}\oplus \cdots \oplus X_{n}}$$

and since the expected values are the imbalances, $E(\chi _{i})=I(X_{i})$, the lemma now states:

$$E\left(\prod _{i=1}^{n}\chi _{i}\right)=\prod _{i=1}^{n}E(\chi _{i})$$

which is a known property of the expected value for independent variables.
For dependent variables the above formulation gains a (positive or negative) covariance term, thus the lemma does not hold. In fact, since two Bernoulli variables are independent if and only if they are uncorrelated (i.e. have zero covariance; see uncorrelatedness), we have the converse of the piling up lemma: if it does not hold, the variables are not independent (uncorrelated).
== Boolean derivation ==
The piling-up lemma allows the cryptanalyst to determine the probability that the equality

$$X_{1}\oplus X_{2}\oplus \cdots \oplus X_{n}=0$$

holds, where the $X$'s are binary variables (that is, bits: either 0 or 1).

Let $P(A)$ denote "the probability that A is true". If it equals one, A is certain to happen, and if it equals zero, A cannot happen. First of all, we consider the piling-up lemma for two binary variables, where $P(X_{1}=0)=p_{1}$ and $P(X_{2}=0)=p_{2}$.
Now, we consider $P(X_{1}\oplus X_{2}=0)$. Due to the properties of the XOR operation, this is equivalent to $P(X_{1}=X_{2})$.
$X_{1}=X_{2}=0$ and $X_{1}=X_{2}=1$ are mutually exclusive events, so we can say

$$P(X_{1}=X_{2})=P(X_{1}=X_{2}=0)+P(X_{1}=X_{2}=1)=P(X_{1}=0,X_{2}=0)+P(X_{1}=1,X_{2}=1)$$

Now, we must make the central assumption of the piling-up lemma: the binary variables we are dealing with are independent; that is, the state of one has no effect on the state of any of the others. Thus we can expand the probability function as follows:

$$P(X_{1}\oplus X_{2}=0)=P(X_{1}=0)P(X_{2}=0)+P(X_{1}=1)P(X_{2}=1)=p_{1}p_{2}+(1-p_{1})(1-p_{2})$$

Now we express the probabilities $p_{1}$ and $p_{2}$ as ${\tfrac {1}{2}}+\epsilon _{1}$ and ${\tfrac {1}{2}}+\epsilon _{2}$, where the $\epsilon$'s are the probability biases — the amount the probability deviates from ${\tfrac {1}{2}}$. Substituting gives

$$P(X_{1}\oplus X_{2}=0)=\left({\tfrac {1}{2}}+\epsilon _{1}\right)\left({\tfrac {1}{2}}+\epsilon _{2}\right)+\left({\tfrac {1}{2}}-\epsilon _{1}\right)\left({\tfrac {1}{2}}-\epsilon _{2}\right)={\tfrac {1}{2}}+2\epsilon _{1}\epsilon _{2}$$

Thus the probability bias $\epsilon _{1,2}$ for the XOR sum above is $2\epsilon _{1}\epsilon _{2}$.
This formula can be extended to more $X$'s as follows:

$$P(X_{1}\oplus X_{2}\oplus \cdots \oplus X_{n}=0)={\tfrac {1}{2}}+2^{n-1}\prod _{i=1}^{n}\epsilon _{i}$$
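The formula can be checked by exhaustive enumeration over independent bits; the probabilities below are arbitrary illustrative values chosen for the check.

```python
from itertools import product

def xor_zero_prob(probs):
    # Exact P(X1 ⊕ ... ⊕ Xn = 0) for independent bits, where probs[i] = P(Xi = 0).
    total = 0.0
    for bits in product((0, 1), repeat=len(probs)):
        weight = 1.0
        for b, p in zip(bits, probs):
            weight *= p if b == 0 else (1 - p)
        if sum(bits) % 2 == 0:   # XOR of the bits is 0 iff an even number are 1
            total += weight
    return total

probs = [0.75, 0.6, 0.7]                              # biases 0.25, 0.1, 0.2
eps = [p - 0.5 for p in probs]
lemma = 0.5 + 2 ** (len(probs) - 1) * eps[0] * eps[1] * eps[2]
assert abs(xor_zero_prob(probs) - lemma) < 1e-12      # both equal 0.52
```

The same enumeration also confirms the remark below: as soon as one input has bias zero, the XOR is exactly unbiased.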
Note that if any of the ε's is zero — that is, if one of the binary variables is unbiased — the entire probability function will be unbiased, equal to 1/2.
A related slightly different definition of the bias is
ϵ
i
=
P
(
X
i
=
1
)
−
P
(
X
i
=
0
)
,
{\displaystyle \epsilon _{i}=P(X_{i}=1)-P(X_{i}=0),}
in fact minus two times the previous value. The advantage is that now with
{\displaystyle \varepsilon _{total}=P(X_{1}\oplus X_{2}\oplus \cdots \oplus X_{n}=1)-P(X_{1}\oplus X_{2}\oplus \cdots \oplus X_{n}=0)}
we have
{\displaystyle \varepsilon _{total}=(-1)^{n+1}\prod _{i=1}^{n}\varepsilon _{i},}
adding random variables amounts to multiplying their (2nd definition) biases.
== Practice ==
In practice, the Xs are approximations to the S-boxes (substitution components) of block ciphers. Typically, X values are inputs to the S-box and Y values are the corresponding outputs. By simply looking at the S-boxes, the cryptanalyst can tell what the probability biases are. The trick is to find combinations of input and output values that have probabilities of zero or one. The closer the approximation is to zero or one, the more helpful the approximation is in linear cryptanalysis.
However, in practice, the binary variables are not independent, as is assumed in the derivation of the piling-up lemma. This consideration has to be kept in mind when applying the lemma; it is not an automatic cryptanalysis formula.
== See also ==
Variance of a sum of independent real variables
== References == | Wikipedia/Piling-up_lemma |
BLAKE is a cryptographic hash function based on Daniel J. Bernstein's ChaCha stream cipher, but a permuted copy of the input block, XORed with round constants, is added before each ChaCha round. Like SHA-2, there are two variants differing in the word size. ChaCha operates on a 4×4 array of words. BLAKE repeatedly combines an 8-word hash value with 16 message words, truncating the ChaCha result to obtain the next hash value. BLAKE-256 and BLAKE-224 use 32-bit words and produce digest sizes of 256 bits and 224 bits, respectively, while BLAKE-512 and BLAKE-384 use 64-bit words and produce digest sizes of 512 bits and 384 bits, respectively.
The BLAKE2 hash function, based on BLAKE, was announced in 2012. The BLAKE3 hash function, based on BLAKE2, was announced in 2020.
== History ==
BLAKE was submitted to the NIST hash function competition by Jean-Philippe Aumasson, Luca Henzen, Willi Meier, and Raphael C.-W. Phan. In 2008, there were 51 entries. BLAKE made it to the final round consisting of five candidates but lost to Keccak in 2012, which was selected for the SHA-3 algorithm.
== Algorithm ==
Like SHA-2, BLAKE comes in two variants: one that uses 32-bit words, used for computing hashes up to 256 bits long, and one that uses 64-bit words, used for computing hashes up to 512 bits long. The core block transformation combines 16 words of input with 16 working variables, but only 8 words (256 or 512 bits) are preserved between blocks.
It uses a table of 16 constant words (the leading 512 or 1024 bits of the fractional part of π), and a table of 10 16-element permutations:
σ[0] = 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
σ[1] = 14 10 4 8 9 15 13 6 1 12 0 2 11 7 5 3
σ[2] = 11 8 12 0 5 2 15 13 10 14 3 6 7 1 9 4
σ[3] = 7 9 3 1 13 12 11 14 2 6 5 10 4 0 15 8
σ[4] = 9 0 5 7 2 4 10 15 14 1 11 12 6 8 3 13
σ[5] = 2 12 6 10 0 11 8 3 4 13 7 5 15 14 1 9
σ[6] = 12 5 1 15 14 13 4 10 0 7 6 3 9 2 8 11
σ[7] = 13 11 7 14 12 1 3 9 5 0 15 4 8 6 2 10
σ[8] = 6 15 14 9 11 3 0 8 12 2 13 7 1 4 10 5
σ[9] = 10 2 8 4 7 6 1 5 15 11 9 14 3 12 13 0
The core operation, equivalent to ChaCha's quarter round, operates on a 4-word column or diagonal a b c d, which is combined with 2 words of message m[] and two constant words n[]. It is performed 8 times per full round:
j ← σ[r%10][2×i] // Index computations
k ← σ[r%10][2×i+1]
a ← a + b + (m[j] ⊕ n[k]) // Step 1 (with input)
d ← (d ⊕ a) >>> 16
c ← c + d // Step 2 (no input)
b ← (b ⊕ c) >>> 12
a ← a + b + (m[k] ⊕ n[j]) // Step 3 (with input)
d ← (d ⊕ a) >>> 8
c ← c + d // Step 4 (no input)
b ← (b ⊕ c) >>> 7
In the above, r is the round number (0–13), and i varies from 0 to 7.
The differences from the ChaCha quarter-round function are:
Message words, XOR-ed with round constants, are added into the state.
The rotation directions have been reversed.
"BLAKE reuses the permutation of the ChaCha stream cipher with rotations done in the opposite directions. Some have suspected an advanced optimization, but in fact it originates from a typo in the original BLAKE specifications", Jean-Philippe Aumasson explains in his "Crypto Dictionary".
The 64-bit version (which does not exist in ChaCha) is identical, but the rotation amounts are 32, 25, 16 and 11, respectively, and the number of rounds is increased to 16.
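A direct Python transcription of the 32-bit quarter round above can make the data flow concrete. The function and variable names here are illustrative; in a full implementation, `mj`, `nk`, `mk` and `nj` would be selected from the 16 message words and 16 π-derived constants via the σ permutations:

```python
MASK32 = 0xFFFFFFFF

def rotr32(x, s):
    """Rotate a 32-bit word right by s bits."""
    return ((x >> s) | (x << (32 - s))) & MASK32

def g(a, b, c, d, mj, nk, mk, nj):
    """One BLAKE-256 column/diagonal step (rotations 16, 12, 8, 7).
    mj, mk are message words and nk, nj are round constants."""
    a = (a + b + (mj ^ nk)) & MASK32
    d = rotr32(d ^ a, 16)
    c = (c + d) & MASK32
    b = rotr32(b ^ c, 12)
    a = (a + b + (mk ^ nj)) & MASK32
    d = rotr32(d ^ a, 8)
    c = (c + d) & MASK32
    b = rotr32(b ^ c, 7)
    return a, b, c, d
```

The masking with `MASK32` emulates the modular 32-bit addition that a C implementation gets for free from unsigned integer overflow.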
== Tweaks ==
Throughout the NIST hash function competition, entrants were permitted to "tweak" their algorithms to address issues discovered during the process. The change made to BLAKE was to increase the number of rounds from 10/14 to 14/16, in order to be more conservative about security while still being fast.
== Example digests ==
Hash values of an empty string:
BLAKE-224("") =
7dc5313b1c04512a174bd6503b89607aecbee0903d40a8a569c94eed
BLAKE-256("") =
716f6e863f744b9ac22c97ec7b76ea5f5908bc5b2f67c61510bfc4751384ea7a
BLAKE-384("") =
c6cbd89c926ab525c242e6621f2f5fa73aa4afe3d9e24aed727faaadd6af38b620bdb623dd2b4788b1c8086984af8706
BLAKE-512("") =
a8cfbbd73726062df0c6864dda65defe58ef0cc52a5625090fa17601e1eecd1b628e94f396ae402a00acc9eab77b4d4c2e852aaaa25a636d80af3fc7913ef5b8
Changing a single bit causes each bit in the output to change with 50% probability, demonstrating an avalanche effect:
BLAKE-512("The quick brown fox jumps over the lazy dog") =
1f7e26f63b6ad25a0896fd978fd050a1766391d2fd0471a77afb975e5034b7ad2d9ccf8dfb47abbbe656e1b82fbc634ba42ce186e8dc5e1ce09a885d41f43451
BLAKE-512("The quick brown fox jumps over the lazy dof") =
a701c2a1f9baabd8b1db6b75aee096900276f0b86dc15d247ecc03937b370324a16a4ffc0c3a85cd63229cfa15c15f4ba6d46ae2e849ed6335e9ff43b764198a
(In this example 266 matching bits out of 512 is about 52% due to the random nature of the avalanche.)
== BLAKE2 ==
BLAKE2 is a cryptographic hash function based on BLAKE, created by Jean-Philippe Aumasson, Samuel Neves, Zooko Wilcox-O'Hearn, and Christian Winnerlein. The design goal was to replace the widely used, but broken, MD5 and SHA-1 algorithms in applications requiring high performance in software. BLAKE2 was announced on December 21, 2012. A reference implementation is available under CC0, the OpenSSL License, and the Apache License 2.0.
BLAKE2b is faster than MD5, SHA-1, SHA-2, and SHA-3, on 64-bit x86-64 and ARM architectures. Its creators state that BLAKE2 provides better security than SHA-2 and similar to that of SHA-3: immunity to length extension, indifferentiability from a random oracle, etc.
BLAKE2 removes the addition of constants to message words from the BLAKE round function, changes two rotation constants, simplifies padding, adds a parameter block that is XOR'ed with the initialization vectors, and reduces the number of rounds from 16 to 12 for BLAKE2b (the successor of BLAKE-512) and from 14 to 10 for BLAKE2s (the successor of BLAKE-256).
BLAKE2 supports keying, salting, personalization, and hash tree modes, and can output digests from 1 up to 64 bytes for BLAKE2b, or up to 32 bytes for BLAKE2s. There are also parallel versions designed for increased performance on multi-core processors; BLAKE2bp (4-way parallel) and BLAKE2sp (8-way parallel).
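These modes are exposed by Python's standard library `hashlib`; a short sketch of parameterized use (the byte strings are arbitrary examples):

```python
import hashlib

# Digest size is a parameter (1..64 bytes for BLAKE2b, 1..32 for BLAKE2s)
print(hashlib.blake2b(b"hello", digest_size=32).hexdigest())

# Keyed mode turns the hash into a MAC, replacing the HMAC construction
mac = hashlib.blake2b(b"message", key=b"secret key")

# Salting and personalization separate otherwise-identical uses of the hash
tag = hashlib.blake2b(b"message", salt=b"app salt", person=b"app-v1")

assert mac.digest() != hashlib.blake2b(b"message").digest()
```

`hashlib.blake2s` takes the same parameters with the smaller limits (key up to 32 bytes, salt up to 8, personalization up to 8).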
BLAKE2X is a family of extendable-output functions (XOFs). Whereas BLAKE2 is limited to 64-byte digests, BLAKE2X allows for digests of up to 256 GiB. BLAKE2X is itself not an instance of a hash function, and must be based on an actual BLAKE2 instance. An example of a BLAKE2X instance could be BLAKE2Xb16MiB, which would be a BLAKE2X version based on BLAKE2b producing 16,777,216-byte digests (or exactly 16 MiB, hence the name of such an instance).
BLAKE2b and BLAKE2s are specified in RFC 7693. Optional features using the parameter block (salting, personalized hashes, tree hashing, et cetera), are not specified, and thus neither is support for BLAKE2bp, BLAKE2sp, or BLAKE2X.
=== Initialization vector ===
BLAKE2b uses an initialization vector that is the same as the IV used by SHA-512. These "nothing-up-my-sleeve" values are obtained by taking the first 64 bits of the fractional parts of the positive square roots of the first eight prime numbers.
IV0 = 0x6a09e667f3bcc908 // Frac(sqrt(2))
IV1 = 0xbb67ae8584caa73b // Frac(sqrt(3))
IV2 = 0x3c6ef372fe94f82b // Frac(sqrt(5))
IV3 = 0xa54ff53a5f1d36f1 // Frac(sqrt(7))
IV4 = 0x510e527fade682d1 // Frac(sqrt(11))
IV5 = 0x9b05688c2b3e6c1f // Frac(sqrt(13))
IV6 = 0x1f83d9abfb41bd6b // Frac(sqrt(17))
IV7 = 0x5be0cd19137e2179 // Frac(sqrt(19))
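The table can be reproduced with exact integer arithmetic: `math.isqrt(p << 128)` is the floor of sqrt(p) · 2^64, and masking to 64 bits drops the integer part of the square root, leaving the first 64 fractional bits:

```python
from math import isqrt

primes = [2, 3, 5, 7, 11, 13, 17, 19]
# isqrt(p << 128) == floor(sqrt(p) * 2**64); the low 64 bits are the
# first 64 bits of the fractional part of sqrt(p).
IV = [isqrt(p << 128) & ((1 << 64) - 1) for p in primes]
for i, (p, iv) in enumerate(zip(primes, IV)):
    print(f"IV{i} = 0x{iv:016x}  // Frac(sqrt({p}))")
```

The first line printed is `IV0 = 0x6a09e667f3bcc908`, matching the table above.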
=== BLAKE2b algorithm ===
Pseudocode for the BLAKE2b algorithm. The BLAKE2b algorithm uses 8-byte (UInt64) words, and 128-byte chunks.
Algorithm BLAKE2b
Input:
M Message to be hashed
cbMessageLen: Number, (0..2^128) Length of the message in bytes
Key Optional 0..64 byte key
cbKeyLen: Number, (0..64) Length of optional key in bytes
cbHashLen: Number, (1..64) Desired hash length in bytes
Output:
Hash Hash of cbHashLen bytes
Initialize State vector h with IV
h0..7 ← IV0..7
Mix key size (cbKeyLen) and desired hash length (cbHashLen) into h0
h0 ← h0 xor 0x0101kknn
where kk is Key Length (in bytes)
nn is Desired Hash Length (in bytes)
Each time we Compress we record how many bytes have been compressed
cBytesCompressed ← 0
cBytesRemaining ← cbMessageLen
If there was a key supplied (i.e. cbKeyLen > 0)
then pad with trailing zeros to make it 128-bytes (i.e. 16 words)
and prepend it to the message M
if (cbKeyLen > 0) then
M ← Pad(Key, 128) || M
cBytesRemaining ← cBytesRemaining + 128
end if
Compress whole 128-byte chunks of the message, except the last chunk
while (cBytesRemaining > 128) do
chunk ← get next 128 bytes of message M
cBytesCompressed ← cBytesCompressed + 128 increase count of bytes that have been compressed
cBytesRemaining ← cBytesRemaining - 128 decrease count of bytes in M remaining to be processed
h ← Compress(h, chunk, cBytesCompressed, false) false ⇒ this is not the last chunk
end while
Compress the final bytes from M
chunk ← get next 128 bytes of message M We will get cBytesRemaining bytes (i.e. 0..128 bytes)
cBytesCompressed ← cBytesCompressed+cBytesRemaining The actual number of bytes leftover in M
chunk ← Pad(chunk, 128) If M was empty, then we will still compress a final chunk of zeros
h ← Compress(h, chunk, cBytesCompressed, true) true ⇒ this is the last chunk
Result ← first cbHashLen bytes of little endian state vector h
End Algorithm BLAKE2b
==== Compress ====
The Compress function takes a full 128-byte chunk of the input message and mixes it into the ongoing state array:
Function Compress
Input:
h Persistent state vector
chunk 128-byte (16 double word) chunk of message to compress
t: Number, 0..2^128 Count of bytes that have been fed into the Compression
IsLastBlock: Boolean Indicates if this is the final round of compression
Output:
h Updated persistent state vector
Setup local work vector V
V0..7 ← h0..7 First eight items are copied from persistent state vector h
V8..15 ← IV0..7 Remaining eight items are initialized from the IV
Mix the 128-bit counter t into V12:V13
V12 ← V12 xor Lo(t) Lo 64-bits of UInt128 t
V13 ← V13 xor Hi(t) Hi 64-bits of UInt128 t
If this is the last block then invert all the bits in V14
if IsLastBlock then
V14 ← V14 xor 0xFFFFFFFFFFFFFFFF
Treat each 128-byte message chunk as sixteen 8-byte (64-bit) words m
m0..15 ← chunk
Twelve rounds of cryptographic message mixing
for i from 0 to 11 do
Select message mixing schedule for this round.
BLAKE2b uses 12 rounds, while SIGMA has only 10 entries.
S0..15 ← SIGMA[i mod 10] Rounds 10 and 11 use SIGMA[0] and SIGMA[1] respectively
Mix(V0, V4, V8, V12, m[S0], m[S1])
Mix(V1, V5, V9, V13, m[S2], m[S3])
Mix(V2, V6, V10, V14, m[S4], m[S5])
Mix(V3, V7, V11, V15, m[S6], m[S7])
Mix(V0, V5, V10, V15, m[S8], m[S9])
Mix(V1, V6, V11, V12, m[S10], m[S11])
Mix(V2, V7, V8, V13, m[S12], m[S13])
Mix(V3, V4, V9, V14, m[S14], m[S15])
end for
Mix the upper and lower halves of V into ongoing state vector h
h0..7 ← h0..7 xor V0..7
h0..7 ← h0..7 xor V8..15
Result ← h
End Function Compress
==== Mix ====
The Mix function is called by the Compress function, and mixes two 8-byte words from the message into the hash state. In most implementations this function would be written inline, or as an inlined function.
Function Mix
Inputs:
Va, Vb, Vc, Vd four 8-byte word entries from the work vector V
x, y two 8-byte word entries from padded message m
Output:
Va, Vb, Vc, Vd the modified versions of Va, Vb, Vc, Vd
Va ← Va + Vb + x with input
Vd ← (Vd xor Va) rotateright 32
Vc ← Vc + Vd no input
Vb ← (Vb xor Vc) rotateright 24
Va ← Va + Vb + y with input
Vd ← (Vd xor Va) rotateright 16
Vc ← Vc + Vd no input
Vb ← (Vb xor Vc) rotateright 63
Result ← Va, Vb, Vc, Vd
End Function Mix
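Assembled into runnable form, the pseudocode above yields a minimal unkeyed BLAKE2b in Python (no salt, personalization, or tree parameters), which can be checked against the standard library's RFC 7693 implementation:

```python
import hashlib

SIGMA = [
    [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15],
    [14, 10, 4, 8, 9, 15, 13, 6, 1, 12, 0, 2, 11, 7, 5, 3],
    [11, 8, 12, 0, 5, 2, 15, 13, 10, 14, 3, 6, 7, 1, 9, 4],
    [7, 9, 3, 1, 13, 12, 11, 14, 2, 6, 5, 10, 4, 0, 15, 8],
    [9, 0, 5, 7, 2, 4, 10, 15, 14, 1, 11, 12, 6, 8, 3, 13],
    [2, 12, 6, 10, 0, 11, 8, 3, 4, 13, 7, 5, 15, 14, 1, 9],
    [12, 5, 1, 15, 14, 13, 4, 10, 0, 7, 6, 3, 9, 2, 8, 11],
    [13, 11, 7, 14, 12, 1, 3, 9, 5, 0, 15, 4, 8, 6, 2, 10],
    [6, 15, 14, 9, 11, 3, 0, 8, 12, 2, 13, 7, 1, 4, 10, 5],
    [10, 2, 8, 4, 7, 6, 1, 5, 15, 11, 9, 14, 3, 12, 13, 0],
]
IV = [0x6a09e667f3bcc908, 0xbb67ae8584caa73b, 0x3c6ef372fe94f82b,
      0xa54ff53a5f1d36f1, 0x510e527fade682d1, 0x9b05688c2b3e6c1f,
      0x1f83d9abfb41bd6b, 0x5be0cd19137e2179]
MASK = (1 << 64) - 1

def rotr(x, n):                       # 64-bit right rotation
    return ((x >> n) | (x << (64 - n))) & MASK

def mix(v, a, b, c, d, x, y):         # the Mix function above, in place on v
    v[a] = (v[a] + v[b] + x) & MASK
    v[d] = rotr(v[d] ^ v[a], 32)
    v[c] = (v[c] + v[d]) & MASK
    v[b] = rotr(v[b] ^ v[c], 24)
    v[a] = (v[a] + v[b] + y) & MASK
    v[d] = rotr(v[d] ^ v[a], 16)
    v[c] = (v[c] + v[d]) & MASK
    v[b] = rotr(v[b] ^ v[c], 63)

def compress(h, chunk, t, last):
    v = h[:] + IV[:]                  # work vector: state vector then IV
    v[12] ^= t & MASK                 # low 64 bits of the byte counter
    v[13] ^= t >> 64                  # high 64 bits
    if last:
        v[14] ^= MASK                 # invert V14 for the final block
    m = [int.from_bytes(chunk[8 * i:8 * i + 8], "little") for i in range(16)]
    for r in range(12):
        s = SIGMA[r % 10]
        mix(v, 0, 4, 8, 12, m[s[0]], m[s[1]])
        mix(v, 1, 5, 9, 13, m[s[2]], m[s[3]])
        mix(v, 2, 6, 10, 14, m[s[4]], m[s[5]])
        mix(v, 3, 7, 11, 15, m[s[6]], m[s[7]])
        mix(v, 0, 5, 10, 15, m[s[8]], m[s[9]])
        mix(v, 1, 6, 11, 12, m[s[10]], m[s[11]])
        mix(v, 2, 7, 8, 13, m[s[12]], m[s[13]])
        mix(v, 3, 4, 9, 14, m[s[14]], m[s[15]])
    return [h[i] ^ v[i] ^ v[i + 8] for i in range(8)]

def blake2b(data: bytes, digest_size: int = 64) -> bytes:
    h = IV[:]
    h[0] ^= 0x01010000 ^ digest_size  # parameter word: no key (kk = 0), nn = digest_size
    pos = 0
    while len(data) - pos > 128:      # all full chunks except the last
        h = compress(h, data[pos:pos + 128], pos + 128, False)
        pos += 128
    tail = data[pos:]                 # final 0..128 bytes, zero-padded
    h = compress(h, tail + b"\x00" * (128 - len(tail)), len(data), True)
    return b"".join(w.to_bytes(8, "little") for w in h)[:digest_size]

# Check against the standard library's implementation
assert blake2b(b"abc").hex() == hashlib.blake2b(b"abc").hexdigest()
```

Note that the last chunk is compressed with the counter `t` set to the total message length, even when the message is empty, exactly as the pseudocode specifies.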
=== Example digests ===
Hash values of an empty string:
BLAKE2s-224("") =
1fa1291e65248b37b3433475b2a0dd63d54a11ecc4e3e034e7bc1ef4
BLAKE2s-256("") =
69217a3079908094e11121d042354a7c1f55b6482ca1a51e1b250dfd1ed0eef9
BLAKE2b-384("") =
b32811423377f52d7862286ee1a72ee540524380fda1724a6f25d7978c6fd3244a6caf0498812673c5e05ef583825100
BLAKE2b-512("") =
786a02f742015903c6c6fd852552d272912f4740e15847618a86e217f71f5419d25e1031afee585313896444934eb04b903a685b1448b755d56f701afe9be2ce
Changing a single bit causes each bit in the output to change with 50% probability, demonstrating an avalanche effect:
BLAKE2b-512("The quick brown fox jumps over the lazy dog") =
a8add4bdddfd93e4877d2746e62817b116364a1fa7bc148d95090bc7333b3673f82401cf7aa2e4cb1ecd90296e3f14cb5413f8ed77be73045b13914cdcd6a918
BLAKE2b-512("The quick brown fox jumps over the lazy dof") =
ab6b007747d8068c02e25a6008db8a77c218d94f3b40d2291a7dc8a62090a744c082ea27af01521a102e42f480a31e9844053f456b4b41e8aa78bbe5c12957bb
=== Users of BLAKE2 ===
Argon2, the winner of the Password Hashing Competition, uses BLAKE2b
Chef's Habitat deployment system uses BLAKE2b for package signing
FreeBSD Ports package management tool uses BLAKE2b
GNU Core Utilities implements BLAKE2b in its b2sum command
IPFS allows use of BLAKE2b for tree hashing
librsync uses BLAKE2b
Noise (cryptographic protocol), which is used in WhatsApp, includes BLAKE2 as an option.
RAR archive format version 5 supports an optional 256-bit BLAKE2sp file checksum instead of the default 32-bit CRC32; it was implemented in WinRAR v5+
7-Zip (in order to support the RAR archive format version 5) can generate the BLAKE2sp signature for each file in the Explorer shell via "CRC SHA" context menu, and choosing '*'
rmlint uses BLAKE2b for duplicate file detection
WireGuard uses BLAKE2s for hashing
Zcash, a cryptocurrency, uses BLAKE2b in the Equihash proof of work, and as a key derivation function
NANO, a cryptocurrency, uses BLAKE2b in the proof of work, for hashing digital signatures and as a key derivation function
Polkadot, a multi-chain blockchain, uses BLAKE2b as its hashing algorithm.
Kadena (cryptocurrency), a scalable proof-of-work blockchain, uses Blake2s_256 as its hashing algorithm.
PCI Vault uses BLAKE2b as its hashing algorithm for the purpose of PCI-compliant PCD tokenization.
Ergo, a cryptocurrency, uses BLAKE2b256 as a subroutine of its hashing algorithm called Autolykos.
Linux kernel, version 5.17 replaced SHA-1 with BLAKE2s for hashing the entropy pool in the random number generator.
Open Network for Digital Commerce, a Government of India initiative, uses BLAKE-512 to sign API requests.
checksum, a Windows file hashing program, offers Blake2s as one of its algorithms
=== Implementations ===
In addition to the reference implementation, the following cryptography libraries provide implementations of BLAKE2:
Botan
Bouncy Castle
Crypto++
Libgcrypt
libsodium
OpenSSL
wolfSSL
== BLAKE3 ==
BLAKE3 is a cryptographic hash function based on Bao and BLAKE2, created by Jack O'Connor, Jean-Philippe Aumasson, Samuel Neves, and Zooko Wilcox-O'Hearn. It was announced on January 9, 2020, at Real World Crypto.
BLAKE3 is a single algorithm with many desirable features (parallelism, XOF, KDF, PRF and MAC), in contrast to BLAKE and BLAKE2, which are algorithm families with multiple variants. BLAKE3 has a binary tree structure, so it supports a practically unlimited degree of parallelism (both SIMD and multithreading) given long enough input. The official Rust and C implementations are dual-licensed as public domain (CC0) and the Apache License.
BLAKE3 is designed to be as fast as possible. It is consistently a few times faster than BLAKE2. The BLAKE3 compression function is closely based on that of BLAKE2s, with the biggest difference being that the number of rounds is reduced from 10 to 7, a change based on the assumption that current cryptography is too conservative. In addition to providing parallelism, the Merkle tree format also allows for verified streaming (on-the-fly verifying) and incremental updates.
=== Users of BLAKE3 ===
Total Commander supports BLAKE3 for file checksums.
OpenZFS supports BLAKE3 from version 2.2
== References ==
== External links ==
The BLAKE web site
The BLAKE2 web site
The BLAKE3 web site | Wikipedia/BLAKE_(hash_function) |
In cryptography, a pepper is a secret added to an input such as a password during hashing with a cryptographic hash function. This value differs from a salt in that it is not stored alongside a password hash, but rather the pepper is kept separate in some other medium, such as a Hardware Security Module. Note that the National Institute of Standards and Technology refers to this value as a secret key rather than a pepper. A pepper is similar in concept to a salt or an encryption key. It is like a salt in that it is a randomized value that is added to a password hash, and it is similar to an encryption key in that it should be kept secret.
While a salt is not secret (merely unique) and can be stored alongside the hashed output, a pepper is secret and must not be stored with the output. The hash and salt are usually stored in a database, but a pepper must be stored separately to prevent it from being obtained by the attacker in the event of a database breach. A pepper should be long enough to remain secret from brute-force attempts to discover it (NIST recommends at least 112 bits).
== History ==
The idea of a site- or service-specific salt (in addition to a per-user salt) has a long history, with Steven M. Bellovin proposing a local parameter in a Bugtraq post in 1995. In 1996 Udi Manber also described the advantages of such a scheme, terming it a secret salt. The term pepper has been used, by analogy to salt, but with a variety of meanings. For example, when discussing a challenge-response scheme, pepper has been used for a salt-like quantity, though not used for password storage; it has been used for a data transmission technique where a pepper must be guessed; and even as a part of jokes.
The term pepper was proposed for a secret or local parameter stored separately from the password in a discussion of protecting passwords from rainbow table attacks. This usage did not immediately catch on: for example, Fred Wenzel added support to Django password hashing for storage based on a combination of bcrypt and HMAC with separately stored nonces, without using the term. Usage has since become more common.
== Types ==
There are multiple different types of pepper:
A secret unique to each user.
A shared secret that is common to all users.
A randomly-selected number that must be re-discovered on every password input.
== Algorithm ==
An incomplete example of using a constant pepper when storing passwords: the username and the hash output are stored, the password itself is not saved, and the 8-byte (64-bit) pepper 44534C70C6883DE2 is kept in a safe place separate from the output values of the hash.
Unlike the salt, the pepper does not provide protection to users who use the same password, but protects against dictionary attacks, unless the attacker has the pepper value available. Since the same pepper is not shared between different applications, an attacker is unable to reuse the hashes of one compromised database to another. A complete scheme for saving passwords usually includes both salt and pepper use.
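One illustrative way to combine a per-user salt with a pepper is to apply the pepper via HMAC before a slow password-hashing function; the function names here are hypothetical, the pepper value is the constant quoted above, and this is a sketch rather than a vetted scheme:

```python
import hashlib
import hmac
import os

PEPPER = bytes.fromhex("44534C70C6883DE2")  # kept outside the database, e.g. in an HSM

def hash_password(password: str, salt=None):
    """Return (salt, digest); only these two values go into the database."""
    salt = os.urandom(16) if salt is None else salt
    # Apply the secret pepper first, via HMAC
    peppered = hmac.new(PEPPER, password.encode(), hashlib.sha256).digest()
    # Then a deliberately slow, salted KDF (iteration count illustrative)
    digest = hashlib.pbkdf2_hmac("sha256", peppered, salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    return hmac.compare_digest(hash_password(password, salt)[1], digest)
```

Without `PEPPER`, an attacker holding the database cannot even begin a dictionary attack, since the PBKDF2 input itself is unknown.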
== Shared-secret pepper ==
In the case of a shared-secret pepper, a single compromised password (via password reuse or other attack) along with a user's salt can lead to an attack to discover the pepper, rendering it ineffective. If an attacker knows a plaintext password and a user's salt, as well as the algorithm used to hash the password, then discovering the pepper can be a matter of brute forcing the values of the pepper. This is why NIST recommends the secret value be at least 112 bits, so that discovering it by exhaustive search is intractable. The pepper must be generated anew for every application it is deployed in, otherwise a breach of one application would result in lowered security of another application. Without knowledge of the pepper, other passwords in the database will be far more difficult to extract from their hashed values, as the attacker would need to guess the password as well as the pepper.
A pepper adds security to a database of salts and hashes because unless the attacker is able to obtain the pepper, cracking even a single hash is intractable, no matter how weak the original password. Even with a list of (salt, hash) pairs, an attacker must also guess the secret pepper in order to find the password which produces the hash. The NIST specification for a secret salt suggests using a Password-Based Key Derivation Function (PBKDF) with an approved Pseudorandom Function such as HMAC with SHA-3 as the hash function of the HMAC. The NIST recommendation is also to perform at least 1000 iterations of the PBKDF, and a further minimum 1000 iterations using the secret salt in place of the non-secret salt.
== Unique pepper per user ==
In the case of a pepper that is unique to each user, the tradeoff is gaining extra security at the cost of storing more information securely. Compromising one password hash and revealing its secret pepper will have no effect on other password hashes and their secret pepper, so each pepper must be individually discovered, which greatly increases the time taken to attack the password hashes.
== See also ==
Salt (cryptography)
HMAC
passwd
== References ==
== External links == | Wikipedia/Pepper_(cryptography) |
ShangMi 3 (SM3) is a cryptographic hash function, standardised for use in commercial cryptography in China. It was published by the National Cryptography Administration (Chinese: 国家密码管理局) on 2010-12-17 as "GM/T 0004-2012: SM3 cryptographic hash algorithm".
SM3 is used for implementing digital signatures, message authentication codes, and pseudorandom number generators. The algorithm is public and is considered similar to SHA-256 in security and efficiency. SM3 is used with Transport Layer Security.
== Definitive standards ==
SM3 is defined in each of:
GM/T 0004-2012: SM3 cryptographic hash algorithm
GB/T 32905-2016: Information security techniques—SM3 cryptographic hash algorithm
ISO/IEC 10118-3:2018—IT Security techniques—Hash-functions—Part 3: Dedicated hash-functions
IETF RFC draft-sca-cfrg-sm3-02
== References ==
== See also ==
SM4 (cipher) | Wikipedia/SM3_(hash_function) |
Post-quantum cryptography (PQC), sometimes referred to as quantum-proof, quantum-safe, or quantum-resistant, is the development of cryptographic algorithms (usually public-key algorithms) that are currently thought to be secure against a cryptanalytic attack by a quantum computer. Most widely-used public-key algorithms rely on the difficulty of one of three mathematical problems: the integer factorization problem, the discrete logarithm problem or the elliptic-curve discrete logarithm problem. All of these problems could be easily solved on a sufficiently powerful quantum computer running Shor's algorithm or possibly alternatives.
As of 2024, quantum computers lack the processing power to break widely used cryptographic algorithms; however, because of the length of time required for migration to quantum-safe cryptography, cryptographers are already designing new algorithms to prepare for Y2Q or Q-Day, the day when current algorithms will be vulnerable to quantum computing attacks. Mosca's theorem provides the risk analysis framework that helps organizations identify how quickly they need to start migrating.
Their work has gained attention from academics and industry through the PQCrypto conference series hosted since 2006, several workshops on Quantum Safe Cryptography hosted by the European Telecommunications Standards Institute (ETSI), and the Institute for Quantum Computing. The rumoured existence of widespread harvest now, decrypt later programs has also been seen as a motivation for the early introduction of post-quantum algorithms, as data recorded now may still remain sensitive many years into the future.
In contrast to the threat quantum computing poses to current public-key algorithms, most current symmetric cryptographic algorithms and hash functions are considered to be relatively secure against attacks by quantum computers. While the quantum Grover's algorithm does speed up attacks against symmetric ciphers, doubling the key size can effectively counteract these attacks. Thus post-quantum symmetric cryptography does not need to differ significantly from current symmetric cryptography.
In 2024, the U.S. National Institute of Standards and Technology (NIST) released final versions of its first three Post-Quantum Cryptography Standards.
== Algorithms ==
Post-quantum cryptography research is mostly focused on six different approaches:
=== Lattice-based cryptography ===
This approach includes cryptographic systems such as learning with errors, ring learning with errors (ring-LWE), the ring learning with errors key exchange and the ring learning with errors signature, the older NTRU or GGH encryption schemes, and the newer NTRU signature and BLISS signatures. Some of these schemes like NTRU encryption have been studied for many years without anyone finding a feasible attack. Others like the ring-LWE algorithms have proofs that their security reduces to a worst-case problem. The Post-Quantum Cryptography Study Group sponsored by the European Commission suggested that the Stehle–Steinfeld variant of NTRU be studied for standardization rather than the NTRU algorithm. At that time, NTRU was still patented. Studies have indicated that NTRU may have more secure properties than other lattice based algorithms.
=== Multivariate cryptography ===
This includes cryptographic systems such as the Rainbow (Unbalanced Oil and Vinegar) scheme which is based on the difficulty of solving systems of multivariate equations. Various attempts to build secure multivariate equation encryption schemes have failed. However, multivariate signature schemes like Rainbow could provide the basis for a quantum secure digital signature. The Rainbow Signature Scheme is patented (the patent expires in August 2029).
=== Hash-based cryptography ===
This includes cryptographic systems such as Lamport signatures, the Merkle signature scheme, the XMSS, the SPHINCS, and the WOTS schemes. Hash based digital signatures were invented in the late 1970s by Ralph Merkle and have been studied ever since as an interesting alternative to number-theoretic digital signatures like RSA and DSA. Their primary drawback is that for any hash-based public key, there is a limit on the number of signatures that can be signed using the corresponding set of private keys. This fact reduced interest in these signatures until interest was revived due to the desire for cryptography that was resistant to attack by quantum computers. There appear to be no patents on the Merkle signature scheme and there exist many non-patented hash functions that could be used with these schemes. The stateful hash-based signature scheme XMSS developed by a team of researchers under the direction of Johannes Buchmann is described in RFC 8391.
Note that all the above schemes are one-time or bounded-time signatures. Moni Naor and Moti Yung invented UOWHF hashing in 1989 and designed a signature based on hashing (the Naor–Yung scheme) which can be unlimited-time in use (the first such signature that does not require trapdoor properties).
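A minimal Lamport one-time signature over SHA-256 illustrates the hash-based idea; the function names are illustrative, and each key pair must sign only a single message:

```python
import hashlib
import secrets

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    # 256 pairs of random secrets; the public key is their hashes
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def _bits(msg: bytes):
    d = H(msg)
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(msg: bytes, sk):
    # Reveal one secret from each pair, chosen by the message-digest bits.
    # Reusing sk for a second message leaks enough secrets to allow forgeries.
    return [sk[i][b] for i, b in enumerate(_bits(msg))]

def verify(msg: bytes, sig, pk) -> bool:
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(_bits(msg)))
```

The one-time restriction visible here is exactly the drawback described above: schemes like Merkle trees and XMSS exist to bind many such one-time keys under a single public key.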
=== Code-based cryptography ===
This includes cryptographic systems which rely on error-correcting codes, such as the McEliece and Niederreiter encryption algorithms and the related Courtois, Finiasz and Sendrier Signature scheme. The original McEliece cryptosystem, using random Goppa codes, has withstood scrutiny for over 40 years. However, many variants of the McEliece scheme, which seek to introduce more structure into the code used in order to reduce the size of the keys, have been shown to be insecure. The Post-Quantum Cryptography Study Group sponsored by the European Commission has recommended the McEliece public key encryption system as a candidate for long term protection against attacks by quantum computers.
=== Isogeny-based cryptography ===
These cryptographic systems rely on the properties of isogeny graphs of elliptic curves (and higher-dimensional abelian varieties) over finite fields, in particular supersingular isogeny graphs, to create cryptographic systems. Among the more well-known representatives of this field are the Diffie–Hellman-like key exchange CSIDH, which can serve as a straightforward quantum-resistant replacement for the Diffie–Hellman and elliptic curve Diffie–Hellman key-exchange methods that are in widespread use today, and the signature scheme SQIsign which is based on the categorical equivalence between supersingular elliptic curves and maximal orders in particular types of quaternion algebras. Another widely noticed construction, SIDH/SIKE, was spectacularly broken in 2022. The attack is however specific to the SIDH/SIKE family of schemes and does not generalize to other isogeny-based constructions.
=== Symmetric key quantum resistance ===
Provided one uses sufficiently large key sizes, the symmetric key cryptographic systems like AES and SNOW 3G are already resistant to attack by a quantum computer. Further, key management systems and protocols that use symmetric key cryptography instead of public key cryptography like Kerberos and the 3GPP Mobile Network Authentication Structure are also inherently secure against attack by a quantum computer. Given its widespread deployment in the world already, some researchers recommend expanded use of Kerberos-like symmetric key management as an efficient way to get post-quantum cryptography today.
== Security reductions ==
In cryptography research, it is desirable to prove the equivalence of a cryptographic algorithm and a known hard mathematical problem. These proofs are often called "security reductions", and are used to demonstrate the difficulty of cracking the encryption algorithm. In other words, the security of a given cryptographic algorithm is reduced to the security of a known hard problem. Researchers are actively looking for security reductions in the prospects for post-quantum cryptography. Current results are given here:
=== Lattice-based cryptography – Ring-LWE Signature ===
In some versions of Ring-LWE there is a security reduction to the shortest-vector problem (SVP) in a lattice as a lower bound on the security. The SVP is known to be NP-hard. Specific ring-LWE systems that have provable security reductions include a variant of Lyubashevsky's ring-LWE signatures defined in a paper by Güneysu, Lyubashevsky, and Pöppelmann. The GLYPH signature scheme is a variant of the Güneysu, Lyubashevsky, and Pöppelmann (GLP) signature which takes into account research results that have come after the publication of the GLP signature in 2012. Another Ring-LWE signature is Ring-TESLA. There also exists a "derandomized variant" of LWE, called Learning with Rounding (LWR), which yields "improved speedup (by eliminating sampling small errors from a Gaussian-like distribution with deterministic errors) and bandwidth". While LWE utilizes the addition of a small error to conceal the lower bits, LWR utilizes rounding for the same purpose.
=== Lattice-based cryptography – NTRU, BLISS ===
The security of the NTRU encryption scheme and the BLISS signature is believed to be related to, but not provably reducible to, the closest vector problem (CVP) in a lattice. The CVP is known to be NP-hard. The Post-Quantum Cryptography Study Group sponsored by the European Commission suggested that the Stehle–Steinfeld variant of NTRU, which does have a security reduction, be studied for long-term use instead of the original NTRU algorithm.
=== Multivariate cryptography – Unbalanced oil and vinegar ===
Unbalanced Oil and Vinegar signature schemes are asymmetric cryptographic primitives based on multivariate polynomials over a finite field F. Bulygin, Petzoldt and Buchmann have shown a reduction of generic multivariate quadratic UOV systems to the NP-hard multivariate quadratic equation solving problem.
=== Hash-based cryptography – Merkle signature scheme ===
In 2005, Luis Garcia proved that there was a security reduction of Merkle Hash Tree signatures to the security of the underlying hash function. Garcia showed in his paper that if computationally one-way hash functions exist then the Merkle Hash Tree signature is provably secure.
Therefore, if one used a hash function with a provable reduction of security to a known hard problem one would have a provable security reduction of the Merkle tree signature to that known hard problem.
The Post-Quantum Cryptography Study Group sponsored by the European Commission has recommended use of Merkle signature scheme for long term security protection against quantum computers.
=== Code-based cryptography – McEliece ===
The McEliece Encryption System has a security reduction to the syndrome decoding problem (SDP). The SDP is known to be NP-hard. The Post-Quantum Cryptography Study Group sponsored by the European Commission has recommended the use of this cryptography for long term protection against attack by a quantum computer.
=== Code-based cryptography – RLCE ===
In 2016, Wang proposed a random linear code encryption scheme, RLCE, which is based on McEliece schemes. The RLCE scheme can be constructed using any linear code, such as a Reed–Solomon code, by inserting random columns into the underlying linear code generator matrix.
=== Supersingular elliptic curve isogeny cryptography ===
Security is related to the problem of constructing an isogeny between two supersingular curves with the same number of points. The most recent investigation of the difficulty of this problem, by Delfs and Galbraith, indicates that this problem is as hard as the inventors of the key exchange suggest that it is. There is no security reduction to a known NP-hard problem.
== Comparison ==
One common characteristic of many post-quantum cryptography algorithms is that they require larger key sizes than commonly used "pre-quantum" public key algorithms. There are often tradeoffs to be made in key size, computational efficiency and ciphertext or signature size. The table lists some values for different schemes at a 128-bit post-quantum security level.
A practical consideration on a choice among post-quantum cryptographic algorithms is the effort required to send public keys over the internet. From this point of view, the Ring-LWE, NTRU, and SIDH algorithms provide key sizes conveniently under 1 kB, hash-signature public keys come in under 5 kB, and MDPC-based McEliece takes about 1 kB. On the other hand, Rainbow schemes require about 125 kB and Goppa-based McEliece requires a nearly 1 MB key.
=== Lattice-based cryptography – LWE key exchange and Ring-LWE key exchange ===
The fundamental idea of using LWE and Ring-LWE for key exchange was proposed and filed at the University of Cincinnati in 2011 by Jintai Ding. The basic idea comes from the associativity of matrix multiplications, and the errors are used to provide the security. The paper appeared in 2012, after a provisional patent application was filed that year.
In 2014, Peikert presented a key transport scheme following the same basic idea of Ding's, where the new idea of sending an additional 1-bit signal for rounding in Ding's construction is also utilized. For somewhat greater than 128 bits of security, Singh presents a set of parameters with 6956-bit public keys for Peikert's scheme. The corresponding private key would be roughly 14,000 bits.
In 2015, an authenticated key exchange with provable forward security following the same basic idea of Ding's was presented at Eurocrypt 2015, which is an extension of the HMQV construction in Crypto2005. The parameters for different security levels from 80 bits to 350 bits, along with the corresponding key sizes are provided in the paper.
=== Lattice-based cryptography – NTRU encryption ===
For 128 bits of security in NTRU, Hirschhorn, Hoffstein, Howgrave-Graham and Whyte recommend using a public key represented as a degree 613 polynomial with coefficients mod 2^10. This results in a public key of 6130 bits. The corresponding private key would be 6743 bits.
=== Multivariate cryptography – Rainbow signature ===
For 128 bits of security and the smallest signature size in a Rainbow multivariate quadratic equation signature scheme, Petzoldt, Bulygin and Buchmann, recommend using equations in GF(31) with a public key size of just over 991,000 bits, a private key of just over 740,000 bits and digital signatures which are 424 bits in length.
=== Hash-based cryptography – Merkle signature scheme ===
In order to get 128 bits of security for hash-based signatures to sign 1 million messages using the fractal Merkle tree method of Naor, Shenhav and Wool, the public and private key sizes are roughly 36,000 bits in length.
=== Code-based cryptography – McEliece ===
For 128 bits of security in a McEliece scheme, the European Commission's Post-Quantum Cryptography Study Group recommends using a binary Goppa code of length at least n = 6960 and dimension at least k = 5413, capable of correcting t = 119 errors. With these parameters the public key for the McEliece system will be a systematic generator matrix whose non-identity part takes k × (n − k) = 8373911 bits. The corresponding private key, which consists of the code support with n = 6960 elements from GF(2^13) and a generator polynomial with t = 119 coefficients from GF(2^13), will be 92,027 bits in length.
The group is also investigating the use of quasi-cyclic MDPC codes of length at least n = 2^16 + 6 = 65542 and dimension at least k = 2^15 + 3 = 32771, capable of correcting t = 264 errors. With these parameters the public key for the McEliece system will be the first row of a systematic generator matrix whose non-identity part takes k = 32771 bits. The private key, a quasi-cyclic parity-check matrix with d = 274 nonzero entries in a column (or twice as many in a row), takes no more than d × 16 = 4384 bits when represented as the coordinates of the nonzero entries in the first row.
Barreto et al. recommend using a binary Goppa code of length at least n = 3307 and dimension at least k = 2515, capable of correcting t = 66 errors. With these parameters the public key for the McEliece system will be a systematic generator matrix whose non-identity part takes k × (n − k) = 1991880 bits. The corresponding private key, which consists of the code support with n = 3307 elements from GF(2^12) and a generator polynomial with t = 66 coefficients from GF(2^12), will be 40,476 bits in length.
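The quoted key sizes follow directly from the parameters. A quick sanity check (the private-key formula here, code support plus generator polynomial over GF(2^m), is inferred from the figures above and is not an official formula):

```python
def mceliece_key_bits(n, k, t, m):
    """Bit sizes for a binary Goppa-code McEliece instance:
    public key  = non-identity part of a systematic generator matrix,
    private key = code support (n elements of GF(2^m)) plus a
                  generator polynomial (t coefficients from GF(2^m))."""
    public = k * (n - k)
    private = n * m + t * m
    return public, private

# Study Group parameters, then the Barreto et al. parameters:
print(mceliece_key_bits(6960, 5413, 119, 13))  # (8373911, 92027)
print(mceliece_key_bits(3307, 2515, 66, 12))   # (1991880, 40476)
```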
=== Supersingular elliptic curve isogeny cryptography ===
For 128 bits of security in the supersingular isogeny Diffie–Hellman (SIDH) method, De Feo, Jao and Plut recommend using a supersingular curve modulo a 768-bit prime. If one uses elliptic curve point compression the public key will need to be no more than 8x768 or 6144 bits in length. A March 2016 paper by authors Azarderakhsh, Jao, Kalach, Koziel, and Leonardi showed how to cut the number of bits transmitted in half, which was further improved by authors Costello, Jao, Longa, Naehrig, Renes and Urbanik resulting in a compressed-key version of the SIDH protocol with public keys only 2640 bits in size. This makes the number of bits transmitted roughly equivalent to the non-quantum secure RSA and Diffie–Hellman at the same classical security level.
=== Symmetric-key–based cryptography ===
As a general rule, for 128 bits of security in a symmetric-key–based system, one can safely use key sizes of 256 bits. The best quantum attack against arbitrary symmetric-key systems is an application of Grover's algorithm, which requires work proportional to the square root of the size of the key space. To transmit an encrypted key to a device that possesses the symmetric key necessary to decrypt that key requires roughly 256 bits as well. It is clear that symmetric-key systems offer the smallest key sizes for post-quantum cryptography.
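The square-root speedup can be made concrete with a short calculation (a sketch using exact integer arithmetic; the function name is illustrative):

```python
import math

def grover_work_factor(key_bits: int) -> int:
    """Grover search over a space of 2**key_bits keys costs on the
    order of sqrt(2**key_bits) quantum evaluations."""
    return math.isqrt(2 ** key_bits)

# Doubling the key length restores the classical security target:
# a 256-bit key still leaves about 2**128 quantum work.
assert grover_work_factor(128) == 2 ** 64
assert grover_work_factor(256) == 2 ** 128
```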
== Forward secrecy ==
A public-key system demonstrates a property referred to as perfect forward secrecy when it generates random public keys per session for the purposes of key agreement. This means that the compromise of one message cannot lead to the compromise of others, and also that there is not a single secret value which can lead to the compromise of multiple messages. Security experts recommend using cryptographic algorithms that support forward secrecy over those that do not. The reason for this is that forward secrecy can protect against the compromise of long term private keys associated with public/private key pairs. This is viewed as a means of preventing mass surveillance by intelligence agencies.
Both the Ring-LWE key exchange and supersingular isogeny Diffie–Hellman (SIDH) key exchange can support forward secrecy in one exchange with the other party. Both the Ring-LWE and SIDH can also be used without forward secrecy by creating a variant of the classic ElGamal encryption variant of Diffie–Hellman.
The other algorithms in this article, such as NTRU, do not support forward secrecy as is.
Any authenticated public key encryption system can be used to build a key exchange with forward secrecy.
== Open Quantum Safe project ==
The Open Quantum Safe (OQS) project was started in late 2016 and has the goal of developing and prototyping quantum-resistant cryptography. It aims to integrate current post-quantum schemes in one library: liboqs. liboqs is an open source C library for quantum-resistant cryptographic algorithms. It initially focuses on key exchange algorithms but by now includes several signature schemes. It provides a common API suitable for post-quantum key exchange algorithms, and will collect together various implementations. liboqs will also include a test harness and benchmarking routines to compare performance of post-quantum implementations. Furthermore, OQS also provides integration of liboqs into OpenSSL.
As of March 2023, the following key exchange algorithms are supported:
As of August 2024, NIST has published 3 of the algorithms below as FIPS standards, and the fourth is expected near the end of the year:
Older supported versions that have been removed because of the progression of the NIST Post-Quantum Cryptography Standardization Project are:
== Implementation ==
One of the main challenges in post-quantum cryptography is considered to be the implementation of potentially quantum-safe algorithms into existing systems. Tests have been done, for example by Microsoft Research, implementing PICNIC in a PKI using hardware security modules. Test implementations for Google's NewHope algorithm have also been done by HSM vendors. In August 2023, Google released a FIDO2 security key implementation of an ECC/Dilithium hybrid signature schema which was done in partnership with ETH Zürich.
The Signal Protocol uses Post-Quantum Extended Diffie–Hellman (PQXDH).
On February 21, 2024, Apple announced that they were going to upgrade their iMessage protocol with a new PQC protocol called "PQ3", which will utilize ongoing keying.
Apple stated that, although quantum computers don't exist yet, they wanted to mitigate risks from future quantum computers as well as so-called "Harvest now, decrypt later" attack scenarios. Apple stated that they believe their PQ3 implementation provides protections that "surpass those in all other widely deployed messaging apps", because it utilizes ongoing keying.
Apple intends to fully replace the existing iMessage protocol within all supported conversations with PQ3 by the end of 2024. Apple also defined a scale to make it easier to compare the security properties of messaging apps, with a scale represented by levels ranging from 0 to 3: 0 for no end-to-end by default, 1 for pre-quantum end-to-end by default, 2 for PQC key establishment only (e.g. PQXDH), and 3 for PQC key establishment and ongoing rekeying (PQ3).
Other notable implementations include:
bouncycastle
liboqs
=== Hybrid encryption ===
Google has maintained the use of "hybrid encryption" in its use of post-quantum cryptography: whenever a relatively new post-quantum scheme is used, it is combined with a more proven, non-PQ scheme. This is to ensure that the data are not compromised even if the relatively new PQ algorithm turns out to be vulnerable to non-quantum attacks before Y2Q. This type of scheme is used in its 2016 and 2019 tests for post-quantum TLS, and in its 2023 FIDO2 key. Indeed, one of the algorithms used in the 2019 test, SIKE, was broken in 2022, but the non-PQ X25519 layer (already used widely in TLS) still protected the data. Apple's PQ3 and Signal's PQXDH are also hybrid.
The NSA and GCHQ argue against hybrid encryption, claiming that it adds complexity to implementation and transition. Daniel J. Bernstein, who supports hybrid encryption, argues that the claims are bogus.
== See also ==
NIST Post-Quantum Cryptography Standardization
Quantum cryptography – cryptography based on quantum mechanics
Crypto-shredding – Deleting encryption keys
== References ==
== Further reading ==
The PQXDH Key Agreement Protocol Specification
Post-Quantum Cryptography. Springer. 2008. p. 245. ISBN 978-3-540-88701-0.
Isogenies in a Quantum World Archived 2014-05-02 at the Wayback Machine
On Ideal Lattices and Learning With Errors Over Rings
Kerberos Revisited: Quantum-Safe Authentication
The picnic signature scheme
Buchmann, Johannes A.; Butin, Denis; Göpfert, Florian; Petzoldt, Albrecht (2016). "Post-Quantum Cryptography: State of the Art". The New Codebreakers: Essays Dedicated to David Kahn on the Occasion of His 85th Birthday. Springer. pp. 88–108. doi:10.1007/978-3-662-49301-4_6. ISBN 978-3-662-49301-4.
Bernstein, Daniel J.; Lange, Tanja (2017). "Post-quantum cryptography". Nature. 549 (7671): 188–194. Bibcode:2017Natur.549..188B. doi:10.1038/nature23461. PMID 28905891.
Kumar, Manoj; Pattnaik, Pratap (2020). "Post Quantum Cryptography(PQC) - an overview: (Invited Paper)". 2020 IEEE High Performance Extreme Computing Conference (HPEC). pp. 1–9. doi:10.1109/HPEC43674.2020.9286147. ISBN 978-1-7281-9219-2.
Campagna, Matt; LaMacchia, Brian; Ott, David (2021). "Post Quantum Cryptography: Readiness Challenges and the Approaching Storm". Computing Community Consortium. arXiv:2101.01269.
Yalamuri, Gagan; Honnavalli, Prasad; Eswaran, Sivaraman (2022). "A Review of the Present Cryptographic Arsenal to Deal with Post-Quantum Threats". Procedia Computer Science. 215: 834–845. doi:10.1016/j.procs.2022.12.086.
Bavdekar, Ritik; Chopde, Eashan Jayant; Bhatia, Ashutosh; Tiwari, Kamlesh; Daniel, Sandeep Joshua (2022). "Post Quantum Cryptography: Techniques, Challenges, Standardization, and Directions for Future Research". arXiv:2202.02826 [cs.CR].
Joseph, David; Misoczki, Rafael; Manzano, Marc; Tricot, Joe; Pinuaga, Fernando Dominguez; Lacombe, Olivier; Leichenauer, Stefan; Hidary, Jack; Venables, Phil; Hansen, Royal (2022). "Transitioning organizations to post-quantum cryptography". Nature. 605 (7909): 237–243. Bibcode:2022Natur.605..237J. doi:10.1038/s41586-022-04623-2. PMID 35546191.
Richter, Maximilian; Bertram, Magdalena; Seidensticker, Jasper; Tschache, Alexander (2022). "A Mathematical Perspective on Post-Quantum Cryptography". Mathematics. 10 (15): 2579. doi:10.3390/math10152579.
Li, Silong; Chen, Yuxiang; Chen, Lin; Liao, Jing; Kuang, Chanchan; Li, Kuanching; Liang, Wei; Xiong, Naixue (2023). "Post-Quantum Security: Opportunities and Challenges". Sensors. 23 (21): 8744. Bibcode:2023Senso..23.8744L. doi:10.3390/s23218744. PMC 10648643. PMID 37960442.
Dam, Duc-Thuan; Tran, Thai-Ha; Hoang, Van-Phuc; Pham, Cong-Kha; Hoang, Trong-Thuc (2023). "A Survey of Post-Quantum Cryptography: Start of a New Race". Cryptography. 7 (3): 40. doi:10.3390/cryptography7030040.
Bavdekar, Ritik; Jayant Chopde, Eashan; Agrawal, Ankit; Bhatia, Ashutosh; Tiwari, Kamlesh (2023). "Post Quantum Cryptography: A Review of Techniques, Challenges and Standardizations". 2023 International Conference on Information Networking (ICOIN). pp. 146–151. doi:10.1109/ICOIN56518.2023.10048976. ISBN 978-1-6654-6268-6.
Sood, Neerav (2024). "Cryptography in Post Quantum Computing Era". SSRN Electronic Journal. doi:10.2139/ssrn.4705470.
Rawal, Bharat S.; Curry, Peter J. (2024). "Challenges and opportunities on the horizon of post-quantum cryptography". APL Quantum. 1 (2). doi:10.1063/5.0198344.
Bagirovs, Emils; Provodin, Grigory; Sipola, Tuomo; Hautamäki, Jari (2024). "Applications of Post-quantum Cryptography". European Conference on Cyber Warfare and Security. 23 (1): 49–57. arXiv:2406.13258. doi:10.34190/eccws.23.1.2247.
Mamatha, G S; Dimri, Namya; Sinha, Rasha (2024). "Post-Quantum Cryptography: Securing Digital Communication in the Quantum Era". arXiv:2403.11741 [cs.CR].
Singh, Balvinder; Ahateshaam, Md; Lahiri, Abhisweta; Sagar, Anil Kumar (2024). "Future of Cryptography in the Era of Quantum Computing". Innovations in Electrical and Electronic Engineering. Lecture Notes in Electrical Engineering. Vol. 1115. pp. 13–31. doi:10.1007/978-981-99-8661-3_2. ISBN 978-981-99-8660-6.
== External links ==
PQCrypto, the post-quantum cryptography conference
ETSI Quantum Secure Standards Effort
NIST's Post-Quantum crypto Project
PQCrypto Usage & Deployment
ISO 22301:2019 – Security and Resilience in the United States | Wikipedia/Post-quantum_cryptography |
Differential fault analysis (DFA) is a type of active side-channel attack in the field of cryptography, specifically cryptanalysis. The principle is to induce faults—unexpected environmental conditions—into cryptographic operations to reveal their internal states.
== Principles ==
Taking a smartcard containing an embedded processor as an example, some unexpected environmental conditions it could experience include being subjected to high temperature, receiving unsupported supply voltage or current, being excessively overclocked, experiencing strong electric or magnetic fields, or even receiving ionizing radiation to influence the operation of the processor. When stressed like this, the processor may begin to output incorrect results due to physical data corruption, which may help a cryptanalyst deduce the instructions that the processor is running, or what the internal state of its data is.
For DES and Triple DES, about 200 single-flipped bits are necessary to obtain a secret key. DFA has also been applied successfully to the AES cipher.
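The principle can be illustrated with a deliberately tiny example. The sketch below uses a hypothetical one-round cipher built from the 4-bit PRESENT S-box (not DES or AES, and a far simpler fault model than real attacks): the induced fault flips one bit of the S-box input, and comparing correct and faulty outputs eliminates key candidates.

```python
import random

random.seed(42)

# The published 4-bit S-box of the PRESENT cipher, and its inverse.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
INV = [SBOX.index(i) for i in range(16)]

K = 0xA  # the secret round key the attacker wants to recover

def last_round(x, key):
    """Toy final round: S-box lookup, then key addition."""
    return SBOX[x] ^ key

def faulty_pair(x):
    """Correct and faulty outputs; the fault flips one random bit
    of the S-box input (the 'unexpected environmental condition')."""
    e = 1 << random.randrange(4)
    return last_round(x, K), last_round(x ^ e, K)

# DFA: a candidate key is consistent with a (correct, faulty) pair
# iff peeling the key and the S-box leaves a single-bit difference.
candidates = set(range(16))
for _ in range(8):
    c, cf = faulty_pair(random.randrange(16))
    candidates = {k for k in candidates
                  if bin(INV[c ^ k] ^ INV[cf ^ k]).count("1") == 1}

print(sorted(candidates))  # the true key 0xA always survives
```

Each faulty pair discards the keys that would imply a multi-bit input difference; intersecting over several pairs shrinks the candidate set toward the true key.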
Many countermeasures have been proposed to defend from these kinds of attacks. Most of them are based on error detection schemes.
== Fault injection ==
A fault injection attack involves stressing the transistors responsible for encryption tasks to generate faults that will then be used as input for analysis. The stress can be an electromagnetic (EM) pulse or a laser pulse.
Practical fault injection consists of using an electromagnetic probe connected to a pulser or a laser generating a disturbance of a similar length to the processor's cycle time (of the order of a nanosecond). The energy transferred to the chip may be sufficient to burn out certain components of the chip, so the voltage of the pulser (a few hundred volts) and the positioning of the probe must be finely calibrated. For greater precision, the chips are often decapsulated (chemically eroded to expose the bare silicon).
== References == | Wikipedia/Differential_fault_analysis |
Mix networks are routing protocols that create hard-to-trace communications by using a chain of proxy servers known as mixes which take in messages from multiple senders, shuffle them, and send them back out in random order to the next destination (possibly another mix node). This breaks the link between the source of the request and the destination, making it harder for eavesdroppers to trace end-to-end communications. Furthermore, each mix knows only the node from which it immediately received the message and the immediate destination to which it sends the shuffled messages, making the network resistant to malicious mix nodes.
Each message is encrypted to each proxy using public key cryptography; the resulting encryption is layered like a Russian doll (except that each "doll" is of the same size) with the message as the innermost layer. Each proxy server strips off its own layer of encryption to reveal where to send the message next. If all but one of the proxy servers are compromised by the tracer, untraceability can still be achieved against some weaker adversaries.
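The layered ("Russian doll") encryption can be sketched as follows. Real mixes use public-key sealing and fixed-size padding; this toy substitutes a hash-derived XOR stream for the cipher and skips the padding, so it only illustrates the layering and the hop-by-hop peeling:

```python
import hashlib
import json

def seal(key: bytes, data: bytes) -> bytes:
    """Toy self-inverse 'seal': XOR with a SHA-256-derived keystream.
    A stand-in for real public-key encryption, NOT secure."""
    stream = b""
    block = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + block.to_bytes(4, "big")).digest()
        block += 1
    return bytes(d ^ s for d, s in zip(data, stream))

# Route through three mixes; build the onion from the inside out.
hops = [(b"mix1-key", "mix2"), (b"mix2-key", "mix3"), (b"mix3-key", "bob")]
packet = b"hello bob"
for key, next_hop in reversed(hops):
    layer = {"next": next_hop, "payload": packet.hex()}
    packet = seal(key, json.dumps(layer).encode())

# Each mix peels exactly one layer and learns only the next hop.
seen = []
for key, _ in hops:
    layer = json.loads(seal(key, packet))
    seen.append(layer["next"])
    packet = bytes.fromhex(layer["payload"])

print(seen, packet)  # ['mix2', 'mix3', 'bob'] b'hello bob'
```

No single hop ever sees both the sender and the final plaintext, which is the property the layering is designed to provide.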
The concept of a mix "cryptosystem" in the context of electronic mail was first described by David Chaum in 1981, motivated by the traffic analysis problem. Applications that are based on this concept include anonymous remailers (such as Mixmaster), onion routing, garlic routing, and key-based routing (including Tor, I2P, and Freenet). Large-scale implementations of the mix network concept began to emerge in the 2020s, driven by advancements in privacy-preserving technologies and decentralized infrastructure.
== History ==
David Chaum published the concept of "mixes" in 1979 in a paper for his master's degree thesis work, shortly after he was introduced to cryptography through the public-key cryptography work of Martin Hellman, Whitfield Diffie and Ralph Merkle. While public-key cryptography secured the content of communications, Chaum believed there were personal privacy vulnerabilities in the metadata found in communications, including the times messages were sent and received, the sizes of messages, and the address of the original sender. He cites Hellman and Diffie's paper "New Directions in Cryptography" (1976) in his work.
=== 1990s: Cypherpunk Movement ===
Innovators like Ian Goldberg and Adam Back made major contributions to mixnet technology. This era saw significant advancements in cryptographic methods, which were important for the practical implementation of mixnets. Mixnets began to draw attention in academic circles, leading to more research on improving their efficiency and security. However, widespread practical application was still limited, and mixnets stayed largely in the experimental stage. "Cypherpunk remailer" software was developed to make it easier for individuals to send anonymous emails using mixnets.
=== 2000s: Inspiration for Other Anonymous Networks ===
In the 2000s, the increasing concerns about internet privacy highlighted the significance of mix networks (mixnets). This era was marked by the emergence of Tor (The Onion Router) around the mid-2000s. Although Tor was not a straightforward implementation of a mixnet, it drew heavily from David Chaum's foundational ideas, particularly utilizing a form of onion routing akin to mixnet concepts. This period also witnessed the emergence of other systems that incorporated mixnet principles to various extents, all aimed at enhancing secure and anonymous communication.
=== 2010s: Renewed Academic Interest in Mix Networks ===
Entering the 2010s, there was a significant shift towards making mixnets more scalable and efficient. This change was driven by the introduction of new protocols and algorithms, which helped overcome some of the primary challenges that had previously hindered the widespread deployment of mixnets. The relevance of mixnets surged, especially after 2013, following Edward Snowden's disclosures about extensive global surveillance programs. This period saw a renewed focus on mixnets as vital tools for protecting privacy.
The Loopix architecture, introduced in 2017, integrated several pre-existing privacy-enhancing techniques to form a modern mix network design. Key elements of Loopix included:
"Sphinx" packet format, ensuring unlinkability and layered encryption
Poisson-process-based packet transmission, introducing randomness to prevent traffic correlation attacks.
Exponential mixing delays, making traffic analysis more difficult.
Loop-based cover traffic, where dummy packets (placeholder packets that do not contain actual data) are continuously injected to obscure real data flows.
Stratified mix node topology, optimizing anonymity while maintaining network efficiency.
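The Poisson-based transmission and exponential mixing delays can be simulated in a few lines (a sketch of the timing ideas only, with made-up rate parameters, not the Loopix protocol itself):

```python
import random

random.seed(7)

SEND_RATE = 2.0   # client emits packets as a Poisson process (pkts/s)
MEAN_DELAY = 1.0  # mean exponential mixing delay per hop (seconds)

def poisson_send_times(n):
    """Exponentially distributed gaps between consecutive sends
    yield a Poisson process of packet emissions."""
    t, times = 0.0, []
    for _ in range(n):
        t += random.expovariate(SEND_RATE)
        times.append(t)
    return times

def through_mix(arrivals):
    """Each packet is held for an independent exponential delay, so
    the departure order is decoupled from the arrival order."""
    return sorted(t + random.expovariate(1.0 / MEAN_DELAY)
                  for t in arrivals)

arrivals = poisson_send_times(10)
departures = through_mix(arrivals)
print(departures)  # departure times no longer reveal arrival order
```

Because the delays are memoryless, an observer watching both sides of the mix cannot match a departure to a particular arrival from timing alone.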
The rise of blockchain technologies opened new possibilities for scalable decentralized systems, paving the way for large-scale, distributed mix networks.
=== 2020s: First large-scale implementations ===
Throughout the 2020s, various public and private research and development programs contributed to the realization of the first large-scale mix networks. By 2025, multiple projects—including 0KN, HOPR, Katzenpost, Nym, and xx.network (led by David Chaum)—are under active development, aiming to enhance privacy-preserving communication on a broader scale.
== How it works ==
Participant A prepares a message for delivery to participant B by appending a random value R to the message, sealing it with the addressee's public key K_b, appending B's address, and then sealing the result with the mix's public key K_m.
M opens it with its private key, now knows B's address, and sends K_b(message, R) to B.
=== Message format ===
K_m(R1, K_b(R0, message), B) → K_b(R0, message), B
To accomplish this, the sender takes the mix's public key (K_m), and uses it to encrypt an envelope containing a random string (R1), a nested envelope addressed to the recipient, and the email address of the recipient (B). This nested envelope is encrypted with the recipient's public key (K_b), and contains another random string (R0), along with the body of the message being sent. Upon receipt of the encrypted top-level envelope, the mix uses its secret key to open it. Inside, it finds the address of the recipient (B) and an encrypted message bound for B. The random string (R1) is discarded.
R0 is needed in the message in order to prevent an attacker from guessing messages. It is assumed that the attacker can observe all incoming and outgoing messages. If the random string is not used (i.e. only K_b(message) is sent to B) and an attacker has a good guess that the message message′ was sent, he can test whether K_b(message′) = K_b(message) holds, whereby he can learn the content of the message. By appending the random string R0 the attacker is prevented from performing this kind of attack; even if he should guess the correct message (i.e. message′ = message is true) he won't learn if he is right, since he doesn't know the secret value R0. Practically, R0 functions as a salt.
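The guessing attack and its defeat by R0 can be demonstrated with a toy deterministic seal (a hash stands in for a deterministic public-key operation such as textbook RSA; the key names are illustrative):

```python
import hashlib
import secrets

def seal(key: bytes, data: bytes) -> bytes:
    """Toy deterministic sealing: the same key and data always give
    the same output, which is exactly what the attack exploits."""
    return hashlib.sha256(key + data).digest()

kb = b"B-public-key"
message = b"attack at dawn"

# Without R0: the eavesdropper confirms a guess by re-sealing it.
observed = seal(kb, message)
guess_confirmed = seal(kb, b"attack at dawn") == observed

# With R0: an unknown random salt makes the same test useless.
r0 = secrets.token_bytes(16)
observed_salted = seal(kb, r0 + message)
guess_rejected = seal(kb, b"attack at dawn") != observed_salted

print(guess_confirmed, guess_rejected)  # True True
```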
=== Return addresses ===
What is needed now is a way for B to respond to A while still keeping the identity of A secret from B.
A solution is for A to form an untraceable return address K_m(S1, A), K_x, where A is its own real address, K_x is a public one-time key chosen for the current occasion only, and S1 is a key that will also act as a random string for purposes of sealing. Then, A can send this return address to B as part of a message sent by the techniques already described.
B sends K_m(S1, A), K_x(S0, response) to M, and M transforms it to A, S1(K_x(S0, response)).
This mix uses the string of bits S1 that it finds after decrypting the address part K_m(S1, A) as a key to re-encrypt the message part K_x(S0, response). Only the addressee, A, can decrypt the resulting output because A created both S1 and K_x.
The additional key K_x assures that the mix cannot see the content of the reply-message.
The following indicates how B uses this untraceable return address to form a response to A, via a new kind of mix:

The message from A → B:
K_m(R1, K_b(R0, message, K_m(S1, A), K_x), B) → K_b(R0, message, K_m(S1, A), K_x)

Reply message from B → A:
K_m(S1, A), K_x(S0, response) → A, S1(K_x(S0, response))

Where: K_b = B's public key, K_m = the mix's public key.
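The reply round trip can be traced end to end with a toy stand-in cipher (XOR with a hash-derived keystream replaces both the public-key sealing and the S1 re-encryption; the keys and field layout are illustrative):

```python
import hashlib

def seal(key: bytes, data: bytes) -> bytes:
    """Toy self-inverse 'encryption': XOR with a hash keystream."""
    stream = b""
    block = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + block.to_bytes(4, "big")).digest()
        block += 1
    return bytes(d ^ s for d, s in zip(data, stream))

km = b"mix-public-key"    # the mix's key
kx = b"one-time-key"      # A's one-time key for this reply
s1 = b"s1-shared-secret"  # 16 bytes, chosen by A

# A forms the untraceable return address Km(S1, A) and gives it to B.
addr_part = seal(km, s1 + b"|addr-of-A")

# B replies by sending (addr_part, Kx(S0, response)) to the mix.
reply = seal(kx, b"S0|the response")

# The mix opens the address part, learns S1 and A's address, then
# re-encrypts the reply under S1 so B cannot recognise it later.
opened = seal(km, addr_part)
mix_s1, a_addr = opened[:16], opened[17:]
outgoing = seal(mix_s1, reply)  # delivered to a_addr

# A peels S1, then the one-time key Kx, recovering the response.
recovered = seal(kx, seal(s1, outgoing))
print(a_addr, recovered)  # b'addr-of-A' b'S0|the response'
```

Note how the mix never holds a key that decrypts the reply body: it only re-keys the ciphertext under S1, which A alone can undo.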
A destination can reply to a source without sacrificing source anonymity. The reply message shares all of the performance and security benefits with the anonymous messages from source to destination.
== Vulnerabilities ==
Although mix networks provide security even if an adversary is able to view the entire path, mixing is not absolutely perfect. Adversaries can mount long-term correlation attacks and track the sender and receiver of the packets.
=== Threat model ===
An adversary can perform a passive attack by monitoring the traffic to and from the mix network. Analyzing the arrival times between multiple packets can reveal information. Since no changes are actively made to the packets, an attack like this is hard to detect. In the worst case, we assume that all links of the network are observable by the adversary and that the strategies and infrastructure of the mix network are known.
A packet on an input link cannot be correlated to a packet on the output link based on information about the time the packet was received, the size of the packet, or the content of the packet. Packet correlation based on packet timing is prevented by batching and correlation based on content and packet size is prevented by encryption and packet padding, respectively.
The inter-packet interval, that is, the time difference between observations of two consecutive packets on two network links, can be used to infer whether the links carry the same connection. Encryption and padding do not affect the inter-packet intervals belonging to the same IP flow. Sequences of inter-packet intervals vary greatly between connections; for example, in web browsing the traffic occurs in bursts. This fact can be used to identify a connection.
=== Active attack ===
Active attacks can be performed by injecting bursts of packets that contain unique timing signatures into the targeted flow, and then attempting to identify these packets on other network links. The attacker might not be able to create new packets because of the required knowledge of the symmetric keys on all the subsequent mixes. Replayed packets cannot be used either, as they are easily prevented through hashing and caching.
=== Artificial gap ===
Large gaps can be created in the target flow if the attacker drops large volumes of consecutive packets in the flow. For example, in one simulation 3000 packets are sent to the target flow, and the attacker drops packets starting 1 second after the start of the flow. As the number of consecutive packets dropped increases, the effectiveness of defensive dropping decreases significantly. Introducing a large gap will almost always create a recognizable feature.
=== Artificial bursts ===
The attacker can create artificial bursts. This is done by creating a signature from artificial packets: holding them on a link for a certain period of time and then releasing them all at once. Defensive dropping provides no defense in this scenario, and the attacker can identify the target flow. Other defensive measures can be taken to prevent this attack; one such solution is an adaptive padding algorithm. The more the packets are delayed, the easier the injected behavior is to identify, and thus the better the defense.
=== Other time analysis attacks ===
An attacker may also exploit timing attacks other than inter-packet intervals. The attacker can actively modify packet streams to observe the changes caused in the network's behavior. Packets can be corrupted to force re-transmission of TCP packets, behavior that is easily observable and reveals information.
=== Sleeper attack ===
Assume an adversary can see messages being sent to and received from threshold mixes, but cannot see the internal working of those mixes or what each one emits. If the adversary has left their own messages in the respective mixes and receives one of the two, they can determine the message sent and the corresponding sender. The adversary must place their messages (the active component) in the mix, and the messages must remain there before a target message is sent. This is not typically an active attack. Weaker adversaries can use this attack in combination with other attacks to cause further damage.
Mix networks derive security by changing the order of the messages they receive, to avoid creating a significant relation between the incoming and outgoing messages. Mixes create interference between messages, and this interference puts bounds on the rate of information leaked to an observer of the mix. In a mix of size n, an adversary observing input to and output from the mix has an uncertainty of order n in determining a match. A sleeper attack can take advantage of this. In a layered network of threshold mixes with a sleeper in each mix, there is a layer receiving inputs from senders and a second layer of mixes that forward messages to the final destination. From this, the attacker can learn that the received message could not have come from the sender into any layer-1 mix that did not fire. There is a higher probability of matching the sent and received messages with these sleepers, so communication is not completely anonymous. Mixes may also be purely timed: they randomize the order of messages received in a particular interval and forward them at the end of the interval, regardless of what has been received in that interval. Messages that are available for mixing will interfere, but if no messages are available, there is no interference with received messages.
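The batching behavior of a threshold mix described above can be sketched as a toy simulation. Class and parameter names here are illustrative, not drawn from any particular mix implementation:

```python
# Toy threshold mix (illustrative only): collect a batch of n messages,
# shuffle them, and flush the whole batch at once, so that arrival
# order carries no information about departure order.
import random

class ThresholdMix:
    def __init__(self, threshold, rng=None):
        self.threshold = threshold
        self.pool = []
        self.rng = rng or random.Random()

    def receive(self, message):
        """Buffer a message; flush the shuffled batch once full."""
        self.pool.append(message)
        if len(self.pool) == self.threshold:
            batch = self.pool
            self.pool = []
            self.rng.shuffle(batch)  # sever the input/output ordering
            return batch             # the mix "fires": all n leave together
        return None                  # still batching

mix = ThresholdMix(threshold=4, rng=random.Random(7))
out = None
for msg in ["a", "b", "c", "d"]:
    out = mix.receive(msg)
print(out)  # same four messages, order decoupled from arrival order
```

An observer who sees which mixes have fired (and, as a sleeper, which of their own messages came out) can already narrow down the possible senders, which is the leverage the sleeper attack exploits.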
== References ==
The Web Cryptography API is the World Wide Web Consortium's (W3C) recommendation for a low-level interface that would increase the security of web applications by allowing them to perform cryptographic functions without having to access raw keying material. This agnostic API would perform basic cryptographic operations, such as hashing, signature generation and verification, and encryption and decryption, from within a web application.
== Description ==
On 26 January 2017, the W3C released its recommendation for a Web Cryptography API that could perform basic cryptographic operations in web applications. This agnostic API would utilize JavaScript to perform operations that would increase the security of data exchange within web applications. The API would provide a low-level interface to create and/or manage public keys and private keys for hashing, digital signature generation and verification, and encryption and decryption for use with web applications.
The Web Cryptography API could be used for a wide range of uses, including:
Providing authentication for users and services
Electronic signing of documents or code
Protecting the integrity and confidentiality of communication and digital data exchange
Because the Web Cryptography API is agnostic in nature, it can be used on any platform. It would provide a common set of interfaces that would permit web applications and progressive web applications to conduct cryptographic functions without the need to access raw keying material. This would be done with the assistance of the SubtleCrypto interface, which defines a group of methods to perform the above cryptographic operations. Additional interfaces within the Web Cryptography API would allow for key generation, key derivation and key import and export.
== Proposed functionality ==
The W3C’s specification for the Web Cryptography API places focus on the common functionality and features that currently exist between platform-specific and standardized cryptographic APIs versus those that are known to just a few implementations. The group’s recommendation for the use of the Web Cryptography API does not dictate that a mandatory set of algorithms must be implemented. This is because of the awareness that cryptographic implementations will vary amongst conforming user agents because of government regulations, local policies, security practices and intellectual property concerns.
There are many types of existing web applications that the Web Cryptography API would be well suited for use with.
=== Multi-factor authentication ===
Today multi-factor authentication is considered one of the most reliable methods for verifying the identity of a user of a web application, such as online banking. Many web applications currently depend on this authentication method to protect both the user and the user agent. With the Web Cryptography API, a web application would have the ability to provide authentication from within itself, instead of having to rely on transport-layer authentication of secret keying material, to authenticate user access. This process would provide a richer experience for the user.
The Web Cryptography API would allow the application to locate suitable client keys that were previously created by the user agent or had been pre-provisioned by the web application. The application would be able to give the user agent the ability to either generate a new key or re-use an existing key in the event the user does not have a key already associated with their account. By binding this process to the Transport Layer Security that the user is authenticating through, the multi-factor authentication process can be additionally strengthened by the derivation of a key that is based on the underlying transport.
=== Protected document exchange ===
The API can be used to protect sensitive or confidential documents from unauthorized viewing from within a web application, even if they have been previously securely received. The web application would use the Web Cryptography API to encrypt the document with a secret key and then wrap it with public keys that have been associated with users who are authorized to view the document. Upon navigating to the web application, the authorized user would receive the document that had been encrypted and would be instructed to use their private key to begin the unwrapping process that would allow them to decrypt and view the document.
=== Cloud storage ===
Many businesses and individuals rely on cloud storage. For protection, a remote service provider might want their web application to give users the ability to protect their confidential documents or other data before uploading them. The Web Cryptography API would allow users to:
Choose to select a private or secret key
Derive an encryption key from their key if they wish
Encrypt their document/data
Upload their encrypted document/data using the service provider’s existing APIs
=== Electronic document signing ===
The ability to electronically sign documents saves time, enhances the security of important documents and can serve as legal proof of a user’s acceptance of a document. Many web applications choose to accept electronic signatures instead of requiring written signatures. With the Web Cryptography API, a user would be prompted to choose a key that could be generated or pre-provisioned specifically for the web application. The key could then be used during the signing operation.
=== Protecting data integrity ===
Web applications often cache data locally, which puts the data at risk for compromise if an offline attack were to occur. The Web Cryptography API permits the web application to use a public key deployed from within itself to verify the integrity of the data cache.
=== Secure messaging ===
The Web Cryptography API can enhance the security of messaging for use in off-the-record (OTR) and other types of message-signing schemes through the use of key agreement. The message sender and intended recipient would negotiate shared encryption and message authentication code (MAC) keys to encrypt and decrypt messages to prevent unauthorized access.
=== JSON Object Signing and Encryption (JOSE) ===
The Web Cryptography API can be used by web applications to interact with message formats and structures that are defined by the JOSE Working Group. The application can read and import JSON Web Key (JWK) keys, validate messages that have been protected through electronic signing or MAC keys, and decrypt JWE messages.
== Conformance to the Web Cryptography API ==
The W3C recommends that vendors avoid using vendor-specific proprietary extensions with specifications for the Web Cryptography API. This is because it could reduce the interoperability of the API and break up the user base since not all users would be able to access the particular content. It is recommended that when a vendor-specific extension cannot be avoided, the vendor should prefix it with vendor-specific strings to prevent clashes with future generations of the API’s specifications.
== References ==
== External links ==
Official website
Web Crypto API on MDN Web Docs
In cryptography, impossible differential cryptanalysis is a form of differential cryptanalysis for block ciphers. While ordinary differential cryptanalysis tracks differences that propagate through the cipher with greater than expected probability, impossible differential cryptanalysis exploits differences that are impossible (having probability 0) at some intermediate state of the cipher algorithm.
Lars Knudsen appears to be the first to use a form of this attack, in the 1998 paper where he introduced his AES candidate, DEAL. The first presentation to attract the attention of the cryptographic community was later the same year at the rump session of CRYPTO '98, in which Eli Biham, Alex Biryukov, and Adi Shamir introduced the name "impossible differential" and used the technique to break 4.5 out of 8.5 rounds of IDEA and 31 out of 32 rounds of the NSA-designed cipher Skipjack. This development led cryptographer Bruce Schneier to speculate that the NSA had no previous knowledge of impossible differential cryptanalysis. The technique has since been applied to many other ciphers: Khufu and Khafre, E2, variants of Serpent, MARS, Twofish, Rijndael (AES), CRYPTON, Zodiac, Hierocrypt-3, TEA, XTEA, Mini-AES, ARIA, Camellia, and SHACAL-2.
Biham, Biryukov and Shamir also presented a relatively efficient specialized method for finding impossible differentials that they called a miss-in-the-middle attack. This consists of finding "two events with probability one, whose conditions cannot be met together."
== References ==
== Further reading ==
Orr Dunkelman (March 1999). An Analysis of Serpent-p and Serpent-p-ns (PDF/PostScript). Rump session, 2nd AES Candidate Conference. Rome: NIST. Retrieved 2007-02-27.
E. Biham; A. Biryukov; A. Shamir (May 1999). Cryptanalysis of Skipjack Reduced to 31 Rounds using Impossible Differentials (PDF/PostScript). Advances in Cryptology – EUROCRYPT '99. Prague: Springer-Verlag. pp. 12–23. Retrieved 2007-02-13.
Kazumaro Aoki; Masayuki Kanda (1999). "Search for Impossible Differential of E2" (PDF/PostScript). Retrieved 2007-02-27. {{cite journal}}: Cite journal requires |journal= (help)
Eli Biham, Vladimir Furman (April 2000). Impossible Differential on 8-Round MARS' Core (PDF/PostScript). 3rd AES Candidate Conference. pp. 186–194. Retrieved 2007-02-27.
Eli Biham; Vladimir Furman (December 2000). Improved Impossible Differentials on Twofish (PDF/PostScript). INDOCRYPT 2000. Calcutta: Springer-Verlag. pp. 80–92. Retrieved 2007-02-27.
Deukjo Hong; Jaechul Sung; Shiho Moriai; Sangjin Lee; Jongin Lim (April 2001). Impossible Differential Cryptanalysis of Zodiac. 8th International Workshop on Fast Software Encryption (FSE 2001). Yokohama: Springer-Verlag. pp. 300–311. Archived from the original (PDF) on 2007-12-13. Retrieved 2006-12-30.
Raphael C.-W. Phan; Mohammad Umar Siddiqi (July 2001). "Generalised Impossible Differentials of Advanced Encryption Standard". Electronics Letters. 37 (14): 896–898. Bibcode:2001ElL....37..896P. doi:10.1049/el:20010619.
Jung Hee Cheon, MunJu Kim, and Kwangjo Kim (September 2001). Impossible Differential Cryptanalysis of Hierocrypt-3 Reduced to 3 Rounds (PDF). Proceedings of 2nd NESSIE Workshop. Retrieved 2007-02-27.{{cite conference}}: CS1 maint: multiple names: authors list (link)
Jung Hee Cheon; MunJu Kim; Kwangjo Kim; Jung-Yeun Lee; SungWoo Kang (December 26, 2001). Improved Impossible Differential Cryptanalysis of Rijndael and Crypton. 4th International Conference on Information Security and Cryptology (ICISC 2001). Seoul: Springer-Verlag. pp. 39–49. CiteSeerX 10.1.1.15.9966.
Dukjae Moon; Kyungdeok Hwang; Wonil Lee; Sangjin Lee; AND Jongin Lim (February 2002). Impossible Differential Cryptanalysis of Reduced Round XTEA and TEA (PDF). 9th International Workshop on Fast Software Encryption (FSE 2002). Leuven: Springer-Verlag. pp. 49–60. Retrieved 2007-02-27.
Raphael C.-W. Phan (May 2002). "Classes of Impossible Differentials of Advanced Encryption Standard". Electronics Letters. 38 (11): 508–510. Bibcode:2002ElL....38..508P. doi:10.1049/el:20020347.
Raphael C.-W. Phan (October 2003). "Impossible Differential Cryptanalysis of Mini-AES" (PDF). Cryptologia. XXVII (4): 283–292. doi:10.1080/0161-110391891964. ISSN 0161-1194. S2CID 2658902. Archived from the original (PDF) on 2007-09-26. Retrieved 2007-02-27.
Raphael C.-W. Phan (July 2004). "Impossible Differential Cryptanalysis of 7-round AES". Information Processing Letters. 91 (1): 29–32. doi:10.1016/j.ipl.2004.03.006. Retrieved 2007-07-19.
Wenling Wu; Wentao Zhang; Dengguo Feng (2006). "Impossible Differential Cryptanalysis of ARIA and Camellia" (PDF). Retrieved 2007-02-27. {{cite journal}}: Cite journal requires |journal= (help)
PMAC, which stands for parallelizable MAC, is a message authentication code algorithm. It was created by Phillip Rogaway.
PMAC is a method of taking a block cipher and creating an efficient message authentication code that is reducible in security to the underlying block cipher.
PMAC is similar in functionality to the OMAC algorithm.
== Patents ==
PMAC is no longer patented and can be used royalty-free. It was originally patented by Phillip Rogaway, but he has since abandoned his patent filings.
== References ==
== External links ==
Phil Rogaway's page on PMAC
Changhoon Lee, Jongsung Kim, Jaechul Sung, Seokhie Hong, Sangjin Lee. "Forgery and Key Recovery Attacks on PMAC and Mitchell's TMAC Variant", 2006. [1] (ps)
Rust implementation
The Data Authentication Algorithm (DAA) is a former U.S. government standard for producing cryptographic message authentication codes. DAA is defined in FIPS PUB 113, which was withdrawn on September 1, 2008. The algorithm is not considered secure by today's standards.
According to the standard, a code produced by the DAA is called a Data Authentication Code (DAC). The algorithm encrypts the data in cipher block chaining mode, and the last cipher block, truncated, is used as the DAC.
The DAA is equivalent to ISO/IEC 9797-1 MAC algorithm 1, or CBC-MAC, with DES as the underlying cipher, truncated to between 24 and 56 bits (inclusive).
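The chain-encrypt-and-truncate construction can be sketched as follows. Python's standard library has no DES, so a keyed-hash stand-in plays the role of the 8-byte block cipher; this shows only the CBC-MAC structure, not an interoperable DAA implementation:

```python
# Structure of the DAC computation (CBC-MAC) with a stand-in block
# cipher: a truncated keyed hash models E_K on 8-byte blocks. This is
# NOT DES and NOT secure; it only illustrates the chaining/truncation.
import hashlib

BLOCK = 8  # DES block size in bytes

def toy_encrypt(key: bytes, block: bytes) -> bytes:
    # Stand-in for DES encryption of one 8-byte block.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def cbc_mac(key: bytes, data: bytes, mac_bits: int = 32) -> bytes:
    assert 24 <= mac_bits <= 56 and mac_bits % 8 == 0
    # Zero-pad the message to a multiple of the block size.
    if len(data) % BLOCK:
        data += b"\x00" * (BLOCK - len(data) % BLOCK)
    state = b"\x00" * BLOCK  # zero IV, as in CBC-MAC
    for i in range(0, len(data), BLOCK):
        # XOR the next message block into the chain, then encrypt.
        xored = bytes(a ^ b for a, b in zip(state, data[i:i + BLOCK]))
        state = toy_encrypt(key, xored)
    # The DAC is the leftmost bits of the final cipher block.
    return state[: mac_bits // 8]

dac = cbc_mac(b"8bytekey", b"Attack at dawn")
print(dac.hex(), len(dac))
```

Because only the final, truncated cipher block is output, every bit of the message influences the DAC through the chain of encryptions.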
== Sources ==
In cryptography, an SP-network, or substitution–permutation network (SPN), is a series of linked mathematical operations used in block cipher algorithms such as AES (Rijndael), 3-Way, Kalyna, Kuznyechik, PRESENT, SAFER, SHARK, and Square.
Such a network takes a block of the plaintext and the key as inputs, and applies several alternating rounds or layers of substitution boxes (S-boxes) and permutation boxes (P-boxes) to produce the ciphertext block. The S-boxes and P-boxes transform (sub-)blocks of input bits into output bits. It is common for these transformations to be operations that are efficient to perform in hardware, such as exclusive or (XOR) and bitwise rotation. The key is introduced in each round, usually in the form of "round keys" derived from it. (In some designs, the S-boxes themselves depend on the key.)
Decryption is done by simply reversing the process (using the inverses of the S-boxes and P-boxes and applying the round keys in reversed order).
== Components ==
An S-box substitutes a small block of bits (the input of the S-box) by another block of bits (the output of the S-box). This substitution should be one-to-one, to ensure invertibility (hence decryption). In particular, the length of the output should be the same as the length of the input (the picture on the right has S-boxes with 4 input and 4 output bits), which is different from S-boxes in general that could also change the length, as in Data Encryption Standard (DES), for example. An S-box is usually not simply a permutation of the bits. Rather, in a good S-box each output bit will be affected by every input bit. More precisely, in a good S-box each output bit will be changed with 50% probability by every input bit. Since each output bit changes with 50% probability, about half of the output bits will actually change with an input bit change (cf. strict avalanche criterion).
A P-box is a permutation of all the bits: it takes the outputs of all the S-boxes of one round, permutes the bits, and feeds them into the S-boxes of the next round. A good P-box has the property that the output bits of any S-box are distributed to as many S-box inputs as possible.
At each round, the round key (obtained from the key with some simple operations, for instance, using S-boxes and P-boxes) is combined using some group operation, typically XOR.
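A minimal toy round combining these three components might look like the following. The S-box and permutation tables are illustrative values chosen for this sketch, not taken from any standardized cipher:

```python
# Toy SPN round on a 16-bit block: XOR in a round key, apply four
# 4-bit S-boxes in parallel, then permute the 16 bits.
SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]  # a bijection on 4 bits
# Spread each S-box's output bits across all four S-boxes of the next round.
PERM = [0, 4, 8, 12, 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15]

def substitute(block: int) -> int:
    out = 0
    for i in range(4):                    # four 4-bit S-boxes side by side
        nibble = (block >> (4 * i)) & 0xF
        out |= SBOX[nibble] << (4 * i)
    return out

def permute(block: int) -> int:
    out = 0
    for src, dst in enumerate(PERM):      # move bit src to position dst
        out |= ((block >> src) & 1) << dst
    return out

def round_fn(block: int, round_key: int) -> int:
    # One round: key mixing, substitution layer, permutation layer.
    return permute(substitute(block ^ round_key))

state = round_fn(0x1234, round_key=0xBEEF)
print(f"{state:04x}")
```

Iterating `round_fn` with different round keys gives the alternating substitution/permutation structure described above; because `SBOX` is a bijection and `PERM` is a bit permutation, each round is invertible.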
== Properties ==
A single typical S-box or a single P-box alone does not have much cryptographic strength: an S-box could be thought of as a substitution cipher, while a P-box could be thought of as a transposition cipher. However, a well-designed SP network with several alternating rounds of S- and P-boxes already satisfies Shannon's confusion and diffusion properties:
The reason for diffusion is the following: If one changes one bit of the plaintext, then it is fed into an S-box, whose output will change at several bits, then all these changes are distributed by the P-box among several S-boxes, hence the outputs of all of these S-boxes are again changed at several bits, and so on. Doing several rounds, each bit changes several times back and forth, therefore, by the end, the ciphertext has changed completely, in a pseudorandom manner. In particular, for a randomly chosen input block, if one flips the i-th bit, then the probability that the j-th output bit will change is approximately a half, for any i and j, which is the strict avalanche criterion. Vice versa, if one changes one bit of the ciphertext, then attempts to decrypt it, the result is a message completely different from the original plaintext—SP ciphers are not easily malleable.
The reason for confusion is exactly the same as for diffusion: changing one bit of the key changes several of the round keys, and every change in every round key diffuses over all the bits, changing the ciphertext in a very complex manner.
If an attacker somehow obtains one plaintext corresponding to one ciphertext—a known-plaintext attack, or worse, a chosen plaintext or chosen-ciphertext attack—the confusion and diffusion make it difficult for the attacker to recover the key.
== Performance ==
Although a Feistel network that uses S-boxes (such as DES) is quite similar to an SP network, there are some differences that make one or the other more applicable in certain situations. For a given amount of confusion and diffusion, an SP network has more "inherent parallelism" and so, given a CPU with many execution units, can be computed faster than a Feistel network. CPUs with few execution units, such as most smart cards, cannot take advantage of this inherent parallelism. SP ciphers also require their S-boxes to be invertible (to perform decryption); Feistel inner functions have no such restriction and can be constructed as one-way functions.
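The last point can be illustrated with a toy Feistel network whose inner function F is a truncated hash, clearly one-way and non-invertible, yet the cipher still decrypts. This is a sketch for illustration, not a real cipher:

```python
# Toy 8-byte Feistel cipher. The inner function F is built from a
# truncated SHA-256 (not a bijection, cannot be inverted), yet the
# cipher decrypts, because each round only XORs F's output into one half.
import hashlib

def F(half: bytes, round_key: bytes) -> bytes:
    # One-way inner function: truncated keyed hash.
    return hashlib.sha256(round_key + half).digest()[:len(half)]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(block: bytes, round_keys) -> bytes:
    L, R = block[:4], block[4:]
    for k in round_keys:
        L, R = R, xor(L, F(R, k))   # swap halves, XOR F into one half
    return L + R

def decrypt(block: bytes, round_keys) -> bytes:
    L, R = block[:4], block[4:]
    for k in reversed(round_keys):  # undo the rounds in reverse order
        L, R = xor(R, F(L, k)), L
    return L + R

keys = [b"k1", b"k2", b"k3", b"k4"]
ct = encrypt(b"8bytemsg", keys)
print(decrypt(ct, keys) == b"8bytemsg")
```

Decryption only needs to recompute F on the same inputs and XOR again, so F itself never has to be inverted; an SP network, by contrast, must run each S-box backwards.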
== See also ==
Feistel network
Product cipher
Square (cipher)
International Data Encryption Algorithm
== References ==
== Further reading ==
Katz, Jonathan; Lindell, Yehuda (2007). Introduction to Modern Cryptography. CRC Press. ISBN 9781584885511.
Stinson, Douglas R. (2006). Cryptography. Theory and Practice (Third ed.). Chapman & Hall/CRC. ISBN 1584885084.
In cryptography, a pseudorandom function family, abbreviated PRF, is a collection of efficiently-computable functions which emulate a random oracle in the following way: no efficient algorithm can distinguish (with significant advantage) between a function chosen randomly from the PRF family and a random oracle (a function whose outputs are fixed completely at random). Pseudorandom functions are vital tools in the construction of cryptographic primitives, especially secure encryption schemes.
Pseudorandom functions are not to be confused with pseudorandom generators (PRGs). The guarantee of a PRG is that a single output appears random if the input was chosen at random. On the other hand, the guarantee of a PRF is that all its outputs appear random, regardless of how the corresponding inputs were chosen, as long as the function was drawn at random from the PRF family.
A pseudorandom function family can be constructed from any pseudorandom generator, using, for example, the "GGM" construction given by Goldreich, Goldwasser, and Micali. While in practice, block ciphers are used in most instances where a pseudorandom function is needed, they do not, in general, constitute a pseudorandom function family, as block ciphers such as AES are defined for only limited numbers of input and key sizes.
== Motivations from random functions ==
A PRF is an efficient (i.e. computable in polynomial time), deterministic function that maps two distinct sets (domain and range) and looks like a truly random function.
Essentially, a truly random function would just be composed of a lookup table filled with uniformly distributed random entries. However, in practice, a PRF is given an input string in the domain and a hidden random seed and runs multiple times with the same input string and seed, always returning the same value. Nonetheless, given an arbitrary input string, the output looks random if the seed is taken from a uniform distribution.
A PRF is considered to be good if its behavior is indistinguishable from a truly random function. Therefore, given an output from either the truly random function or a PRF, there should be no efficient method to correctly determine whether the output was produced by the truly random function or the PRF.
== Formal definition ==
Pseudorandom functions take inputs x ∈ {0, 1}*, where * is the Kleene star. Both the input size I = |x| and the output size λ depend only on the index size n := |s|.
A family of functions, f_s : {0, 1}^I(n) → {0, 1}^λ(n), is pseudorandom if the following conditions are satisfied:
There exists a polynomial-time algorithm that computes f_s(x) given any s and x.
Let F_n be the distribution of functions f_s where s is uniformly distributed over {0, 1}^n, and let RF_n denote the uniform distribution over the set of all functions from {0, 1}^I(n) to {0, 1}^λ(n). Then we require that F_n and RF_n be computationally indistinguishable, where n is the security parameter. That is, for any adversary that can query the oracle of a function sampled from either F_n or RF_n, the advantage with which she can tell apart which kind of oracle is given to her is negligible in n.
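In practice, keyed constructions such as HMAC-SHA256 are commonly modeled (heuristically, not provably) as a PRF family indexed by the key. A short sketch of this view:

```python
# HMAC-SHA256 keyed by s, commonly modeled as a member f_s of a PRF
# family: deterministic per (key, input), but its outputs look random
# to anyone who does not know the key.
import hmac
import hashlib

def f(s: bytes, x: bytes) -> bytes:
    """Evaluate the (conjectured) PRF family member f_s at input x."""
    return hmac.new(s, x, hashlib.sha256).digest()

seed = b"secret-seed"
# Same seed and input -> same output: a function, not a random process.
assert f(seed, b"hello") == f(seed, b"hello")
# A one-bit change to the input or the seed gives an unrelated-looking output.
print(f(seed, b"hello").hex()[:16])
print(f(seed, b"hellp").hex()[:16])
print(f(b"other-seed", b"hello").hex()[:16])
```

Here the key plays the role of the index s: fixing it selects one function from the family, and the distinguishing game asks whether oracle access to that function can be told apart from a truly random function.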
== Oblivious pseudorandom functions ==
In an oblivious pseudorandom function, abbreviated OPRF, information is concealed from the two parties involved in evaluating a PRF. Alice cryptographically hashes her secret value and blinds the hash to produce the message she sends to Bob; Bob mixes in his secret value and gives the result back to Alice, who unblinds it to obtain the final output. Bob is not able to see either Alice's secret value or the final output, and Alice is not able to see Bob's secret input, but Alice does see the final output, which is a PRF of the two inputs: Alice's secret and Bob's secret. This enables transactions of sensitive cryptographic information to be secure even between untrusted parties.
An OPRF is used in some implementations of password-authenticated key agreement.
An OPRF is used in the Password Monitor functionality in Microsoft Edge.
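A toy blinded-exponentiation OPRF in the spirit of the description above might look like the following. The group parameters and secrets are tiny illustrative values, far too small to be secure:

```python
# Toy OPRF via blinded exponentiation in a small group. The parties
# jointly compute H(x)^k mod p, where x is Alice's input and k is
# Bob's key: Bob never sees x, Alice never sees k.
import hashlib

p = 2039  # safe prime: p = 2q + 1 (illustrative, insecurely small)
q = 1019  # prime order of the quadratic-residue subgroup

def hash_to_group(x: bytes) -> int:
    # Map the input to a nonzero element, then square to land in the
    # order-q subgroup.
    base = int.from_bytes(hashlib.sha256(x).digest(), "big") % (p - 1) + 1
    return pow(base, 2, p)

# --- Alice: has input x, picks a blinding exponent r ---
x = b"alice's secret input"
r = 777                                 # would be random in [1, q-1]
blinded = pow(hash_to_group(x), r, p)   # H(x)^r hides x from Bob

# --- Bob: has PRF key k, sees only the blinded element ---
k = 1234 % q
evaluated = pow(blinded, k, p)          # (H(x)^r)^k

# --- Alice: unblinds with r^{-1} mod q ---
r_inv = pow(r, -1, q)
output = pow(evaluated, r_inv, p)       # = H(x)^k, the OPRF output

assert output == pow(hash_to_group(x), k, p)  # matches direct evaluation
print(output)
```

The unblinding works because the group element has order dividing q, so exponentiating by r and then by r^{-1} mod q cancels out, leaving exactly H(x)^k.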
== Application ==
PRFs can be used for:
dynamic perfect hashing; even if the adversary can change the key distribution depending on the values the hashing function has assigned to the previous keys, the adversary cannot force collisions.
Constructing deterministic, memoryless authentication schemes (message authentication code based) which are provably secure against chosen message attack.
Distributing unforgeable ID numbers, which can be locally verified by stations that contain only a small amount of storage.
Constructing identification friend or foe systems.
== See also ==
Pseudorandom permutation
== Notes ==
== References ==
Goldreich, Oded (2001). Foundations of Cryptography: Basic Tools. Cambridge: Cambridge University Press. ISBN 978-0-511-54689-1.
Pass, Rafael, A Course in Cryptography (PDF), retrieved 22 December 2015
The NIST hash function competition was an open competition held by the US National Institute of Standards and Technology (NIST) to develop a new hash function called SHA-3 to complement the older SHA-1 and SHA-2. The competition was formally announced in the Federal Register on November 2, 2007. "NIST is initiating an effort to develop one or more additional hash algorithms through a public competition, similar to the development process for the Advanced Encryption Standard (AES)." The competition ended on October 2, 2012, when NIST announced that Keccak would be the new SHA-3 hash algorithm.
The winning hash function has been published as NIST FIPS 202 the "SHA-3 Standard", to complement FIPS 180-4, the Secure Hash Standard.
The NIST competition has inspired other competitions such as the Password Hashing Competition.
== Process ==
Submissions were due October 31, 2008 and the list of candidates accepted for the first round was published on December 9, 2008. NIST held a conference in late February 2009 where submitters presented their algorithms and NIST officials discussed criteria for narrowing down the field of candidates for Round 2. The list of 14 candidates accepted to Round 2 was published on July 24, 2009. Another conference was held on August 23–24, 2010 (after CRYPTO 2010) at the University of California, Santa Barbara, where the second-round candidates were discussed. The announcement of the final round candidates occurred on December 10, 2010. On October 2, 2012, NIST announced its winner, choosing Keccak, created by Guido Bertoni, Joan Daemen, and Gilles Van Assche of STMicroelectronics and Michaël Peeters of NXP.
== Entrants ==
This is an incomplete list of known submissions.
NIST selected 51 entries for round 1. 14 of them advanced to round 2, from which 5 finalists were selected.
=== Winner ===
The winner was announced to be Keccak on October 2, 2012.
=== Finalists ===
NIST selected five SHA-3 candidate algorithms to advance to the third (and final) round:
BLAKE (Aumasson et al.)
Grøstl (Knudsen et al.)
JH (Hongjun Wu)
Keccak (Keccak team, Daemen et al.)
Skein (Schneier et al.)
NIST noted some factors that figured into its selection as it announced the finalists:
Performance: "A couple of algorithms were wounded or eliminated by very large [hardware gate] area requirement – it seemed that the area they required precluded their use in too much of the potential application space."
Security: "We preferred to be conservative about security, and in some cases did not select algorithms with exceptional performance, largely because something about them made us 'nervous,' even though we knew of no clear attack against the full algorithm."
Analysis: "NIST eliminated several algorithms because of the extent of their second-round tweaks or because of a relative lack of reported cryptanalysis – either tended to create the suspicion that the design might not yet be fully tested and mature."
Diversity: The finalists included hashes based on different modes of operation, including the HAIFA and sponge function constructions, and with different internal structures, including ones based on AES, bitslicing, and alternating XOR with addition.
NIST has released a report explaining its evaluation algorithm-by-algorithm.
=== Did not pass to final round ===
The following hash function submissions were accepted for round two, but did not make it to the final round. As noted in the announcement of the finalists, "none of these candidates was clearly broken".
=== Did not pass to round two ===
The following hash function submissions were accepted for round one but did not pass to round two. They have neither been conceded by the submitters nor have had substantial cryptographic weaknesses. However, most of them have some weaknesses in the design components, or performance issues.
==== Entrants with substantial weaknesses ====
The following non-conceded round one entrants have had substantial cryptographic weaknesses announced:
==== Conceded entrants ====
The following round one entrants have been officially retracted from the competition by their submitters; they are considered broken according to the NIST official round one candidates web site. As such, they are withdrawn from the competition.
=== Rejected entrants ===
Several submissions received by NIST were not accepted as first-round candidates, following an internal review by NIST. In general, NIST gave no details as to why each was rejected. NIST also has not given a comprehensive list of rejected algorithms; there are known to be 13, but only the following are public.
== See also ==
Advanced Encryption Standard process
CAESAR Competition – Competition to design authenticated encryption schemes
Post-Quantum Cryptography Standardization
== References ==
== External links ==
NIST website for competition
Official list of second round candidates
Official list of first round candidates
SHA-3 Zoo
Classification of the SHA-3 Candidates
Hash Function Lounge
VHDL source code developed by the Cryptographic Engineering Research Group (CERG) at George Mason University
FIPS 202 – The SHA-3 Standard | Wikipedia/NIST_hash_function_competition |
Cryptographic primitives are well-established, low-level cryptographic algorithms that are frequently used to build cryptographic protocols for computer security systems. These routines include, but are not limited to, one-way hash functions and encryption functions.
== Rationale ==
When creating cryptographic systems, designers use cryptographic primitives as their most basic building blocks. Because of this, cryptographic primitives are designed to do one very specific task in a precisely defined and highly reliable fashion.
Since cryptographic primitives are used as building blocks, they must be very reliable, i.e. perform according to their specification. For example, if an encryption routine claims to be only breakable with X number of computer operations, and it is broken with significantly fewer than X operations, then that cryptographic primitive has failed. If a cryptographic primitive is found to fail, almost every protocol that uses it becomes vulnerable. Since creating cryptographic routines is very hard, and testing them to be reliable takes a long time, it is essentially never sensible (nor secure) to design a new cryptographic primitive to suit the needs of a new cryptographic system. The reasons include:
The designer might not be competent in the mathematical and practical considerations involved in cryptographic primitives.
Designing a new cryptographic primitive is very time-consuming and very error-prone, even for experts in the field.
Since algorithms in this field are not only required to be designed well but also need to be tested well by the cryptologist community, even if a cryptographic routine looks good from a design point of view it might still contain errors. Successfully withstanding such scrutiny gives some confidence (in fact, so far, the only confidence) that the algorithm is indeed secure enough to use; security proofs for cryptographic primitives are generally not available.
Cryptographic primitives are one of the building blocks of every cryptosystem, e.g., TLS, SSL, SSH, etc. Cryptosystem designers, not being in a position to definitively prove their security, must take the primitives they use as secure. Choosing the best primitive available for use in a protocol usually provides the best available security. However, compositional weaknesses are possible in any cryptosystem and it is the responsibility of the designer(s) to avoid them.
== Combining cryptographic primitives ==
Cryptographic primitives are not cryptographic systems, as they are quite limited on their own. For example, a bare encryption algorithm will provide no authentication mechanism, nor any explicit message integrity checking. Only when combined in security protocols can more than one security requirement be addressed. For example, to transmit a message that is not only encrypted but also protected from tampering (i.e. it is confidential and integrity-protected), an encryption routine, such as DES, and a hash routine such as SHA-1 can be used in combination. If the attacker does not know the encryption key, they cannot modify the message such that the message digest value(s) would be valid.
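As a sketch of such a combination (a hypothetical encrypt-then-MAC construction: a toy SHA-256-based stream cipher stands in for DES, and HMAC-SHA-256 for the bare hash; illustration only, not production cryptography):

```python
import hashlib, hmac, secrets

def keystream(key, nonce, length):
    """Toy stream cipher: SHA-256 in counter mode. Illustration only;
    use a vetted AEAD mode (e.g. AES-GCM) in real systems."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(enc_key, mac_key, nonce, message):
    """Encrypt-then-MAC: confidentiality from the cipher, integrity from HMAC."""
    ct = bytes(m ^ k for m, k in zip(message, keystream(enc_key, nonce, len(message))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return ct, tag

def open_sealed(enc_key, mac_key, nonce, ct, tag):
    """Verify the MAC before decrypting; reject any tampered ciphertext."""
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("MAC check failed: message was modified")
    return bytes(c ^ k for c, k in zip(ct, keystream(enc_key, nonce, len(ct))))

ek, mk, nonce = secrets.token_bytes(32), secrets.token_bytes(32), secrets.token_bytes(16)
ct, tag = seal(ek, mk, nonce, b"attack at dawn")
print(open_sealed(ek, mk, nonce, ct, tag))   # b'attack at dawn'
```

Flipping even one bit of the ciphertext changes the expected HMAC tag, so the receiver rejects the message before decrypting, which is exactly the integrity property the bare cipher could not provide.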
Combining cryptographic primitives to make a security protocol is itself an entire specialization. Most exploitable errors (i.e., insecurities in cryptosystems) are due not to design errors in the primitives (assuming always that they were chosen with care), but to the way they are used, i.e. bad protocol design and buggy or insufficiently careful implementation. Mathematical analysis of protocols is, at the time of this writing, not mature. There are some basic properties that can be verified with automated methods, such as BAN logic. There are even methods for full verification (e.g. the SPI calculus) but they are extremely cumbersome and cannot be automated. Protocol design is an art requiring deep knowledge and much practice; even then mistakes are common. An illustrative example, for a real system, can be seen on the OpenSSL vulnerability news page.
== Commonly used primitives ==
One-way hash function, sometimes also called a one-way compression function—compute a reduced hash value for a message (e.g., SHA-256)
Symmetric key cryptography—compute a ciphertext decodable with the same key used to encode (e.g., AES)
Public-key cryptography—compute a ciphertext decodable with a different key used to encode (e.g., RSA)
Digital signatures—confirm the author of a message
Mix network—pool communications from many users to anonymize what came from whom
Private information retrieval—get database information without server knowing which item was requested
Commitment scheme—allows one to commit to a chosen value while keeping it hidden to others, with the ability to reveal it later
Cryptographically secure pseudorandom number generator
Non-interactive zero-knowledge proof
== See also ==
Category:Cryptographic primitives – a list of cryptographic primitives
Cryptographic agility
Distributed point function
== References ==
Levente Buttyán, István Vajda : Kriptográfia és alkalmazásai (Cryptography and its applications), Typotex 2004, ISBN 963-9548-13-8
Menezes, Alfred J : Handbook of applied cryptography, CRC Press, ISBN 0-8493-8523-7, October 1996, 816 pages.
Crypto101 is an introductory course on cryptography, freely available for programmers of all ages and skill levels. | Wikipedia/Cryptographic_primitive |
This article summarizes publicly known attacks against cryptographic hash functions. Note that not all entries may be up to date. For a summary of other hash function parameters, see comparison of cryptographic hash functions.
== Table color key ==
== Common hash functions ==
=== Collision resistance ===
=== Chosen prefix collision attack ===
=== Preimage resistance ===
=== Length extension ===
Vulnerable: MD5, SHA1, SHA256, SHA512
Not vulnerable: SHA384, SHA-3, BLAKE2
== Less-common hash functions ==
=== Collision resistance ===
=== Preimage resistance ===
== Attacks on hashed passwords ==
Hashes described here are designed for fast computation and have roughly similar speeds. Because most users typically choose short passwords formed in predictable ways, passwords can often be recovered from their hashed value if a fast hash is used. Searches on the order of 100 billion tests per second are possible with high-end graphics processors.
Special hashes called key derivation functions have been created to slow brute force searches. These include pbkdf2, bcrypt, scrypt, argon2, and balloon.
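Python's standard library exposes one of these directly; the sketch below contrasts a single fast hash invocation with a PBKDF2 derivation whose iteration count multiplies an attacker's per-guess cost (the salt and iteration count here are illustrative values):

```python
import hashlib

# A fast hash: one SHA-256 invocation per password guess.
fast = hashlib.sha256(b"hunter2").hexdigest()

# A key derivation function: PBKDF2 forces (here) 100,000 HMAC-SHA-256
# iterations per guess, slowing a brute-force search by the same factor.
salt = b"per-user-random-salt"      # in practice, generated randomly per user
slow = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 100_000)
print(slow.hex())
```

The derivation is deterministic for a given password, salt, and iteration count, so the server can re-derive and compare on login, while each of the attacker's guesses costs 100,000 hash computations instead of one.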
== See also ==
Comparison of cryptographic hash functions
Cryptographic hash function
Collision attack
Preimage attack
Length extension attack
Cipher security summary
== References ==
== External links ==
2010 summary of attacks against Tiger, MD4 and SHA-2: Jian Guo; San Ling; Christian Rechberger; Huaxiong Wang (2010-12-06). Advanced Meet-in-the-Middle Preimage Attacks: First Results on Full Tiger, and Improved Results on MD4 and SHA-2. Asiacrypt 2010. p. 3. | Wikipedia/Hash_function_security_summary |
Lexicographic codes or lexicodes are greedily generated error-correcting codes with remarkably good properties. They were produced independently by
Vladimir Levenshtein and by John Horton Conway and Neil Sloane. The binary lexicographic codes are linear codes, and include the Hamming codes and the binary Golay codes.
== Construction ==
A lexicode of length n and minimum distance d over a finite field is generated by starting with the all-zero vector and iteratively adding the next vector (in lexicographic order) at Hamming distance at least d from all the vectors added so far. As an example, the length-3 lexicode of minimum distance 2 consists of the vectors 000, 011, 101, and 110, i.e. the even-weight vectors of length 3.
The table below lists the n-bit lexicodes by d-bit minimal Hamming distance; each results in a dictionary of at most 2^m codewords.
For example, the F4 code (n=4, d=2, m=3), the extended Hamming code (n=8, d=4, m=4), and especially the Golay code (n=24, d=8, m=12) show exceptional compactness compared to their neighbors.
Every lexicode of odd minimum distance d is an exact copy of the lexicode of even minimum distance d+1 with the last coordinate removed, so the odd-distance codes never yield anything new or more interesting than the corresponding even-distance (d+1) codes.
Since lexicodes are linear, they can also be constructed by means of their basis.
== Implementation ==
The following code generates a lexicographic code; here the parameters are set for the Golay code (N=24, D=8).
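A minimal Python sketch of the greedy generator (standing in for the article's C implementation; it is run here at small parameters, since naively enumerating all 2^24 vectors for the Golay setting N=24, D=8 is slow):

```python
def lexicode(n, d):
    """Greedily build the length-n, minimum-distance-d binary lexicode.

    Vectors are represented as integers; for fixed length n, lexicographic
    order on bit strings coincides with numeric order.
    """
    code = []
    for v in range(2 ** n):
        # Accept v if it is at Hamming distance >= d from every codeword so far.
        if all(bin(v ^ c).count("1") >= d for c in code):
            code.append(v)
    return code

# Length-3, distance-2 lexicode: the even-weight code {000, 011, 101, 110}.
print([format(c, "03b") for c in lexicode(3, 2)])   # ['000', '011', '101', '110']

# Length-7, distance-3 lexicode: 16 codewords, the [7,4] Hamming code.
print(len(lexicode(7, 3)))   # 16
```

With n=24 and d=8 the same search recovers the 4096 codewords of the binary Golay code, matching the (n=24, d=8, m=12) entry above, though the naive version takes a long time at that size.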
== Combinatorial game theory ==
The theory of lexicographic codes is closely connected to combinatorial game theory. In particular, the codewords in a binary lexicographic code of distance d encode the winning positions in a variant of Grundy's game, played on a collection of heaps of stones, in which each move consists of replacing any one heap by at most d − 1 smaller heaps, and the goal is to take the last stone.
== Notes ==
== External links ==
Bob Jenkins table of binary lexicodes
On-line generator for lexicodes and their variants
OEIS sequence A075928 (List of codewords in binary lexicode with Hamming distance 4 written as decimal numbers.)
Error-Correcting Codes on Graphs: Lexicodes, Trellises and Factor Graphs | Wikipedia/Lexicographic_code |
In computer science, a one-way function is a function that is easy to compute on every input, but hard to invert given the image of a random input. Here, "easy" and "hard" are to be understood in the sense of computational complexity theory, specifically the theory of polynomial time problems. This has nothing to do with whether the function is one-to-one; finding any one input with the desired image is considered a successful inversion. (See § Theoretical definition, below.)
The existence of such one-way functions is still an open conjecture. Their existence would prove that the complexity classes P and NP are not equal, thus resolving the foremost unsolved question of theoretical computer science.: ex. 2.2, page 70 The converse is not known to be true, i.e. the existence of a proof that P ≠ NP would not directly imply the existence of one-way functions.
In applied contexts, the terms "easy" and "hard" are usually interpreted relative to some specific computing entity; typically "cheap enough for the legitimate users" and "prohibitively expensive for any malicious agents". One-way functions, in this sense, are fundamental tools for cryptography, personal identification, authentication, and other data security applications. While the existence of one-way functions in this sense is also an open conjecture, there are several candidates that have withstood decades of intense scrutiny. Some of them are essential ingredients of most telecommunications, e-commerce, and e-banking systems around the world.
== Theoretical definition ==
A function f : {0, 1}* → {0, 1}* is one-way if f can be computed by a polynomial-time algorithm, but any polynomial-time randomized algorithm F that attempts to compute a pseudo-inverse for f succeeds with negligible probability. (The * superscript means any number of repetitions, see Kleene star.) That is, for all randomized algorithms F, all positive integers c and all sufficiently large n = length(x),
{\displaystyle \Pr[f(F(f(x)))=f(x)]<n^{-c},}
where the probability is over the choice of x from the discrete uniform distribution on {0, 1}^n, and the randomness of F.
Note that, by this definition, the function must be "hard to invert" in the average-case, rather than worst-case sense. This is different from much of complexity theory (e.g., NP-hardness), where the term "hard" is meant in the worst-case. That is why even if some candidates for one-way functions (described below) are known to be NP-complete, it does not imply their one-wayness. The latter property is only based on the lack of known algorithms to solve the problem.
It is not sufficient to make a function "lossy" (not one-to-one) to have a one-way function. In particular, the function that outputs the string of n zeros on any input of length n is not a one-way function because it is easy to come up with an input that will result in the same output. More precisely: For such a function that simply outputs a string of zeroes, an algorithm F that just outputs any string of length n on input f(x) will "find" a proper preimage of the output, even if it is not the input which was originally used to find the output string.
== Related concepts ==
A one-way permutation is a one-way function that is also a permutation—that is, a one-way function that is bijective. One-way permutations are an important cryptographic primitive, and it is not known if their existence is implied by the existence of one-way functions.
A trapdoor one-way function or trapdoor permutation is a special kind of one-way function. Such a function is hard to invert unless some secret information, called the trapdoor, is known.
A collision-free hash function f is a one-way function that is also collision-resistant; that is, no randomized polynomial time algorithm can find a collision—distinct values x, y such that f(x) = f(y)—with non-negligible probability.
== Theoretical implications of one-way functions ==
If f is a one-way function, then the inversion of f would be a problem whose output is hard to compute (by definition) but easy to check (just by computing f on it). Thus, the existence of a one-way function implies that FP ≠ FNP, which in turn implies that P ≠ NP. However, P ≠ NP does not imply the existence of one-way functions.
The existence of a one-way function implies the existence of many other useful concepts, including:
Pseudorandom generators
Pseudorandom function families
Bit commitment schemes
Private-key encryption schemes secure against adaptive chosen-ciphertext attack
Message authentication codes
Digital signature schemes (secure against adaptive chosen-message attack)
== Candidates for one-way functions ==
The following are several candidates for one-way functions (as of April 2009). It is not known whether these functions are indeed one-way, but extensive research has so far failed to produce an efficient inverting algorithm for any of them.
=== Multiplication and factoring ===
The function f takes as inputs two prime numbers p and q in binary notation and returns their product. This function can be "easily" computed in O(b2) time, where b is the total number of bits of the inputs. Inverting this function requires finding the factors of a given integer N. The best factoring algorithms known run in
{\displaystyle O\left(\exp {\sqrt[{3}]{{\frac {64}{9}}b(\log b)^{2}}}\right)}
time, where b is the number of bits needed to represent N.
This function can be generalized by allowing p and q to range over a suitable set of semiprimes. Note that f is not one-way for randomly selected integers p, q > 1, since the product will have 2 as a factor with probability 3/4 (because the probability that an arbitrary p is odd is 1/2, and likewise for q, so if they're chosen independently, the probability that both are odd is therefore 1/4; hence the probability that p or q is even, is 1 − 1/4 = 3/4).
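A toy illustration of this asymmetry, with trial division standing in for real factoring algorithms:

```python
def multiply(p, q):
    """The easy direction: multiplication is polynomial in the bit length."""
    return p * q

def factor(n):
    """The hard direction (naive trial division): the number of candidate
    divisors grows exponentially in the bit length of n."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return 1, n   # n is prime (or 1)

n = multiply(61, 53)   # easy, even for thousand-bit primes
print(factor(n))       # (53, 61) -- infeasible this way at cryptographic sizes
```

The forward direction stays fast as the inputs grow, while trial division (and, far more slowly, the best known factoring algorithms) becomes infeasible for the semiprimes used in practice.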
=== The Rabin function (modular squaring) ===
The Rabin function,: 57 or squaring modulo N = pq, where p and q are primes, is believed to be a collection of one-way functions. We write
{\displaystyle \operatorname {Rabin} _{N}(x)\triangleq x^{2}{\bmod {N}}}
to denote squaring modulo N: a specific member of the Rabin collection. It can be shown that extracting square roots, i.e. inverting the Rabin function, is computationally equivalent to factoring N (in the sense of polynomial-time reduction). Hence it can be proven that the Rabin collection is one-way if and only if factoring is hard. This also holds for the special case in which p and q are of the same bit length. The Rabin cryptosystem is based on the assumption that this Rabin function is one-way.
=== Discrete exponential and logarithm ===
Modular exponentiation can be done in polynomial time. Inverting this function requires computing the discrete logarithm. Currently there are several popular groups for which no algorithm to calculate the underlying discrete logarithm in polynomial time is known. These groups are all finite abelian groups, and the general discrete logarithm problem can be described as follows.
Let G be a finite abelian group of cardinality n. Denote its group operation by multiplication. Consider a primitive element α ∈ G and another element β ∈ G. The discrete logarithm problem is to find the positive integer k, where 1 ≤ k ≤ n, such that:
{\displaystyle \alpha ^{k}=\underbrace {\alpha \cdot \alpha \cdot \ldots \cdot \alpha } _{k\;\mathrm {times} }=\beta }
The integer k that solves the equation αk = β is termed the discrete logarithm of β to the base α. One writes k = logα β.
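A toy sketch of the asymmetry in a small group (Z_23)×: the forward direction uses square-and-multiply exponentiation, while inversion falls back to exhaustive search (which stands in here for the best known attacks):

```python
def mod_exp(alpha, k, p):
    """Square-and-multiply: computes alpha**k mod p in O(log k) multiplications."""
    result = 1
    base = alpha % p
    while k > 0:
        if k & 1:
            result = (result * base) % p
        base = (base * base) % p
        k >>= 1
    return result

def discrete_log(alpha, beta, p):
    """Naive inversion: try every exponent. Infeasible for cryptographic p."""
    for k in range(1, p):
        if mod_exp(alpha, k, p) == beta:
            return k
    return None

# In (Z_23)*, 5^6 mod 23 = 8, so the discrete log of 8 to the base 5 is 6.
print(discrete_log(5, 8, 23))   # 6
```

The exponentiation loop runs once per bit of k, so doubling the size of the exponent barely slows it down, whereas the search loop runs once per candidate exponent and grows with the group order itself.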
Popular choices for the group G in discrete logarithm cryptography are the cyclic groups (Zp)× (e.g. ElGamal encryption, Diffie–Hellman key exchange, and the Digital Signature Algorithm) and cyclic subgroups of elliptic curves over finite fields (see elliptic curve cryptography).
An elliptic curve is a set of pairs of elements of a field satisfying y2 = x3 + ax + b. The elements of the curve form a group under an operation called "point addition" (which is not the same as the addition operation of the field). Multiplication kP of a point P by an integer k (i.e., a group action of the additive group of the integers) is defined as repeated addition of the point to itself. If k and P are known, it is easy to compute R = kP, but if only R and P are known, it is assumed to be hard to compute k.
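A minimal sketch of point addition and double-and-add scalar multiplication on a common textbook curve, y² = x³ + 2x + 2 over F_17 (far too small to be secure; the base point (5, 1) is taken from the same teaching example):

```python
# Toy curve y^2 = x^3 + 2x + 2 over F_17. Illustration only.
P_MOD, A = 17, 2

def ec_add(P, Q):
    """Point addition on the curve; None represents the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                                          # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD)     # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD)            # chord slope
    x3 = (lam * lam - x1 - x2) % P_MOD
    y3 = (lam * (x1 - x3) - y1) % P_MOD
    return (x3, y3)

def scalar_mult(k, P):
    """Computes kP by double-and-add: fast even for very large k."""
    R = None
    while k > 0:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

G = (5, 1)
print(scalar_mult(2, G), scalar_mult(3, G))   # (6, 3) (10, 6)
```

Computing kP takes O(log k) point additions, but recovering k from R = kP on a cryptographically sized curve is believed to require time exponential in the key length; on this 17-element field, of course, exhaustive search is trivial.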
=== Cryptographically secure hash functions ===
There are a number of cryptographic hash functions that are fast to compute, such as SHA-256. Some of the simpler versions have fallen to sophisticated analysis, but the strongest versions continue to offer fast, practical solutions for one-way computation. Most of the theoretical support for these functions consists of techniques for thwarting previously successful attacks.
=== Other candidates ===
Other candidates for one-way functions include the hardness of the decoding of random linear codes, the hardness of certain lattice problems, and the subset sum problem (Naccache–Stern knapsack cryptosystem).
== Universal one-way function ==
There is an explicit function f that has been proved to be one-way, if and only if one-way functions exist. In other words, if any function is one-way, then so is f. Since this function was the first combinatorial complete one-way function to be demonstrated, it is known as the "universal one-way function". The problem of finding a one-way function is thus reduced to proving—perhaps non-constructively—that one such function exists.
There also exists a function that is one-way if polynomial-time bounded Kolmogorov complexity is mildly hard on average. Since the existence of one-way functions implies that polynomial-time bounded Kolmogorov complexity is mildly hard on average, the function is a universal one-way function.
== See also ==
One-way compression function
Cryptographic hash function
Geometric cryptography
Trapdoor function
== References ==
== Further reading ==
Jonathan Katz and Yehuda Lindell (2007). Introduction to Modern Cryptography. CRC Press. ISBN 1-58488-551-3.
Michael Sipser (1997). Introduction to the Theory of Computation. PWS Publishing. ISBN 978-0-534-94728-6. Section 10.6.3: One-way functions, pp. 374–376.
Christos Papadimitriou (1993). Computational Complexity (1st ed.). Addison Wesley. ISBN 978-0-201-53082-7. Section 12.1: One-way functions, pp. 279–298. | Wikipedia/One-way_function |
JH is a cryptographic hash function submitted to the NIST hash function competition by Hongjun Wu. Though chosen as one of the five finalists of the competition, in 2012 JH ultimately lost to NIST hash candidate Keccak. JH has a 1024-bit state, and works on 512-bit input blocks. Processing an input block consists of three steps:
XOR the input block into the left half of the state.
Apply a 42-round unkeyed permutation (encryption function) to the state. This consists of 42 repetitions of:
Break the input into 256 4-bit blocks, and map each through one of two 4-bit S-boxes, the choice being made by a 256-bit round-dependent key schedule. Equivalently, combine each input block with a key bit, and map the result through a 5→4 bit S-box.
Mix adjacent 4-bit blocks using a maximum distance separable code over GF(24).
Permute 4-bit blocks so that they will be adjacent to different blocks in following rounds.
XOR the input block into the right half of the state.
The resulting digest is the last 224, 256, 384 or 512 bits from the 1024-bit final value.
It is well suited to a bit slicing implementation using the SSE2 instruction set, giving speeds of 16.8 cycles per byte.
== Examples of JH hashes ==
Hash values of empty string.
JH-224("")
0x 2c99df889b019309051c60fecc2bd285a774940e43175b76b2626630
JH-256("")
0x 46e64619c18bb0a92a5e87185a47eef83ca747b8fcc8e1412921357e326df434
JH-384("")
0x 2fe5f71b1b3290d3c017fb3c1a4d02a5cbeb03a0476481e25082434a881994b0ff99e078d2c16b105ad069b569315328
JH-512("")
0x 90ecf2f76f9d2c8017d979ad5ab96b87d58fc8fc4b83060f3f900774faa2c8fabe69c5f4ff1ec2b61d6b316941cedee117fb04b1f4c5bc1b919ae841c50eec4f
Even a small change in the message will (with overwhelming probability) result in a mostly different hash, due to the avalanche effect. For example, adding a period to the end of the sentence:
JH-256("The quick brown fox jumps over the lazy dog")
0x 6a049fed5fc6874acfdc4a08b568a4f8cbac27de933496f031015b38961608a0
JH-256("The quick brown fox jumps over the lazy dog.")
0x d001ae2315421c5d3272bac4f4aa524bddd207530d5d26bbf51794f0da18fafc
== References ==
== External links ==
The JH web site Archived 2011-12-04 at the Wayback Machine
JH page on the SHA-3 Zoo
VHDL source code developed by the Cryptographic Engineering Research Group (CERG) at George Mason University | Wikipedia/JH_(hash_function) |
NSA Suite B Cryptography was a set of cryptographic algorithms promulgated by the National Security Agency as part of its Cryptographic Modernization Program. It was to serve as an interoperable cryptographic base for both unclassified information and most classified information.
Suite B was announced on 16 February 2005. A corresponding set of unpublished algorithms, Suite A, is "used in applications where Suite B may not be appropriate. Both Suite A and Suite B can be used to protect foreign releasable information, US-Only information, and Sensitive Compartmented Information (SCI)."
In 2018, NSA replaced Suite B with the Commercial National Security Algorithm Suite (CNSA).
Suite B's components were:
Advanced Encryption Standard (AES) with key sizes of 128 and 256 bits. For traffic flow, AES should be used with either the Counter Mode (CTR) for low bandwidth traffic or the Galois/Counter Mode (GCM) mode of operation for high bandwidth traffic (see Block cipher modes of operation) – symmetric encryption
Elliptic Curve Digital Signature Algorithm (ECDSA) – digital signatures
Elliptic Curve Diffie–Hellman (ECDH) – key agreement
Secure Hash Algorithm 2 (SHA-256 and SHA-384) – message digest
== General information ==
NIST, Recommendation for Pair-Wise Key Establishment Schemes Using Discrete Logarithm Cryptography, Special Publication 800-56A
Suite B Cryptography Standards
RFC 5759, Suite B Certificate and Certificate Revocation List (CRL) Profile
RFC 6239, Suite B Cryptographic Suites for Secure Shell (SSH)
RFC 6379, Suite B Cryptographic Suites for IPsec
RFC 6460, Suite B Profile for Transport Layer Security (TLS)
These RFCs have been downgraded to historic references per RFC 8423.
== History ==
In December 2006, NSA submitted an Internet Draft on implementing Suite B as part of IPsec. This draft had been accepted for publication by IETF as RFC 4869, later made obsolete by RFC 6379.
Certicom Corporation of Ontario, Canada, which was purchased by BlackBerry Limited in 2009, holds some elliptic curve patents, which have been licensed by NSA for United States government use. These include patents on ECMQV, but ECMQV has been dropped from Suite B. AES and SHA had been previously released and have no patent restrictions. See also RFC 6090.
As of October 2012, CNSSP-15 stated that the 256-bit elliptic curve (specified in FIPS 186-2), SHA-256, and AES with 128-bit keys are sufficient for protecting classified information up to the Secret level, while the 384-bit elliptic curve (specified in FIPS 186-2), SHA-384, and AES with 256-bit keys are necessary for the protection of Top Secret information.
However, as of August 2015, NSA indicated that only the Top Secret algorithm strengths should be used to protect all levels of classified information.
In 2018 NSA withdrew Suite B in favor of the CNSA.
== Algorithms ==
NSA Suite B contains the following algorithms:
== Quantum resistant suite ==
In August 2015, NSA announced that it is planning to transition "in the not too distant future" to a new cipher suite that is resistant to quantum attacks. "Unfortunately, the growth of elliptic curve use has bumped up against the fact of continued progress in the research on quantum computing, necessitating a re-evaluation of our cryptographic strategy." NSA advised: "For those partners and vendors that have not yet made the transition to Suite B algorithms, we recommend not making a significant expenditure to do so at this point but instead to prepare for the upcoming quantum resistant algorithm transition." New standards are estimated to be published around 2024.
== Algorithm implementation ==
Using an algorithm suitable to encrypt information is not necessarily sufficient to properly protect information. If the algorithm is not executed within a secure device the encryption keys are vulnerable to disclosure. For this reason, the US federal government requires not only the use of NIST-validated encryption algorithms, but also that they be executed in a validated Hardware Security Module (HSM) that provides physical protection of the keys and, depending on the validation level, countermeasures against electronic attacks such as differential power analysis and other side-channel attacks. For example, using AES-256 within an FIPS 140-2 validated module is sufficient to encrypt only US Government sensitive, unclassified data. This same notion applies to the other algorithms.
== Commercial National Security Algorithm Suite ==
The Suite B algorithms have been replaced by Commercial National Security Algorithm (CNSA) Suite algorithms:
Advanced Encryption Standard (AES), per FIPS 197, using 256 bit keys to protect up to TOP SECRET
Elliptic Curve Diffie-Hellman (ECDH) Key Exchange, per FIPS SP 800-56A, using Curve P-384 to protect up to TOP SECRET.
Elliptic Curve Digital Signature Algorithm (ECDSA), per FIPS 186-4
Secure Hash Algorithm (SHA), per FIPS 180-4, using SHA-384 to protect up to TOP SECRET.
Diffie-Hellman (DH) Key Exchange, per RFC 3526, minimum 3072-bit modulus to protect up to TOP SECRET
RSA for key establishment (NIST SP 800-56B rev 1) and digital signatures (FIPS 186-4), minimum 3072-bit modulus to protect up to TOP SECRET
== See also ==
NSA cryptography
== References == | Wikipedia/NSA_Suite_B_Cryptography |
The tables below compare cryptography libraries that deal with cryptography algorithms and have application programming interface (API) function calls to each of the supported features.
== Cryptography libraries ==
== FIPS 140 ==
This table indicates whether a cryptography library provides the technical requisites for FIPS 140, and the status of its FIPS 140 certification (according to NIST's CMVP search, modules in process list and implementation under test list).
== Key operations ==
Key operations include key generation algorithms, key exchange agreements, and public key cryptography standards.
=== Public key algorithms ===
=== Elliptic-curve cryptography (ECC) support ===
=== Public key cryptography standards ===
== Hash functions ==
Comparison of supported cryptographic hash functions. Here hash functions are defined as taking an arbitrary length message and producing a fixed size output that is virtually impossible to use for recreating the original message.
== MAC algorithms ==
Comparison of implementations of message authentication code (MAC) algorithms. A MAC is a short piece of information used to authenticate a message—in other words, to confirm that the message came from the stated sender (its authenticity) and has not been changed in transit (its integrity).
== Block ciphers ==
The table compares implementations of block ciphers. Block ciphers are defined as being deterministic and operating on a set number of bits (termed a block) using a symmetric key. Each block cipher can be broken up into the possible key sizes and block cipher modes it can be run with.
=== Block cipher algorithms ===
=== Cipher modes ===
== Stream ciphers ==
The table below shows the support of various stream ciphers. Stream ciphers are defined as using plain text digits that are combined with a pseudorandom cipher digit stream. Stream ciphers are typically faster than block ciphers and may have lower hardware complexity, but may be more susceptible to attacks.
== Hardware-assisted support ==
These tables compare the ability to use hardware enhanced cryptography. By using the assistance of specific hardware, the library can achieve greater speeds and/or improved security than otherwise.
=== Smart card, SIM, HSM protocol support ===
=== General purpose CPU, platform acceleration support ===
== Code size and code to comment ratio ==
== Portability ==
== References == | Wikipedia/Comparison_of_cryptography_libraries |
The Encyclopedia of Cryptography and Security is a comprehensive work on Cryptography for both information security professionals and experts in the fields of Computer Science, Applied Mathematics, Engineering, Information Theory, Data Encryption, etc. It consists of 460 articles in alphabetical order and is available electronically and in print. The Encyclopedia has a representative Advisory Board consisting of 18 leading international specialists.
Topics include but are not limited to authentication and identification, copy protection, cryptanalysis and security, factorization algorithms and primality tests, cryptographic protocols, key management, electronic payments and digital certificates, hash functions and MACs, elliptic curve cryptography, quantum cryptography and web security.
The style of the articles is of explanatory character and can be used for undergraduate or graduate courses.
== Advisory board members ==
Carlisle Adams, Entrust, Inc.
Friedrich Bauer, Technische Universität München
Gerrit Bleumer, Francotyp-Postalia
Dan Boneh, Stanford University
Pascale Charpin, INRIA-Rocquencourt
Claude Crepeau, McGill University
Yvo G. Desmedt, University College London (University of London)
Grigory Kabatiansky, Institute for Information Transmission Problems
Burt Kaliski, RSA Security
Peter Landrock, University of Aarhus
Patrick Drew McDaniel, Penn State University
Alfred Menezes, University of Waterloo
David Naccache, Gemplus
Christof Paar, Ruhr-Universität Bochum
Bart Preneel, Katholieke Universiteit Leuven
Jean-Jacques Quisquater, Université Catholique de Louvain
Kazue Sako, NEC Corporation
Berry Schoenmakers, Technische Universiteit Eindhoven
== References == | Wikipedia/Encyclopedia_of_Cryptography_and_Security |
A key in cryptography is a piece of information, usually a string of numbers or letters stored in a file, which, when processed through a cryptographic algorithm, can encode or decode cryptographic data. Depending on the method used, keys can be of different sizes and varieties, but in all cases the strength of the encryption relies on the security of the key being maintained. A key's security strength depends on its algorithm, the size of the key, how the key was generated, and the process of key exchange.
== Scope ==
The key is what is used to encrypt data from plaintext to ciphertext. There are different methods for utilizing keys and encryption.
=== Symmetric cryptography ===
Symmetric cryptography refers to the practice of the same key being used for both encryption and decryption.
=== Asymmetric cryptography ===
Asymmetric cryptography has separate keys for encrypting and decrypting. These keys are known as the public and private keys, respectively.
== Purpose ==
Since the key protects the confidentiality and integrity of the system, it is important that it be kept secret from unauthorized parties. With public key cryptography, only the private key must be kept secret, but with symmetric cryptography, the confidentiality of the single shared key must be maintained. Kerckhoffs's principle states that the entire security of the cryptographic system should rely on the secrecy of the key, not of the algorithm.
== Key sizes ==
Key size is the number of bits in the key defined by the algorithm. This size defines an upper bound on the cryptographic algorithm's security. The larger the key size, the longer it will take before the key is compromised by a brute-force attack. Since perfect secrecy is not feasible for keyed algorithms, research is now focused on computational security.
In the past, keys were required to be a minimum of 40 bits in length; however, as technology advanced, these keys were broken more and more quickly. In response, minimum sizes for symmetric keys were increased.
Currently, 2048-bit RSA is commonly used, which is sufficient for current systems. However, current RSA key sizes could all be cracked quickly by a powerful quantum computer.
"The keys used in public key cryptography have some mathematical structure. For example, public keys used in the RSA system are the product of two prime numbers. Thus public key systems require longer key lengths than symmetric systems for an equivalent level of security. 3072 bits is the suggested key length for systems based on factoring and integer discrete logarithms which aim to have security equivalent to a 128 bit symmetric cipher."
== Key generation ==
To prevent a key from being guessed, keys need to be generated randomly and contain sufficient entropy. The problem of how to safely generate random keys is difficult and has been addressed in many ways by various cryptographic systems. A key can directly be generated by using the output of a Random Bit Generator (RBG), a system that generates a sequence of unpredictable and unbiased bits. A RBG can be used to directly produce either a symmetric key or the random output for an asymmetric key pair generation. Alternatively, a key can also be indirectly created during a key-agreement transaction, from another key or from a password.
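As a concrete illustration of generating a symmetric key directly from a random bit generator, Python's standard library exposes the operating system's CSPRNG. This is a minimal sketch, not a complete key-management solution:

```python
import secrets

# Draw a 256-bit (32-byte) symmetric key directly from the operating
# system's cryptographically secure random bit generator (RBG/CSPRNG).
key = secrets.token_bytes(32)

print(len(key))   # 32 bytes = 256 bits
print(key.hex())  # hex-encoded form, e.g. for storage or transport
```

Unlike the `random` module, `secrets` is designed for cryptographic use: its output is unpredictable even to an attacker who observes earlier outputs.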
Some operating systems include tools for "collecting" entropy from the timing of unpredictable operations such as disk drive head movements. For the production of small amounts of keying material, ordinary dice provide a good source of high-quality randomness.
== Establishment scheme ==
The security of a key depends on how it is exchanged between parties. Establishing a secured communication channel is necessary so that outsiders cannot obtain the key. A key establishment scheme (or key exchange) is used to transfer an encryption key among entities. Key agreement and key transport are the two types of key exchange scheme used to exchange keys remotely between entities. In a key agreement scheme, a secret key, which is used between the sender and the receiver to encrypt and decrypt information, is set up indirectly: all parties exchange information (the shared secret) that permits each party to derive the secret key material. In a key transport scheme, encrypted keying material chosen by the sender is transported to the receiver. Either symmetric-key or asymmetric-key techniques can be used in both schemes.
The Diffie–Hellman key exchange and Rivest–Shamir–Adleman (RSA) are the two most widely used key exchange algorithms. In 1976, Whitfield Diffie and Martin Hellman constructed the Diffie–Hellman algorithm, which was the first public key algorithm. The Diffie–Hellman key exchange protocol allows key exchange over an insecure channel by electronically generating a shared key between two parties. RSA, by contrast, is an asymmetric-key system consisting of three steps: key generation, encryption, and decryption.
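The Diffie–Hellman exchange can be sketched in a few lines. This toy example uses a deliberately tiny prime for illustration only; real deployments use standardized groups of 2048 bits or more, or elliptic curves:

```python
import secrets

# Public parameters (known to everyone, including eavesdroppers).
p = 4294967291   # a small prime modulus -- far too small for real use
g = 5            # generator

a = secrets.randbelow(p - 2) + 1   # Alice's private exponent
b = secrets.randbelow(p - 2) + 1   # Bob's private exponent

A = pow(g, a, p)   # Alice sends A = g^a mod p over the insecure channel
B = pow(g, b, p)   # Bob sends   B = g^b mod p

shared_alice = pow(B, a, p)   # Alice computes (g^b)^a mod p
shared_bob = pow(A, b, p)     # Bob computes   (g^a)^b mod p

assert shared_alice == shared_bob   # both derive the same shared secret
```

An eavesdropper sees only p, g, A, and B; recovering the shared secret from these requires solving the discrete logarithm problem, which is infeasible for properly sized groups.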
Key confirmation delivers an assurance between the key confirmation recipient and provider that the shared keying materials are correct and established. The National Institute of Standards and Technology recommends key confirmation to be integrated into a key establishment scheme to validate its implementations.
== Management ==
Key management concerns the generation, establishment, storage, usage and replacement of cryptographic keys. A key management system (KMS) typically includes three steps of establishing, storing and using keys. The base of security for the generation, storage, distribution, use and destruction of keys depends on successful key management protocols.
== Key vs password ==
A password is a memorized series of characters including letters, digits, and other special symbols that is used to verify identity. It is often produced by a human user or by password management software to protect personal and sensitive information or to generate cryptographic keys. Passwords are often created to be memorized by users and may contain non-random information such as dictionary words. On the other hand, a key can help strengthen password protection by implementing a cryptographic algorithm that is difficult to guess, or can replace the password altogether. A key is generated based on random or pseudo-random data and is often unreadable to humans.
A password is less safe than a cryptographic key due to its low entropy, randomness, and human-readable properties. However, the password may be the only secret data that is accessible to the cryptographic algorithm for information security in some applications such as securing information in storage devices. Thus, a deterministic algorithm called a key derivation function (KDF) uses a password to generate the secure cryptographic keying material to compensate for the password's weakness. Various methods such as adding a salt or key stretching may be used in the generation.
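As a sketch of the KDF approach described above, PBKDF2 (available in Python's standard library) stretches a low-entropy password into fixed-length keying material; the function name and iteration count here are illustrative choices:

```python
import hashlib
import os

def derive_key(password: str, salt: bytes, length: int = 32) -> bytes:
    # PBKDF2-HMAC-SHA256: the salt defeats precomputed-table attacks,
    # and the high iteration count slows down brute-force guessing.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                               iterations=600_000, dklen=length)

salt = os.urandom(16)   # random per-password salt, stored alongside the hash
key = derive_key("correct horse battery staple", salt)
print(len(key))         # 32-byte key, suitable e.g. for a 256-bit cipher
```

The derivation is deterministic for a given password and salt, so the same key can be recomputed later, yet a different salt (or password) yields an unrelated key.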
== See also ==
== References == | Wikipedia/Cryptographic_keys |
Strong cryptography or cryptographically strong are general terms used to designate cryptographic algorithms that, when used correctly, provide a very high (usually insurmountable) level of protection against any eavesdropper, including government agencies. There is no precise definition of the boundary line between strong cryptography and (breakable) weak cryptography, as this border constantly shifts due to improvements in hardware and cryptanalysis techniques. These improvements eventually place the capabilities once available only to the NSA within the reach of a skilled individual, so in practice there are only two levels of cryptographic security: "cryptography that will stop your kid sister from reading your files, and cryptography that will stop major governments from reading your files" (Bruce Schneier).
Strong cryptographic algorithms have high security strength, for practical purposes usually defined as the number of bits in the key. For example, the United States government, when dealing with export control of encryption, considered as of 1999 any implementation of a symmetric encryption algorithm with a key length above 56 bits, or its public key equivalent, to be strong and thus potentially subject to export licensing. To be strong, an algorithm needs to have a sufficiently long key and be free of known mathematical weaknesses, as exploitation of these effectively reduces the key size. At the beginning of the 21st century, the typical security strength of strong symmetric encryption algorithms is 128 bits (slightly lower values can still be strong, but there is usually little technical gain in using smaller key sizes).
Demonstrating the resistance of any cryptographic scheme to attack is a complex matter, requiring extensive testing and reviews, preferably in a public forum. Good algorithms and protocols are required (similarly, good materials are required to construct a strong building), but good system design and implementation are needed as well: "it is possible to build a cryptographically weak system using strong algorithms and protocols" (just as the use of good materials in construction does not guarantee a solid structure). Many real-life systems turn out to be weak when strong cryptography is not used properly, for example, when random nonces are reused. A successful attack might not even involve the algorithm at all; for example, if the key is generated from a password, guessing a weak password is easy and does not depend on the strength of the cryptographic primitives. A user can become the weakest link in the overall picture, for example, by sharing passwords and hardware tokens with colleagues.
== Background ==
The level of expense required for strong cryptography originally restricted its use to government and military agencies. Until the middle of the 20th century, the process of encryption required a lot of human labor, and errors (preventing decryption) were very common, so only a small share of written information could be encrypted. The US government, in particular, was able to keep a monopoly on the development and use of cryptography in the US into the 1960s. In the 1970s, the increased availability of powerful computers and unclassified research breakthroughs (the Data Encryption Standard, the Diffie–Hellman and RSA algorithms) made strong cryptography available for civilian use. The mid-1990s saw the worldwide proliferation of knowledge and tools for strong cryptography. By the 21st century the technical limitations were gone, although the majority of communications were still unencrypted. At the same time, the cost of building and running systems with strong cryptography became roughly the same as that for weak cryptography.
The use of computers changed the process of cryptanalysis, famously with Bletchley Park's Colossus. But just as the development of digital computers and electronics helped in cryptanalysis, it also made possible much more complex ciphers. It is typically the case that use of a quality cipher is very efficient, while breaking it requires an effort many orders of magnitude larger - making cryptanalysis so inefficient and impractical as to be effectively impossible.
== Cryptographically strong algorithms ==
This term "cryptographically strong" is often used to describe an encryption algorithm, and implies, in comparison to some other algorithm (which is thus cryptographically weak), greater resistance to attack. But it can also be used to describe hashing and unique identifier and filename creation algorithms. See for example the description of the Microsoft .NET runtime library function Path.GetRandomFileName. In this usage, the term means "difficult to guess".
An encryption algorithm is intended to be unbreakable (in which case it is as strong as it can ever be), but might be breakable (in which case it is as weak as it can ever be) so there is not, in principle, a continuum of strength as the idiom would seem to imply: Algorithm A is stronger than Algorithm B which is stronger than Algorithm C, and so on. The situation is made more complex, and less subsumable into a single strength metric, by the fact that there are many types of cryptanalytic attack and that any given algorithm is likely to force the attacker to do more work to break it when using one attack than another.
There is only one known unbreakable cryptographic system, the one-time pad, which is not generally possible to use because of the difficulties involved in exchanging one-time pads without them being compromised. So any encryption algorithm can be compared to the perfect algorithm, the one-time pad.
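A one-time pad is simple to express in code. The sketch below assumes the conditions that make the scheme unbreakable: the pad is as long as the message, truly random, and never reused:

```python
import secrets

message = b"ATTACK AT DAWN"
pad = secrets.token_bytes(len(message))   # random pad, same length, used once

# Encryption and decryption are both bitwise XOR with the pad.
ciphertext = bytes(m ^ p for m, p in zip(message, pad))
recovered = bytes(c ^ p for c, p in zip(ciphertext, pad))

assert recovered == message
```

The practical difficulty lies not in this computation but in securely generating, distributing, and destroying a pad as long as all traffic it will ever protect.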
The usual sense in which this term is (loosely) used, is in reference to a particular attack, brute force key search — especially in explanations for newcomers to the field. Indeed, with this attack (always assuming keys to have been randomly chosen), there is a continuum of resistance depending on the length of the key used. But even so there are two major problems: many algorithms allow use of different length keys at different times, and any algorithm can forgo use of the full key length possible. Thus, Blowfish and RC5 are block cipher algorithms whose design specifically allowed for several key lengths, and who cannot therefore be said to have any particular strength with respect to brute force key search. Furthermore, US export regulations restrict key length for exportable cryptographic products and in several cases in the 1980s and 1990s (e.g., famously in the case of Lotus Notes' export approval) only partial keys were used, decreasing 'strength' against brute force attack for those (export) versions. More or less the same thing happened outside the US as well, as for example in the case of more than one of the cryptographic algorithms in the GSM cellular telephone standard.
The term is commonly used to convey that some algorithm is suitable for some task in cryptography or information security, but also resists cryptanalysis and has no, or fewer, security weaknesses. Tasks are varied, and might include:
generating randomness
encrypting data
providing a method to ensure data integrity
Cryptographically strong would seem to mean that the described method has some kind of maturity, perhaps even approved for use against different kinds of systematic attacks in theory and/or practice. Indeed, that the method may resist those attacks long enough to protect the information carried (and what stands behind the information) for a useful length of time. But due to the complexity and subtlety of the field, neither is almost ever the case. Since such assurances are not actually available in real practice, sleight of hand in language which implies that they are will generally be misleading.
There will always be uncertainty as advances (e.g., in cryptanalytic theory or merely affordable computer capacity) may reduce the effort needed to successfully use some attack method against an algorithm.
In addition, actual use of cryptographic algorithms requires their encapsulation in a cryptosystem, and doing so often introduces vulnerabilities which are not due to faults in an algorithm. For example, essentially all algorithms require random choice of keys, and any cryptosystem which does not provide such keys will be subject to attack regardless of any attack resistant qualities of the encryption algorithm(s) used.
== Legal issues ==
Widespread use of encryption increases the costs of surveillance, so government policies aim to regulate the use of strong cryptography. In the 2000s, the effect of encryption on surveillance capabilities was limited by the ever-increasing share of communications going through global social media platforms, which did not use strong encryption and provided governments with the requested data. Murphy describes a legislative balance that needs to be struck: government powers broad enough to keep up with quickly evolving technology, yet sufficiently narrow for the public and oversight agencies to understand how the legislation will be used.
=== USA ===
The initial response of the US government to the expanded availability of cryptography was to treat cryptographic research the same way atomic energy research is treated, i.e., "born classified", with the government exercising legal control over the dissemination of research results. This was quickly found to be impossible, and the efforts switched to control over deployment (export, since a prohibition on the deployment of cryptography within the US was not seriously considered).
The export control in the US historically uses two tracks:
military items (designated as "munitions", although in practice the items on the United States Munitions List do not match the common meaning of this word). The export of munitions is controlled by the Department of State. The restrictions for munitions are very tight, with individual export licenses specifying the product and the actual customer;
dual-use items ("commodities") need to be commercially available without excessive paperwork, so, depending on the destination, broad permissions can be granted for sales to civilian customers. The licensing for the dual-use items is provided by the Department of Commerce. The process of moving an item from the munition list to commodity status is handled by the Department of State.
Since the original applications of cryptography were almost exclusively military, it was placed on the munitions list. With the growth of civilian uses, dual-use cryptography was defined by cryptographic strength, with strong encryption remaining a munition in a similar way to guns (small arms are dual-use while artillery is of purely military value). This classification had obvious drawbacks: a major bank is arguably just as systemically important as a military installation, and restrictions on publishing strong cryptography code ran up against the First Amendment. So, after experimenting in 1993 with the Clipper chip (where the US government kept special decryption keys in escrow), in 1996 almost all cryptographic items were transferred to the Department of Commerce.
=== EU ===
The position of the EU, in comparison to the US, has always tilted more towards privacy. In particular, the EU rejected the key escrow idea as early as 1997. The European Union Agency for Cybersecurity (ENISA) holds the opinion that backdoors are not effective for legitimate surveillance, yet pose great danger to general digital security.
=== Five Eyes ===
The Five Eyes (post-Brexit) represent a group of states with similar views on the issues of security and privacy. The group might have enough heft to drive the global agenda on lawful interception. The efforts of this group are not entirely coordinated: for example, the 2019 demand that Facebook not implement end-to-end encryption was supported by neither Canada nor New Zealand, and did not result in a regulation.
=== Russia ===
In the 1990s, the President and government of Russia issued several decrees formally banning uncertified cryptosystems from use by government agencies. A presidential decree of 1995 also attempted to ban individuals from producing and selling cryptography systems without an appropriate license, but it was not enforced in any way, as it was suspected to contradict the Russian Constitution of 1993 and was not a law per se. Decree No. 313, issued in 2012, further amended the previous ones, allowing the production and distribution of products with embedded cryptosystems without requiring a license as such, though it declares some restrictions. France had quite strict regulations in this field, but has relaxed them in recent years.
== Examples ==
=== Strong ===
PGP is generally considered an example of strong cryptography, with versions running under most popular operating systems and on various hardware platforms. The open source standard for PGP operations is OpenPGP, and GnuPG is an implementation of that standard from the FSF. However, the IDEA signature key in classical PGP is only 64 bits long, therefore no longer immune to collision attacks. OpenPGP therefore uses the SHA-2 hash function and AES cryptography.
The AES algorithm is considered strong after being selected in a lengthy selection process that was open and involved numerous tests.
Elliptic curve cryptography is another system, based on the algebraic structure of elliptic curves.
The latest version of TLS protocol (version 1.3), used to secure Internet transactions, is generally considered strong. Several vulnerabilities exist in previous versions, including demonstrated attacks such as POODLE. Worse, some cipher-suites are deliberately weakened to use a 40-bit effective key to allow export under pre-1996 U.S. regulations.
=== Weak ===
Examples that are not considered cryptographically strong include:
The DES, whose 56-bit keys allow attacks via exhaustive search.
Triple-DES (3DES / EDE3-DES), which is subject to the "Sweet32" birthday attack.
Wired Equivalent Privacy which is subject to a number of attacks due to flaws in its design.
SSL v2 and v3. TLS 1.0 and TLS 1.1 are also deprecated now (see RFC 7525) because of irreversible design flaws and because they do not provide elliptic-curve (EC) handshakes, modern cryptography, or CCM/GCM cipher modes. TLS 1.0 and 1.1 have also been disallowed by PCI DSS 3.2 for commercial business/banking implementations on web front ends. Only TLS 1.2 and TLS 1.3 are allowed and recommended; modern ciphers, handshakes, and cipher modes must be used exclusively.
The MD5 and SHA-1 hash functions, no longer immune to collision attacks.
The RC4 stream cipher.
The 40-bit Content Scramble System used to encrypt most DVD-Video discs.
Almost all classical ciphers.
Most rotary ciphers, such as the Enigma machine.
DHE/EDHE is guessable/weak when known default prime values are used or reused on the server
== Notes ==
== References ==
== Sources ==
Vagle, Jeffrey L. (2015). "Furtive Encryption: Power, Trusts, and the Constitutional Cost of Collective Surveillance". Indiana Law Journal. 90 (1).
Reinhold, Arnold G. (September 17, 1999). Strong Cryptography The Global Tide of Change. Cato Institute Briefing Papers No. 51. Cato Institute.
Diffie, Whitfield; Landau, Susan (2007). "The export of cryptography in the 20th and the 21st centuries". The History of Information Security. Elsevier. pp. 725–736. doi:10.1016/b978-044451608-4/50027-4. ISBN 978-0-444-51608-4.
Murphy, Cian C (2020). "The Crypto-Wars myth: The reality of state access to encrypted communications". Common Law World Review. 49 (3–4). SAGE Publications: 245–261. doi:10.1177/1473779520980556. hdl:1983/3c40a9b4-4a96-4073-b204-2030170b2e63. ISSN 1473-7795.
Riebe, Thea; Kühn, Philipp; Imperatori, Philipp; Reuter, Christian (2022-02-26). "U.S. Security Policy: The Dual-Use Regulation of Cryptography and its Effects on Surveillance" (PDF). European Journal for Security Research. 7 (1). Springer Science and Business Media LLC: 39–65. doi:10.1007/s41125-022-00080-0. ISSN 2365-0931.
Feigenbaum, Joan (2019-04-24). "Encryption and surveillance". Communications of the ACM. 62 (5). Association for Computing Machinery (ACM): 27–29. doi:10.1145/3319079. ISSN 0001-0782.
Schneier, Bruce (1998). "Security pitfalls in cryptography" (PDF). Retrieved 27 March 2024.
== See also ==
40-bit encryption
Cipher security summary
Export of cryptography
Comparison of cryptography libraries
FBI–Apple encryption dispute
Hash function security summary
Security level | Wikipedia/Strong_cryptography |
Visual cryptography is a cryptographic technique which allows visual information (pictures, text, etc.) to be encrypted in such a way that the decrypted information appears as a visual image.
One of the best-known techniques has been credited to Moni Naor and Adi Shamir, who developed it in 1994. They demonstrated a visual secret sharing scheme, where a binary image was broken up into n shares so that only someone with all n shares could decrypt the image, while any n − 1 shares revealed no information about the original image. Each share was printed on a separate transparency, and decryption was performed by overlaying the shares. When all n shares were overlaid, the original image would appear. There are several generalizations of the basic scheme including k-out-of-n visual cryptography, and using opaque sheets but illuminating them by multiple sets of identical illumination patterns under the recording of only one single-pixel detector.
Using a similar idea, transparencies can be used to implement a one-time pad encryption, where one transparency is a shared random pad, and another transparency acts as the ciphertext. Normally, there is an expansion of space requirement in visual cryptography. But if one of the two shares is structured recursively, the efficiency of visual cryptography can be increased to 100%.
Some antecedents of visual cryptography are in patents from the 1960s. Other antecedents are in the work on perception and secure communication.
Visual cryptography can be used to protect biometric templates in which decryption does not require any complex computations.
== Example ==
In this example, the binary image has been split into two component images. Each component image has a pair of pixels for every pixel in the original image. These pixel pairs are shaded black or white according to the following rule: if the original image pixel was black, the pixel pairs in the component images must be complementary; randomly shade one ■□, and the other □■. When these complementary pairs are overlapped, they will appear dark gray. On the other hand, if the original image pixel was white, the pixel pairs in the component images must match: both ■□ or both □■. When these matching pairs are overlapped, they will appear light gray.
So, when the two component images are superimposed, the original image appears. However, without the other component, a component image reveals no information about the original image; it is indistinguishable from a random pattern of ■□ / □■ pairs. Moreover, if you have one component image, you can use the shading rules above to produce a counterfeit component image that combines with it to produce any image at all.
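The pixel-pair rule above can be sketched in a short program. This is a simplified illustration (0 for white, 1 for black, subpixel pairs laid out horizontally); the helper names are invented for this example:

```python
import random

def make_shares(image):
    """Split a binary image (rows of 0=white, 1=black) into two
    component images; each pixel becomes a two-subpixel pair."""
    share1, share2 = [], []
    for row in image:
        r1, r2 = [], []
        for pixel in row:
            pair = random.choice([(1, 0), (0, 1)])       # randomly shade one of the two
            r1.extend(pair)
            if pixel:  # black pixel: complementary pairs -> overlay fully black
                r2.extend((1 - pair[0], 1 - pair[1]))
            else:      # white pixel: matching pairs -> overlay half black
                r2.extend(pair)
        share1.append(r1)
        share2.append(r2)
    return share1, share2

def overlay(s1, s2):
    # Stacking transparencies: a subpixel is black if it is black on either share.
    return [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(s1, s2)]

secret = [[1, 0],
          [0, 1]]
share1, share2 = make_shares(secret)
merged = overlay(share1, share2)
# Each black secret pixel yields a fully black pair on the overlay;
# each white pixel keeps one white subpixel, so it appears lighter.
```

Either share alone is a uniformly random pattern of ■□ / □■ pairs, so it reveals nothing about the secret, matching the security argument in the text.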
== (2, n) visual cryptography sharing case ==
Sharing a secret with an arbitrary number of people, n, such that at least 2 of them are required to decode the secret is one form of the visual secret sharing scheme presented by Moni Naor and Adi Shamir in 1994. In this scheme we have a secret image which is encoded into n shares printed on transparencies. The shares appear random and contain no decipherable information about the underlying secret image, however if any 2 of the shares are stacked on top of one another the secret image becomes decipherable by the human eye.
Every pixel from the secret image is encoded into multiple subpixels in each share image using a matrix to determine the color of the pixels.
In the (2, n) case, a white pixel in the secret image is encoded using a matrix from the following set, where each row gives the subpixel pattern for one of the components:
{all permutations of the columns of}:

{\displaystyle \mathbf {C_{0}} ={\begin{bmatrix}1&0&\ldots &0\\1&0&\ldots &0\\\vdots \\1&0&\ldots &0\end{bmatrix}}.}
While a black pixel in the secret image is encoded using a matrix from the following set:
{all permutations of the columns of}:

{\displaystyle \mathbf {C_{1}} ={\begin{bmatrix}1&0&\ldots &0\\0&1&\ldots &0\\\vdots \\0&0&\ldots &1\end{bmatrix}}.}
For instance in the (2,2) sharing case (the secret is split into 2 shares and both shares are required to decode the secret) we use complementary matrices to share a black pixel and identical matrices to share a white pixel. Stacking the shares we have all the subpixels associated with the black pixel now black while 50% of the subpixels associated with the white pixel remain white.
== Cheating the (2, n) visual secret sharing scheme ==
Horng et al. proposed a method that allows n − 1 colluding parties to cheat an honest party in visual cryptography. They take advantage of knowing the underlying distribution of the pixels in the shares to create new shares that combine with existing shares to form a new secret message of the cheaters' choosing.
We know that 2 shares are enough to decode the secret image using the human visual system. But examining two shares also gives some information about the 3rd share. For instance, colluding participants may examine their shares to determine when they both have black pixels and use that information to determine that another participant will also have a black pixel in that location. Knowing where black pixels exist in another party's share allows them to create a new share that will combine with the predicted share to form a new secret message. In this way a set of colluding parties that have enough shares to access the secret code can cheat other honest parties.
== Visual steganography ==
2×2 subpixels can also encode a binary image in each component image, as in the scheme on the right. Each white pixel of each component image is represented by two black subpixels, while each black pixel is represented by three black subpixels.
When overlaid, each white pixel of the secret image is represented by three black subpixels, while each black pixel is represented by all four subpixels black. Each corresponding pixel in the component images is randomly rotated to avoid orientation leaking information about the secret image.
== In popular culture ==
In "Do Not Forsake Me Oh My Darling", a 1967 episode of TV series The Prisoner, the protagonist uses a visual cryptography overlay of multiple transparencies to reveal a secret message – the location of a scientist friend who had gone into hiding.
== See also ==
Grille (cryptography)
Steganography
== References ==
== External links ==
Java implementation and illustrations of Visual Cryptography
Python implementation of Visual Cryptography
Visual Cryptography on Cipher Machines & Cryptology
Doug Stinson's visual cryptography page
Liu, Feng; Yan, Wei Qi (2014) Visual Cryptography for Image Processing and Security: Theory, Methods, and Applications, Springer
Hammoudi, Karim; Melkemi, Mahmoud (2018). "Personalized Shares in Visual Cryptography". Journal of Imaging. 4 (11): 126. doi:10.3390/jimaging4110126. | Wikipedia/Visual_cryptography |
A whitening transformation or sphering transformation is a linear transformation that transforms a vector of random variables with a known covariance matrix into a set of new variables whose covariance is the identity matrix, meaning that they are uncorrelated and each have variance 1. The transformation is called "whitening" because it changes the input vector into a white noise vector.
Several other transformations are closely related to whitening:
the decorrelation transform removes only the correlations but leaves variances intact,
the standardization transform sets variances to 1 but leaves correlations intact,
a coloring transformation transforms a vector of white random variables into a random vector with a specified covariance matrix.
== Definition ==
Suppose X is a random (column) vector with non-singular covariance matrix Σ and mean 0. Then the transformation

{\displaystyle Y=WX}

with a whitening matrix W satisfying the condition

{\displaystyle W^{\mathrm {T} }W=\Sigma ^{-1}}

yields the whitened random vector Y with unit diagonal covariance.
If X has a non-zero mean μ, then whitening can be performed by

{\displaystyle Y=W(X-\mu ).}
There are infinitely many possible whitening matrices W that all satisfy the above condition. Commonly used choices are
{\displaystyle W=\Sigma ^{-1/2}}
(Mahalanobis or ZCA whitening),
{\displaystyle W=L^{T}}
where L is the Cholesky decomposition of Σ⁻¹ (Cholesky whitening), or the eigen-system of Σ (PCA whitening).
Optimal whitening transforms can be singled out by investigating the cross-covariance and cross-correlation of X and Y. For example, the unique optimal whitening transformation achieving maximal component-wise correlation between original X and whitened Y is produced by the whitening matrix
{\displaystyle W=P^{-1/2}V^{-1/2}}
where P is the correlation matrix and V the diagonal variance matrix.
== Whitening a data matrix ==
Whitening a data matrix follows the same transformation as for random variables. An empirical whitening transform is obtained by estimating the covariance (e.g. by maximum likelihood) and subsequently constructing a corresponding estimated whitening matrix (e.g. by Cholesky decomposition).
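As a concrete sketch of this empirical procedure (assuming NumPy; the data and dimensions are illustrative), ZCA whitening estimates the covariance and applies W = Σ̂^(−1/2) built from its eigendecomposition:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: 1000 observations of a correlated 3-dimensional vector
X = rng.standard_normal((1000, 3)) @ np.array([[2.0, 0.0, 0.0],
                                               [0.5, 1.0, 0.0],
                                               [0.3, 0.2, 0.5]])
Xc = X - X.mean(axis=0)            # center the data
Sigma = np.cov(Xc, rowvar=False)   # empirical covariance estimate

# ZCA (Mahalanobis) whitening matrix W = Sigma^(-1/2), via eigendecomposition
vals, vecs = np.linalg.eigh(Sigma)
W = vecs @ np.diag(vals ** -0.5) @ vecs.T

Y = Xc @ W.T                       # whitened data
print(np.round(np.cov(Y, rowvar=False), 6))  # ≈ 3x3 identity matrix
```

By construction the empirical covariance of Y equals W Σ̂ Wᵀ = I up to floating-point error; Cholesky or PCA whitening would differ only in the choice of W.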
== High-dimensional whitening ==
This modality is a generalization of the pre-whitening procedure extended to more general spaces where
X
{\displaystyle X}
is usually assumed to be a random function or other random objects in a Hilbert space
H
{\displaystyle H}
. One of the main issues of extending whitening to infinite dimensions is that the covariance operator has an unbounded inverse in
H
{\displaystyle H}
. Nevertheless, if one assumes that Picard condition holds for
X
{\displaystyle X}
in the range space of the covariance operator, whitening becomes possible. A whitening operator can be then defined from the factorization of the Moore–Penrose inverse of the covariance operator, which has effective mapping on Karhunen–Loève type expansions of
X
{\displaystyle X}
. The advantage of these whitening transformations is that they can be optimized according to the underlying topological properties of the data, thus producing more robust whitening representations. High-dimensional features of the data can be exploited through kernel regressors or basis function systems.
== R implementation ==
An implementation of several whitening procedures in R, including ZCA-whitening and PCA whitening but also CCA whitening, is available in the "whitening" R package published on CRAN. The R package "pfica" allows the computation of high-dimensional whitening representations using basis function systems (B-splines, Fourier basis, etc.).
== See also ==
Decorrelation
Principal component analysis
Weighted least squares
Canonical correlation
Mahalanobis distance (which is Euclidean distance after a whitening transformation)
== References ==
== External links ==
http://courses.media.mit.edu/2010fall/mas622j/whiten.pdf
The ZCA whitening transformation. Appendix A of Learning Multiple Layers of Features from Tiny Images by A. Krizhevsky. | Wikipedia/Whitening_transformation |
In cryptography, higher-order differential cryptanalysis is a generalization of differential cryptanalysis, an attack used against block ciphers. While in standard differential cryptanalysis the difference between only two texts is used, higher-order differential cryptanalysis studies the propagation of a set of differences between a larger set of texts. Xuejia Lai, in 1994, laid the groundwork by showing that differentials are a special case of the more general notion of higher-order derivatives. Lars Knudsen, in the same year, was able to show how the concept of higher-order derivatives can be used to mount attacks on block ciphers. These attacks can be superior to standard differential cryptanalysis. Higher-order differential cryptanalysis has notably been used to break the KN-Cipher, a cipher which had previously been proved to be immune against standard differential cryptanalysis.
== Higher-order derivatives ==
A block cipher which maps n-bit strings to n-bit strings can, for a fixed key, be thought of as a function
{\displaystyle f:\mathbb {F} _{2}^{n}\to \mathbb {F} _{2}^{n}}
. In standard differential cryptanalysis, one is interested in finding a pair of an input difference α and an output difference β such that two input texts with difference α are likely to result in output texts with a difference β, i.e., that
{\displaystyle f(m\oplus \alpha )\oplus f(m)=\beta }
holds for many m ∈ F₂ⁿ. Note that the difference used here is the XOR, which is the usual case, though other definitions of difference are possible.
This motivates defining the derivative of a function f : F₂ⁿ → F₂ⁿ at a point α as
{\displaystyle \Delta _{\alpha }f(x)=f(x\oplus \alpha )\oplus f(x).}
Using this definition, the i-th derivative at (α₁, α₂, …, α_i) can recursively be defined as
{\displaystyle \Delta _{\alpha _{1},\dots ,\alpha _{i}}^{(i)}f(x)=\Delta _{\alpha _{i}}\left(\Delta _{\alpha _{1},\dots ,\alpha _{i-1}}^{(i-1)}f(x)\right).}
Thus for example
{\displaystyle \Delta _{\alpha _{1},\alpha _{2}}^{(2)}f(x)=f(x)\oplus f(x\oplus \alpha _{1})\oplus f(x\oplus \alpha _{2})\oplus f(x\oplus \alpha _{1}\oplus \alpha _{2})}.
Higher-order derivatives as defined here have many properties in common with ordinary derivatives, such as the sum rule and the product rule. Importantly, taking the derivative reduces the algebraic degree of the function.
== Higher-order differential attacks ==
To implement an attack using higher order derivatives, knowledge about the probability distribution of the derivative of the cipher is needed. Calculating or estimating this distribution is generally a hard problem but if the cipher in question is known to have a low algebraic degree, the fact that derivatives reduce this degree can be used. For example, if a cipher (or the S-box function under analysis) is known to only have an algebraic degree of 8, any 9th order derivative must be 0.
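The degree-reduction argument can be demonstrated on a toy Boolean function. In the sketch below, g is an arbitrary degree-2 function on F₂⁴ (not a real cipher), and any third-order derivative of it vanishes identically:

```python
N = 4  # dimension: points of F_2^N are represented as N-bit integers, XOR is ^

def g(x):
    """An arbitrary example function of algebraic degree 2 on F_2^4."""
    x0, x1, x2, x3 = (x >> 0) & 1, (x >> 1) & 1, (x >> 2) & 1, (x >> 3) & 1
    b0 = (x0 & x1) ^ x2       # one degree-2 monomial plus a linear term
    b1 = (x1 & x3) ^ x0
    return b0 | (b1 << 1)

def derivative(f, a):
    """First-order derivative of f at the point a: x -> f(x ^ a) ^ f(x)."""
    return lambda x: f(x ^ a) ^ f(x)

# Since deg(g) = 2, any third-order derivative must be identically zero.
d3 = derivative(derivative(derivative(g, 1), 2), 4)
print(all(d3(x) == 0 for x in range(2 ** N)))   # True
```

Each application of `derivative` lowers the algebraic degree by at least one, so after three derivatives a degree-2 function is the constant zero, exactly the distinguishing property an attacker exploits.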
Therefore, it is important for any cipher, and in particular for its S-box functions, to have maximal (or close to maximal) algebraic degree in order to resist this attack.
Cube attacks have been considered a variant of higher-order differential attacks.
== Resistance against Higher-order differential attacks ==
== Limitations of Higher-order differential attacks ==
Higher-order differential attacks apply mainly to ciphers built from small S-boxes or S-boxes of low algebraic degree, and to constructions using only AND and XOR operations.
== See also ==
Differential Cryptanalysis
KN-Cipher
Cube attack
== References == | Wikipedia/Higher-order_differential_cryptanalysis |
The MD2 Message-Digest Algorithm is a cryptographic hash function developed by Ronald Rivest in 1989. The algorithm is optimized for 8-bit computers. MD2 is specified in IETF RFC 1319. The "MD" in MD2 stands for "Message Digest".
Even though MD2 is not yet fully compromised, the IETF retired MD2 to "historic" status in 2011, citing "signs of weakness". It is deprecated in favor of SHA-256 and other strong hashing algorithms.
Nevertheless, as of 2014, it remained in use in public key infrastructures as part of certificates generated with MD2 and RSA.
== Description ==
The 128-bit hash value of any message is formed by padding it to a multiple of the block length (128 bits or 16 bytes) and adding a 16-byte checksum to it. For the actual calculation, a 48-byte auxiliary block and a 256-byte S-table are used. The constants were generated by shuffling the integers 0 through 255 using a variant of Durstenfeld's algorithm with a pseudorandom number generator based on decimal digits of π (pi) (see nothing up my sleeve number). The algorithm runs through a loop where it permutes each byte in the auxiliary block 18 times for every 16 input bytes processed. Once all of the blocks of the (lengthened) message have been processed, the first 16 bytes of the auxiliary block become the hash value of the message.
The S-table values in hex are:
{ 0x29, 0x2E, 0x43, 0xC9, 0xA2, 0xD8, 0x7C, 0x01, 0x3D, 0x36, 0x54, 0xA1, 0xEC, 0xF0, 0x06, 0x13,
0x62, 0xA7, 0x05, 0xF3, 0xC0, 0xC7, 0x73, 0x8C, 0x98, 0x93, 0x2B, 0xD9, 0xBC, 0x4C, 0x82, 0xCA,
0x1E, 0x9B, 0x57, 0x3C, 0xFD, 0xD4, 0xE0, 0x16, 0x67, 0x42, 0x6F, 0x18, 0x8A, 0x17, 0xE5, 0x12,
0xBE, 0x4E, 0xC4, 0xD6, 0xDA, 0x9E, 0xDE, 0x49, 0xA0, 0xFB, 0xF5, 0x8E, 0xBB, 0x2F, 0xEE, 0x7A,
0xA9, 0x68, 0x79, 0x91, 0x15, 0xB2, 0x07, 0x3F, 0x94, 0xC2, 0x10, 0x89, 0x0B, 0x22, 0x5F, 0x21,
0x80, 0x7F, 0x5D, 0x9A, 0x5A, 0x90, 0x32, 0x27, 0x35, 0x3E, 0xCC, 0xE7, 0xBF, 0xF7, 0x97, 0x03,
0xFF, 0x19, 0x30, 0xB3, 0x48, 0xA5, 0xB5, 0xD1, 0xD7, 0x5E, 0x92, 0x2A, 0xAC, 0x56, 0xAA, 0xC6,
0x4F, 0xB8, 0x38, 0xD2, 0x96, 0xA4, 0x7D, 0xB6, 0x76, 0xFC, 0x6B, 0xE2, 0x9C, 0x74, 0x04, 0xF1,
0x45, 0x9D, 0x70, 0x59, 0x64, 0x71, 0x87, 0x20, 0x86, 0x5B, 0xCF, 0x65, 0xE6, 0x2D, 0xA8, 0x02,
0x1B, 0x60, 0x25, 0xAD, 0xAE, 0xB0, 0xB9, 0xF6, 0x1C, 0x46, 0x61, 0x69, 0x34, 0x40, 0x7E, 0x0F,
0x55, 0x47, 0xA3, 0x23, 0xDD, 0x51, 0xAF, 0x3A, 0xC3, 0x5C, 0xF9, 0xCE, 0xBA, 0xC5, 0xEA, 0x26,
0x2C, 0x53, 0x0D, 0x6E, 0x85, 0x28, 0x84, 0x09, 0xD3, 0xDF, 0xCD, 0xF4, 0x41, 0x81, 0x4D, 0x52,
0x6A, 0xDC, 0x37, 0xC8, 0x6C, 0xC1, 0xAB, 0xFA, 0x24, 0xE1, 0x7B, 0x08, 0x0C, 0xBD, 0xB1, 0x4A,
0x78, 0x88, 0x95, 0x8B, 0xE3, 0x63, 0xE8, 0x6D, 0xE9, 0xCB, 0xD5, 0xFE, 0x3B, 0x00, 0x1D, 0x39,
0xF2, 0xEF, 0xB7, 0x0E, 0x66, 0x58, 0xD0, 0xE4, 0xA6, 0x77, 0x72, 0xF8, 0xEB, 0x75, 0x4B, 0x0A,
0x31, 0x44, 0x50, 0xB4, 0x8F, 0xED, 0x1F, 0x1A, 0xDB, 0x99, 0x8D, 0x33, 0x9F, 0x11, 0x83, 0x14 }
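The procedure described above is compact enough to sketch directly. Below is a minimal, unoptimized Python implementation following RFC 1319 (with the well-known checksum erratum applied: the checksum byte is XORed rather than overwritten), using the S-table listed above:

```python
# S-table (a permutation of 0..255 derived from the digits of pi), as above
S = bytes.fromhex(
    "292e43c9a2d87c013d3654a1ecf00613" "62a705f3c0c7738c98932bd9bc4c82ca"
    "1e9b573cfdd4e01667426f188a17e512" "be4ec4d6da9ede49a0fbf58ebb2fee7a"
    "a968799115b2073f94c210890b225f21" "807f5d9a5a903227353ecce7bff79703"
    "ff1930b348a5b5d1d75e922aac56aac6" "4fb838d296a47db676fc6be29c7404f1"
    "459d705964718720865bcf65e62da802" "1b6025adaeb0b9f61c46616934407e0f"
    "5547a323dd51af3ac35cf9cebac5ea26" "2c530d6e85288409d3dfcdf441814d52"
    "6adc37c86cc1abfa24e17b080cbdb14a" "7888958be363e86de9cbd5fe3b001d39"
    "f2efb70e6658d0e4a67772f8eb754b0a" "314450b48fed1f1adb998d339f118314"
)

def md2(message: bytes) -> str:
    # 1. Pad with i bytes of value i so the length is a multiple of 16
    i = 16 - len(message) % 16
    data = message + bytes([i]) * i
    # 2. Compute the 16-byte checksum and append it
    checksum = bytearray(16)
    last = 0
    for off in range(0, len(data), 16):
        for j in range(16):
            checksum[j] ^= S[data[off + j] ^ last]
            last = checksum[j]
    data += bytes(checksum)
    # 3. Process each 16-byte block through the 48-byte auxiliary block
    X = bytearray(48)
    for off in range(0, len(data), 16):
        for j in range(16):
            X[16 + j] = data[off + j]
            X[32 + j] = X[16 + j] ^ X[j]
        t = 0
        for r in range(18):          # 18 permutation rounds per block
            for k in range(48):
                t = X[k] = X[k] ^ S[t]
            t = (t + r) % 256
    # 4. The first 16 bytes of the auxiliary block are the digest
    return bytes(X[:16]).hex()

print(md2(b""))   # 8350e5a3e24c153df2275c9f80692773
```

The function reproduces the test vectors given in the next section.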
== MD2 hashes ==
The 128-bit (16-byte) MD2 hashes (also termed message digests) are typically represented as 32-digit hexadecimal numbers. The following demonstrates a 43-byte ASCII input and the corresponding MD2 hash:
MD2("The quick brown fox jumps over the lazy dog") =
03d85a0d629d2c442e987525319fc471
As the result of the avalanche effect in MD2, even a small change in the input message will (with overwhelming probability) result in a completely different hash. For example, changing the letter d to c in the message results in:
MD2("The quick brown fox jumps over the lazy cog") =
6b890c9292668cdbbfda00a4ebf31f05
The hash of the zero-length string is:
MD2("") =
8350e5a3e24c153df2275c9f80692773
== Security ==
Rogier and Chauvaud presented in 1995 collisions of MD2's compression function, although they were unable to extend the attack to the full MD2. A full description of these collisions was published in 1997.
In 2004, MD2 was shown to be vulnerable to a preimage attack with time complexity equivalent to 2^104 applications of the compression function. The author concludes, "MD2 can no longer be considered a secure one-way hash function".
In 2008, the preimage attack on MD2 was further improved, with time complexity of 2^73 compression function evaluations and memory requirements of 2^73 message blocks.
In 2009, MD2 was shown to be vulnerable to a collision attack with time complexity of 2^63.3 compression function evaluations and memory requirements of 2^52 hash values. This is slightly better than the birthday attack, which is expected to take 2^65.5 compression function evaluations.
In 2009, security updates were issued disabling MD2 in OpenSSL, GnuTLS, and Network Security Services.
== See also ==
Hash function security summary
Comparison of cryptographic hash functions
MD4
MD5
MD6
SHA-1
== References ==
== Further reading ==
== External links == | Wikipedia/MD2_(hash_function) |
In modern cryptography, symmetric key ciphers are generally divided into stream ciphers and block ciphers. Block ciphers operate on a fixed length string of bits. The length of this bit string is the block size. Both the input (plaintext) and output (ciphertext) are the same length; the output cannot be shorter than the input – this follows logically from the pigeonhole principle and the fact that the cipher must be reversible – and it is undesirable for the output to be longer than the input.
Until the announcement of NIST's AES contest, the majority of block ciphers followed the example of the DES in using a block size of 64 bits (8 bytes). However, the birthday paradox indicates that after accumulating a number of blocks equal to the square root of the total number possible, there will be an approximately 50% chance of two or more being the same, which would start to leak information about the message contents. Thus even when used with a proper encryption mode (e.g. CBC or OFB), only 2^32 × 8 B = 32 GB of data can be safely sent under one key. In practice a greater margin of security is desired, restricting a single key to the encryption of much less data, say a few hundred megabytes. At one point that seemed like a fair amount of data, but today it is easily exceeded. If the cipher mode does not properly randomise the input, the limit is even lower.
Consequently, AES candidates were required to support a block length of 128 bits (16 bytes). This should be acceptable for up to 2^64 × 16 B = 256 exabytes of data, and would suffice for many years after introduction. The winner of the AES contest, Rijndael, supports block and key sizes of 128, 192, and 256 bits, but in AES the block size is always 128 bits. The extra block sizes were not adopted by the AES standard.
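The figures above follow from the birthday bound. A rough sketch (the 50% threshold is approximated with the standard birthday formula; all numbers are order-of-magnitude estimates):

```python
import math

def blocks_for_collision(block_bits: int, p: float = 0.5) -> float:
    """Approximate number of uniformly random blocks after which two equal
    blocks appear with probability p (birthday approximation)."""
    n = 2.0 ** block_bits
    return math.sqrt(2.0 * n * math.log(1.0 / (1.0 - p)))

for bits, nbytes in [(64, 8), (128, 16)]:
    blocks = blocks_for_collision(bits)
    print(f"{bits}-bit blocks: ~2^{math.log2(blocks):.1f} blocks "
          f"(~{blocks * nbytes / 2**30:.0f} GiB) until a 50% collision chance")
```

For 64-bit blocks this lands near the 2^32-block, tens-of-gigabytes regime described above, while for 128-bit blocks the safe volume is astronomically larger.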
Many block ciphers, such as RC5, support a variable block size. The Luby-Rackoff construction and the Outerbridge construction can both increase the effective block size of a cipher.
Joan Daemen's 3-Way and BaseKing have unusual block sizes of 96 and 192 bits, respectively.
== See also ==
Ciphertext stealing
Format-preserving encryption
== References == | Wikipedia/Block_size_(cryptography) |
The Cayley–Purser algorithm was a public-key cryptography algorithm published in early 1999 by 16-year-old Irishwoman Sarah Flannery, based on an unpublished work by Michael Purser, founder of Baltimore Technologies, a Dublin data security company. Flannery named it for mathematician Arthur Cayley. It has since been found to be flawed as a public-key algorithm, but was the subject of considerable media attention.
== History ==
During a work-experience placement with Baltimore Technologies, Flannery was shown an unpublished paper by Michael Purser which outlined a new public-key cryptographic scheme using non-commutative multiplication. She was asked to write an implementation of this scheme in Mathematica.
Before this placement, Flannery had attended the 1998 ESAT Young Scientist and Technology Exhibition with a project describing already existing cryptographic techniques from the Caesar cipher to RSA. This had won her the Intel Student Award which included the opportunity to compete in the 1998 Intel International Science and Engineering Fair in the United States. Feeling that she needed some original work to add to her exhibition project, Flannery asked Michael Purser for permission to include work based on his cryptographic scheme.
On advice from her mathematician father, Flannery decided to use matrices to implement Purser's scheme as matrix multiplication has the necessary property of being non-commutative. As the resulting algorithm would depend on multiplication it would be a great deal faster than the RSA algorithm which uses an exponential step. For her Intel Science Fair project Flannery prepared a demonstration where the same plaintext was enciphered using both RSA and her new Cayley–Purser algorithm and it did indeed show a significant time improvement.
Returning to the ESAT Young Scientist and Technology Exhibition in 1999, Flannery formalised Cayley-Purser's runtime and analyzed a variety of known attacks, none of which were determined to be effective.
Flannery did not make any claims that the Cayley–Purser algorithm would replace RSA, knowing that any new cryptographic system would need to stand the test of time before it could be acknowledged as a secure system. The media were not so circumspect however and when she received first prize at the ESAT exhibition, newspapers around the world reported the story that a young girl genius had revolutionised cryptography.
In fact an attack on the algorithm was discovered shortly afterwards but she analyzed it and included it as an appendix in later competitions, including a Europe-wide competition in which she won a major award.
== Overview ==
Notation used in this discussion is as in Flannery's original paper.
=== Key generation ===
Like RSA, Cayley-Purser begins by generating two large primes p and q and their product n, a semiprime. Next, consider GL(2,n), the general linear group of 2×2 matrices with integer elements and modular arithmetic mod n. For example, if n=5, we could write:
{\displaystyle {\begin{bmatrix}0&1\\2&3\end{bmatrix}}+{\begin{bmatrix}1&2\\3&4\end{bmatrix}}={\begin{bmatrix}1&3\\5&7\end{bmatrix}}\equiv {\begin{bmatrix}1&3\\0&2\end{bmatrix}}}
{\displaystyle {\begin{bmatrix}0&1\\2&3\end{bmatrix}}{\begin{bmatrix}1&2\\3&4\end{bmatrix}}={\begin{bmatrix}3&4\\11&16\end{bmatrix}}\equiv {\begin{bmatrix}3&4\\1&1\end{bmatrix}}}
This group is chosen because it has large order (for large semiprime n), equal to (p² − 1)(p² − p)(q² − 1)(q² − q).
Let χ and α be two such matrices from GL(2,n) chosen such that χα ≠ αχ. Choose some natural number r and compute:
{\displaystyle \beta =\chi ^{-1}\alpha ^{-1}\chi ,}
{\displaystyle \gamma =\chi ^{r}.}
The public key is n, α, β, and γ. The private key is χ.
=== Encryption ===
The sender begins by generating a random natural number s and computing:
{\displaystyle \delta =\gamma ^{s}}
{\displaystyle \epsilon =\delta ^{-1}\alpha \delta }
{\displaystyle \kappa =\delta ^{-1}\beta \delta }
Then, to encrypt a message, each message block is encoded as a number (as in RSA) and the numbers are placed four at a time as elements of a plaintext matrix μ. Each μ is encrypted using:
{\displaystyle \mu '=\kappa \mu \kappa .}
Then μ′ and ε are sent to the receiver.
=== Decryption ===
The receiver recovers the original plaintext matrix μ via:
{\displaystyle \lambda =\chi ^{-1}\epsilon \chi ,}
{\displaystyle \mu =\lambda \mu '\lambda .}
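The full scheme can be exercised end-to-end with toy parameters. The sketch below uses primes far too small for any security and hand-rolled 2×2 matrix helpers; the exponent r = 17 and the plaintext matrix are arbitrary illustrative choices:

```python
import random
from math import gcd

p, q = 101, 113          # toy primes; n would need to be far larger in practice
n = p * q

def mmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % n for j in range(2)]
            for i in range(2)]

def minv(A):
    det = (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % n
    d = pow(det, -1, n)              # modular inverse (needs gcd(det, n) == 1)
    return [[A[1][1] * d % n, -A[0][1] * d % n],
            [-A[1][0] * d % n, A[0][0] * d % n]]

def mpow(A, e):
    R = [[1, 0], [0, 1]]
    while e:
        if e & 1:
            R = mmul(R, A)
        A = mmul(A, A)
        e >>= 1
    return R

def rand_invertible():
    while True:
        A = [[random.randrange(n) for _ in range(2)] for _ in range(2)]
        if gcd((A[0][0] * A[1][1] - A[0][1] * A[1][0]) % n, n) == 1:
            return A

# Key generation: chi is private; n, alpha, beta, gamma are public
chi, alpha = rand_invertible(), rand_invertible()
while mmul(chi, alpha) == mmul(alpha, chi):      # need a non-commuting pair
    alpha = rand_invertible()
beta = mmul(mmul(minv(chi), minv(alpha)), chi)
gamma = mpow(chi, 17)                            # r = 17, an arbitrary choice

# Encryption of one plaintext matrix mu
mu = [[12, 34], [56, 78]]
delta = mpow(gamma, random.randrange(2, n))      # s is random per message
epsilon = mmul(mmul(minv(delta), alpha), delta)
kappa = mmul(mmul(minv(delta), beta), delta)
mu_enc = mmul(mmul(kappa, mu), kappa)            # mu' = kappa mu kappa

# Decryption: since delta is a power of chi, lambda = chi^-1 epsilon chi = kappa^-1
lam = mmul(mmul(minv(chi), epsilon), chi)
mu_dec = mmul(mmul(lam, mu_enc), lam)
print(mu_dec == mu)   # True
```

Decryption works because δ, being a power of χ, commutes with χ, so λκ is the identity matrix and λμ′λ = μ.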
== Security ==
Recovering the private key χ from γ is computationally infeasible, at least as hard as finding square roots mod n (see quadratic residue). It could be recovered from α and β if the system
{\displaystyle \chi \beta =\alpha ^{-1}\chi }
could be solved, but the number of solutions to this system is large as long as elements in the group have a large order, which can be guaranteed for almost every element.
However, the system can be broken by finding a multiple χ′ of χ by solving for d in the following congruence:
{\displaystyle d\left(\beta -\alpha ^{-1}\right)\equiv \left(\alpha ^{-1}\gamma -\gamma \beta \right){\pmod {n}}}
Observe that a solution exists if for some i, j ∈ |γ| and x, y ∈ ℤn,
{\displaystyle x\left(\beta _{ij}^{-1}-\alpha _{ij}\right)\equiv y{\pmod {n}}.}
If d is known, then dI + γ = χ′, a multiple of χ. Any multiple of χ yields
{\displaystyle \lambda =\kappa ^{-1}=v^{-1}\chi ^{-1}\epsilon v\chi }
. This presents a fatal weakness for the system, which has not yet been reconciled.
This flaw does not preclude the algorithm's use as a mixed private-key/public-key algorithm, if the sender transmits ε secretly, but this approach presents no advantage over the common approach of transmitting a symmetric encryption key using a public-key encryption scheme and then switching to symmetric encryption, which is faster than Cayley–Purser.
== See also ==
Non-commutative cryptography
== References ==
Sarah Flannery and David Flannery. In Code: A Mathematical Journey. ISBN 0-7611-2384-9 | Wikipedia/Cayley–Purser_algorithm |
In graph theory, an expander graph is a sparse graph that has strong connectivity properties, quantified using vertex, edge or spectral expansion. Expander constructions have spawned research in pure and applied mathematics, with several applications to complexity theory, design of robust computer networks, and the theory of error-correcting codes.
== Definitions ==
Intuitively, an expander graph is a finite, undirected multigraph in which every subset of the vertices that is not "too large" has a "large" boundary. Different formalisations of these notions give rise to different notions of expanders: edge expanders, vertex expanders, and spectral expanders, as defined below.
A disconnected graph is not an expander, since the boundary of a connected component is empty. Every connected finite graph is an expander; however, different connected graphs have different expansion parameters. The complete graph has the best expansion property, but it has the largest possible degree. Informally, a graph is a good expander if it has low degree and high expansion parameters.
=== Edge expansion ===
The edge expansion (also isoperimetric number or Cheeger constant) h(G) of a graph G on n vertices is defined as
{\displaystyle h(G)=\min _{0<|S|\leq {\frac {n}{2}}}{\frac {|\partial S|}{|S|}},}
where
{\displaystyle \partial S:=\{\{u,v\}\in E(G)\ :\ u\in S,v\notin S\},}
which can also be written as ∂S = E(S, S̄) with S̄ := V(G) \ S the complement of S and
{\displaystyle E(A,B)=\{\{u,v\}\in E(G)\ :\ u\in A,v\in B\}}
the edges between the subsets of vertices A,B ⊆ V(G).
In the equation, the minimum is over all nonempty sets S of at most n⁄2 vertices and ∂S is the edge boundary of S, i.e., the set of edges with exactly one endpoint in S.
Intuitively,
{\displaystyle \min {|\partial S|}=\min E({S},{\overline {S}})}
is the minimum number of edges that need to be cut in order to split the graph in two. The edge expansion normalizes this concept by dividing by the smaller of the two parts' vertex counts.
To see how the normalization can drastically change the value, consider the following example.
Take two complete graphs with the same number of vertices n and add n edges between the two graphs by connecting their vertices one-to-one.
The minimum cut will be n but the edge expansion will be 1.
Notice that in min |∂S|, the optimization can be equivalently done either over 0 < |S| ≤ n⁄2 or over any non-empty subset, since
{\displaystyle E(S,{\overline {S}})=E({\overline {S}},S)}
. The same is not true for h(G) because of the normalization by |S|.
If we want to write h(G) with an optimization over all non-empty subsets, we can rewrite it as
{\displaystyle h(G)=\min _{\emptyset \subsetneq S\subsetneq V(G)}{\frac {E({S},{\overline {S}})}{\min\{|S|,|{\overline {S}}|\}}}.}
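The edge expansion of the two-cliques example above can be verified by brute force (a sketch; it enumerates all subsets, so it is only feasible for tiny graphs):

```python
from itertools import combinations

# Two copies of K4 (vertices 0-3 and 4-7) joined by a perfect matching
V = range(8)
E = {frozenset(e) for e in combinations(range(4), 2)}
E |= {frozenset(e) for e in combinations(range(4, 8), 2)}
E |= {frozenset((i, i + 4)) for i in range(4)}

def edge_expansion(V, E):
    best = float("inf")
    for k in range(1, len(V) // 2 + 1):          # subsets with 0 < |S| <= n/2
        for S in combinations(V, k):
            S = set(S)
            boundary = sum(1 for e in E if len(e & S) == 1)
            best = min(best, boundary / len(S))
    return best

# Minimum cut over all nonempty proper subsets, for comparison
min_cut = min(sum(1 for e in E if len(e & set(S)) == 1)
              for k in range(1, len(V)) for S in combinations(V, k))
print(edge_expansion(V, E), min_cut)   # 1.0 4
```

As stated above, the minimum cut (here n = 4 matching edges) and the edge expansion (1) can differ substantially because of the normalization.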
=== Vertex expansion ===
The vertex isoperimetric number hout(G) (also vertex expansion or magnification) of a graph G is defined as
{\displaystyle h_{\text{out}}(G)=\min _{0<|S|\leq {\frac {n}{2}}}{\frac {|\partial _{\text{out}}(S)|}{|S|}},}
where ∂out(S) is the outer boundary of S, i.e., the set of vertices in V(G) \ S with at least one neighbor in S. In a variant of this definition (called unique neighbor expansion) ∂out(S) is replaced by the set of vertices in V with exactly one neighbor in S.
The vertex isoperimetric number hin(G) of a graph G is defined as
{\displaystyle h_{\text{in}}(G)=\min _{0<|S|\leq {\frac {n}{2}}}{\frac {|\partial _{\text{in}}(S)|}{|S|}},}
where ∂in(S) is the inner boundary of S, i.e., the set of vertices in S with at least one neighbor in V(G) \ S.
=== Spectral expansion ===
When G is d-regular, a linear algebraic definition of expansion is possible based on the eigenvalues of the adjacency matrix A = A(G) of G, where Aij is the number of edges between vertices i and j. Because A is symmetric, the spectral theorem implies that A has n real-valued eigenvalues λ1 ≥ λ2 ≥ … ≥ λn. It is known that all these eigenvalues are in [−d, d] and more specifically, it is known that λn = −d if and only if G is bipartite.
More formally, we refer to an n-vertex, d-regular graph with
{\displaystyle \max _{i\neq 1}|\lambda _{i}|\leq \lambda }
as an (n, d, λ)-graph. The bound given by an (n, d, λ)-graph on λi for i ≠ 1 is useful in many contexts, including the expander mixing lemma.
Spectral expansion can be two-sided, as above, with
{\displaystyle \max _{i\neq 1}|\lambda _{i}|\leq \lambda }
, or it can be one-sided, with
{\displaystyle \max _{i\neq 1}\lambda _{i}\leq \lambda }
. The latter is a weaker notion that holds also for bipartite graphs and is still useful for many applications, such as the Alon–Chung lemma.
Because G is regular, the uniform distribution u ∈ ℝⁿ with ui = 1⁄n for all i = 1, …, n is the stationary distribution of G. That is, we have Au = du, and u is an eigenvector of A with eigenvalue λ1 = d, where d is the degree of the vertices of G. The spectral gap of G is defined to be d − λ2, and it measures the spectral expansion of the graph G.
If we set
{\displaystyle \lambda =\max\{|\lambda _{2}|,|\lambda _{n}|\}}
as this is the largest eigenvalue corresponding to an eigenvector orthogonal to u, it can be equivalently defined using the Rayleigh quotient:
{\displaystyle \lambda =\max _{v\perp u,v\neq 0}{\frac {\|Av\|_{2}}{\|v\|_{2}}},}
where
{\displaystyle \|v\|_{2}=\left(\sum _{i=1}^{n}v_{i}^{2}\right)^{1/2}}
is the 2-norm of the vector v ∈ ℝⁿ.
The normalized versions of these definitions are also widely used and more convenient in stating some results. Here one considers the matrix 1/dA, which is the Markov transition matrix of the graph G. Its eigenvalues are between −1 and 1. For not necessarily regular graphs, the spectrum of a graph can be defined similarly using the eigenvalues of the Laplacian matrix. For directed graphs, one considers the singular values of the adjacency matrix A, which are equal to the roots of the eigenvalues of the symmetric matrix ATA.
=== Expander families ===
A family (Gi)i∈ℕ of d-regular graphs of increasing size is an expander family if h(Gi) is bounded away from zero.
== Relationships between different expansion properties ==
The expansion parameters defined above are related to each other. In particular, for any d-regular graph G,
{\displaystyle h_{\text{out}}(G)\leq h(G)\leq d\cdot h_{\text{out}}(G).}
Consequently, for constant degree graphs, vertex and edge expansion are qualitatively the same.
=== Cheeger inequalities ===
When G is d-regular, meaning each vertex is of degree d, there is a relationship between the isoperimetric constant h(G) and the gap d − λ2 in the spectrum of the adjacency operator of G. By standard spectral graph theory, the trivial eigenvalue of the adjacency operator of a d-regular graph is λ1 = d and the first non-trivial eigenvalue is λ2. If G is connected, then λ2 < d. An inequality due to Dodziuk and independently Alon and Milman states that
{\displaystyle {\tfrac {1}{2}}(d-\lambda _{2})\leq h(G)\leq {\sqrt {2d(d-\lambda _{2})}}.}
In fact, the lower bound is tight; it is achieved in the limit by the hypercube Qn, where h(G) = 1 and d – λ2 = 2. The upper bound is (asymptotically) achieved for a cycle, where h(Cn) = 4/n = Θ(1/n) and d – λ2 = 2 – 2cos(2π/n) ≈ (2π/n)² = Θ(1/n²). A better bound is given as
{\displaystyle h(G)\leq {\sqrt {d^{2}-\lambda _{2}^{2}}}.}
These inequalities are closely related to the Cheeger bound for Markov chains and can be seen as a discrete version of Cheeger's inequality in Riemannian geometry.
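Both sides of the Dodziuk–Alon–Milman inequality can be checked numerically for a small cycle, assuming NumPy is available (using h(Cn) = 4/n for even n, as noted above):

```python
import numpy as np

n = 12
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1   # adjacency matrix of C_12

d = 2
lam2 = np.sort(np.linalg.eigvalsh(A))[-2]       # second-largest eigenvalue
h = 4 / n                                        # edge expansion of an even cycle

lower = 0.5 * (d - lam2)
upper = np.sqrt(2 * d * (d - lam2))
print(lower <= h <= upper)   # True
```

For C12 one gets λ2 = 2cos(2π/12) = √3, so the bounds evaluate to roughly 0.134 ≤ 1/3 ≤ 1.035.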
Similar connections between vertex isoperimetric numbers and the spectral gap have also been studied:
{\displaystyle h_{\text{out}}(G)\leq \left({\sqrt {4(d-\lambda _{2})}}+1\right)^{2}-1}
{\displaystyle h_{\text{in}}(G)\leq {\sqrt {8(d-\lambda _{2})}}.}
Asymptotically speaking, the quantities h²⁄d, hout, and hin² are all bounded above by the spectral gap O(d – λ2).
== Constructions ==
There are four general strategies for explicitly constructing families of expander graphs. The first strategy is algebraic and group-theoretic, the second strategy is analytic and uses additive combinatorics, the third strategy is combinatorial and uses the zig-zag and related graph products, and the fourth strategy is based on lifts. Noga Alon showed that certain graphs constructed from finite geometries are the sparsest examples of highly expanding graphs.
=== Margulis–Gabber–Galil ===
Algebraic constructions based on Cayley graphs are known for various variants of expander graphs. The following construction is due to Margulis and has been analysed by Gabber and Galil. For every natural number n, one considers the graph Gn with the vertex set
ℤn × ℤn, where ℤn = ℤ/nℤ. For every vertex (x, y) ∈ ℤn × ℤn, its eight adjacent vertices are
{\displaystyle (x\pm 2y,y),(x\pm (2y+1),y),(x,y\pm 2x),(x,y\pm (2x+1)).}
Then the following holds:
Theorem. For all n, the graph Gn has second-largest eigenvalue λ(G) ≤ 5√2.
=== Ramanujan graphs ===
By a theorem of Alon and Boppana, all sufficiently large d-regular graphs satisfy λ2 ≥ 2√(d−1) − o(1), where λ2 is the second largest eigenvalue in absolute value. As a direct consequence, we know that for every fixed d and λ < 2√(d−1), there are only finitely many (n, d, λ)-graphs. Ramanujan graphs are d-regular graphs for which this bound is tight, satisfying
{\displaystyle \lambda =\max _{|\lambda _{i}|<d}|\lambda _{i}|\leq 2{\sqrt {d-1}}.}
Hence Ramanujan graphs have an asymptotically smallest possible value of λ2. This makes them excellent spectral expanders.
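The Ramanujan bound is easy to verify numerically for a small example such as the Petersen graph, which is 3-regular and Ramanujan. The sketch below assumes NumPy and builds the graph as the Kneser graph K(5,2): vertices are the 2-element subsets of {0, …, 4}, adjacent when disjoint:

```python
from itertools import combinations
import numpy as np

# Kneser-graph construction of the Petersen graph
verts = list(combinations(range(5), 2))
A = np.array([[1 if set(u).isdisjoint(v) else 0 for v in verts] for u in verts])

d = int(A.sum(axis=1)[0])                 # the graph is 3-regular
eig = np.sort(np.linalg.eigvalsh(A))      # eigenvalues in ascending order
lam = max(abs(eig[0]), abs(eig[-2]))      # largest non-trivial |eigenvalue|
print(d, round(lam, 6), round(2 * np.sqrt(d - 1), 6))
assert lam <= 2 * np.sqrt(d - 1)          # Ramanujan condition holds
```

The spectrum of the Petersen graph is {3, 1, −2}, so λ = 2 ≤ 2√2 ≈ 2.83, comfortably within the bound.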
Lubotzky, Phillips, and Sarnak (1988), Margulis (1988), and Morgenstern (1994) show how Ramanujan graphs can be constructed explicitly.
In 1985, Alon conjectured that most d-regular graphs on n vertices, for sufficiently large n, are almost Ramanujan. That is, for every ε > 0, they satisfy λ ≤ 2√(d − 1) + ε.
In 2003, Joel Friedman both proved the conjecture and specified what is meant by "most d-regular graphs" by showing that random d-regular graphs have λ ≤ 2√(d − 1) + ε for every ε > 0 with probability 1 − O(n^−τ), where

τ = ⌈(√(d − 1) + 1)/2⌉.
A simpler proof of a slightly weaker result was given by Puder.
Marcus, Spielman and Srivastava gave a construction of bipartite Ramanujan graphs based on lifts.
In 2024, a preprint by Jiaoyang Huang, Theo McKenzie and Horng-Tzer Yau proved that λ ≤ 2√(d − 1), with the fraction of graphs attaining the Alon–Boppana bound approximately 69%. The proof establishes edge universality: the extreme eigenvalues follow a Tracy–Widom distribution associated with the Gaussian Orthogonal Ensemble.
=== Zig-zag product ===
Reingold, Vadhan, and Wigderson introduced the zig-zag product in 2003. Roughly speaking, the zig-zag product of two expander graphs produces a graph with only slightly worse expansion. Therefore, a zig-zag product can also be used to construct families of expander graphs. If G is an (n, m, λ1)-graph and H is an (m, d, λ2)-graph, then the zig-zag product G ◦ H is an (nm, d2, φ(λ1, λ2))-graph where φ has the following properties.
If λ1 < 1 and λ2 < 1, then φ(λ1, λ2) < 1;
φ(λ1, λ2) ≤ λ1 + λ2.
Specifically,

φ(λ1, λ2) = (1⁄2)(1 − λ2²)λ1 + (1⁄2)√((1 − λ2²)²λ1² + 4λ2²).
Note that property (1) implies that the zig-zag product of two expander graphs is also an expander graph, thus zig-zag products can be used inductively to create a family of expander graphs.
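The two stated properties of φ can be checked numerically over a grid of values. A minimal sketch (the function name phi is ours):

```python
import math

def phi(l1, l2):
    """Expansion bound phi(l1, l2) for the zig-zag product."""
    return (0.5 * (1 - l2 ** 2) * l1
            + 0.5 * math.sqrt((1 - l2 ** 2) ** 2 * l1 ** 2 + 4 * l2 ** 2))

# Check both properties on a grid of (l1, l2) values in [0, 1]:
vals = [i / 20 for i in range(21)]
for l1 in vals:
    for l2 in vals:
        p = phi(l1, l2)
        if l1 < 1 and l2 < 1:
            assert p < 1                    # property (1)
        assert p <= l1 + l2 + 1e-12         # property (2)
```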
Intuitively, the construction of the zig-zag product can be thought of in the following way. Each vertex of G is blown up to a "cloud" of m vertices, each associated to a different edge connected to the vertex. Each vertex is now labeled as (v, k) where v refers to an original vertex of G and k refers to the kth edge of v. Two vertices, (v, k) and (w,ℓ) are connected if it is possible to get from (v, k) to (w, ℓ) through the following sequence of moves.
Zig – Move from (v, k) to (v, k' ), using an edge of H.
Jump across clouds using edge k' in G to get to (w, ℓ′).
Zag – Move from (w, ℓ′) to (w, ℓ) using an edge of H.
=== Lifts ===
An r-lift of a graph is formed by replacing each vertex by r vertices, and each edge by a matching between the corresponding sets of r vertices. The lifted graph inherits the eigenvalues of the original graph, and has some additional eigenvalues. Bilu and Linial showed that every d-regular graph has a 2-lift in which the additional eigenvalues are at most O(√(d log³ d)) in magnitude. They also showed that if the starting graph is a good enough expander, then a good 2-lift can be found in polynomial time, thus giving an efficient construction of d-regular expanders for every d.
Bilu and Linial conjectured that the bound O(√(d log³ d)) can be improved to 2√(d − 1), which would be optimal due to the Alon–Boppana bound. This conjecture was proved in the bipartite setting by Marcus, Spielman and Srivastava, who used the method of interlacing polynomials. As a result, they obtained an alternative construction of bipartite Ramanujan graphs. The original non-constructive proof was turned into an algorithm by Michael B. Cohen. Later the method was generalized to r-lifts by Hall, Puder and Sawin.
=== Randomized constructions ===
There are many results that show the existence of graphs with good expansion properties through probabilistic arguments. In fact, the existence of expanders was first proved by Pinsker, who showed that for a randomly chosen n-vertex, left-d-regular bipartite graph, |N(S)| ≥ (d − 2)|S| for all subsets of vertices with |S| ≤ cdn with high probability, where cd is a constant depending on d that is O(d⁻⁴). Alon and Roichman showed that for every 1 > ε > 0, there is some c(ε) > 0 such that the following holds: for a group G of order n, consider the Cayley graph on G with c(ε) log2 n randomly chosen elements from G. Then, as n tends to infinity, the resulting graph is almost surely an ε-expander.
In 2021, Alexander modified an MCMC algorithm to search for randomized constructions producing Ramanujan graphs of a fixed vertex count and degree of regularity. The results show that Ramanujan graphs exist for every vertex-count and degree pair up to 2000 vertices.
In 2024, Alon produced an explicit construction of near-Ramanujan graphs for every vertex-count and degree pair.
== Applications and useful properties ==
The original motivation for expanders is to build economical, robust networks (phone or computer): an expander of bounded degree is a robust network whose number of edges grows only linearly with its size (number of vertices), yet every subset of vertices retains many connections to the rest of the graph.
Expander graphs have found extensive applications in computer science, in designing algorithms, error correcting codes, extractors, pseudorandom generators, sorting networks (Ajtai, Komlós & Szemerédi (1983)) and robust computer networks. They have also been used in proofs of many important results in computational complexity theory, such as SL = L (Reingold (2008)) and the PCP theorem (Dinur (2007)). In cryptography, expander graphs are used to construct hash functions.
In a 2006 survey of expander graphs, Hoory, Linial, and Wigderson split the study of expander graphs into four categories: extremal problems, typical behavior, explicit constructions, and algorithms. Extremal problems focus on the bounding of expansion parameters, while typical behavior problems characterize how the expansion parameters are distributed over random graphs. Explicit constructions focus on constructing graphs that optimize certain parameters, and algorithmic questions study the evaluation and estimation of parameters.
=== Expander mixing lemma ===
The expander mixing lemma states that for an (n, d, λ)-graph, for any two subsets of the vertices S, T ⊆ V, the number of edges between S and T is approximately what you would expect in a random d-regular graph. The approximation is better the smaller λ is. In a random d-regular graph, as well as in an Erdős–Rényi random graph with edge probability d⁄n, we expect d⁄n • |S| • |T| edges between S and T.
More formally, let E(S, T) denote the number of edges between S and T. If the two sets are not disjoint, edges in their intersection are counted twice, that is,
E(S, T) = 2|E(G[S ∩ T])| + E(S \ T, T) + E(S, T \ S).
Then the expander mixing lemma says that the following inequality holds:
| E(S, T) − (d · |S| · |T|)⁄n | ≤ λ√(|S| · |T|).
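The lemma can be illustrated numerically. The sketch below uses the Petersen graph, which is a (10, 3, 2)-graph (its adjacency eigenvalues are 3, 1, and −2), and checks the inequality on random vertex subsets:

```python
import math, random

# The Petersen graph: outer 5-cycle, spokes, inner pentagram.
edges = ([(i, (i + 1) % 5) for i in range(5)]
         + [(i, i + 5) for i in range(5)]
         + [(5 + i, 5 + (i + 2) % 5) for i in range(5)])
adj = [[0] * 10 for _ in range(10)]
for u, v in edges:
    adj[u][v] = adj[v][u] = 1

n, d, lam = 10, 3, 2     # eigenvalues 3, 1, -2, so lambda = 2

def count_E(S, T):
    """E(S, T) as defined in the text: edges inside S intersect T
    are counted twice (ordered-pair count)."""
    return sum(adj[u][v] for u in S for v in T)

rng = random.Random(1)
for _ in range(300):
    S = {v for v in range(n) if rng.random() < 0.5}
    T = {v for v in range(n) if rng.random() < 0.5}
    lhs = abs(count_E(S, T) - d * len(S) * len(T) / n)
    assert lhs <= lam * math.sqrt(len(S) * len(T)) + 1e-9
```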
Many properties of (n, d, λ)-graphs are corollaries of the expander mixing lemma, including the following.
An independent set of a graph is a subset of vertices with no two vertices adjacent. In an (n, d, λ)-graph, an independent set has size at most λn⁄d.
The chromatic number of a graph G, χ(G), is the minimum number of colors needed such that adjacent vertices have different colors. Hoffman showed that d⁄λ ≤ χ(G), while Alon, Krivelevich, and Sudakov showed that if d < 2n⁄3, then
χ(G) ≤ O(d ⁄ log(1 + d⁄λ)).
The diameter of a graph is the maximum distance between two vertices, where the distance between two vertices is defined to be the shortest path between them. Chung showed that the diameter of an (n, d, λ)-graph is at most

⌈log n ⁄ log(d⁄λ)⌉.
=== Expander walk sampling ===
The Chernoff bound states that, when sampling many independent samples from a random variable in the range [−1, 1], with high probability the average of our samples is close to the expectation of the random variable. The expander walk sampling lemma, due to Ajtai, Komlós & Szemerédi (1987) and Gillman (1998), states that this also holds true when sampling from a walk on an expander graph. This is particularly useful in the theory of derandomization, since sampling according to an expander walk uses many fewer random bits than sampling independently.
=== AKS sorting network and approximate halvers ===
Sorting networks take a set of inputs and perform a series of parallel steps to sort the inputs. A parallel step consists of performing any number of disjoint comparisons and potentially swapping pairs of compared inputs. The depth of a network is given by the number of parallel steps it takes. Expander graphs play an important role in the AKS sorting network, which achieves depth O(log n). While this is asymptotically the best known depth for a sorting network, the reliance on expanders makes the constant bound too large for practical use.
Within the AKS sorting network, expander graphs are used to construct bounded depth ε-halvers. An ε-halver takes as input a length n permutation of (1, …, n) and halves the inputs into two disjoint sets A and B such that for each integer k ≤ n⁄2 at most εk of the k smallest inputs are in B and at most εk of the k largest inputs are in A. The sets A and B are an ε-halving.
Following Ajtai, Komlós & Szemerédi (1983), a depth-d ε-halver can be constructed as follows. Take an n-vertex, degree-d bipartite expander with parts X and Y of equal size such that every subset S of vertices of size at most εn has at least (1 − ε)⁄ε · |S| neighbors.
The vertices of the graph can be thought of as registers that contain inputs and the edges can be thought of as wires that compare the inputs of two registers. At the start, arbitrarily place half of the inputs in X and half of the inputs in Y and decompose the edges into d perfect matchings. The goal is to end with X roughly containing the smaller half of the inputs and Y containing roughly the larger half of the inputs. To achieve this, sequentially process each matching by comparing the registers paired up by the edges of this matching and correct any inputs that are out of order. Specifically, for each edge of the matching, if the larger input is in the register in X and the smaller input is in the register in Y, then swap the two inputs so that the smaller one is in X and the larger one is in Y. It is clear that this process consists of d parallel steps.
After all d rounds, take A to be the set of inputs in registers in X and B to be the set of inputs in registers in Y to obtain an ε-halving. To see this, notice that if a register u in X and a register v in Y are connected by an edge uv, then after the matching with this edge is processed, the input in u is less than that of v. Furthermore, this property remains true throughout the rest of the process, since the value held by an X-register can only decrease and the value held by a Y-register can only increase. Now, suppose for some k ≤ n⁄2 that more than εk of the inputs (1, …, k) are in B. Then by the expansion properties of the graph, the registers of these inputs in Y are connected with at least (1 − ε)⁄ε · εk = (1 − ε)k registers in X. Altogether, this constitutes more than k registers, so there must be some register u in X connected to some register v in Y such that the final input of u is not in (1, …, k), while the final input of v is. This violates the previous property, however, and thus the output sets A and B must be an ε-halving.
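The key invariant of this argument (an X-register's value never increases and a Y-register's value never decreases, so once a compared pair is ordered it stays ordered) can be checked by simulating the comparator rounds. The sketch below substitutes random perfect matchings for the matchings of an actual expander, which suffices for testing the invariant, though not the halving guarantee:

```python
import random

def run_halver(n, d, seed=0):
    """Simulate the comparator rounds: registers X = 0..n-1 and
    Y = n..2n-1, with d perfect matchings between X and Y.  Random
    matchings stand in for the decomposition of a bipartite expander."""
    rng = random.Random(seed)
    vals = list(range(2 * n))
    rng.shuffle(vals)                          # arbitrary initial placement
    matchings = []
    for _ in range(d):
        ys = list(range(n, 2 * n))
        rng.shuffle(ys)
        matchings.append(list(zip(range(n), ys)))
    for m in matchings:                        # process matchings in order
        for x, y in m:
            if vals[x] > vals[y]:              # correct out-of-order pair
                vals[x], vals[y] = vals[y], vals[x]
    return vals, matchings

vals, matchings = run_halver(8, 4)
# Every compared pair ends up (and stays) ordered:
assert all(vals[x] <= vals[y] for m in matchings for x, y in m)
```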
== See also ==
Algebraic connectivity
Zig-zag product
Superstrong approximation
Spectral graph theory
== Notes ==
Alexander, Clark (2021). "On Near Optimal Spectral Expander Graphs of Fixed Size". arXiv:2110.01407 [cs.DM].
== References ==
== External links ==
Brief introduction in Notices of the American Mathematical Society
Introductory paper by Michael Nielsen Archived 2016-08-17 at the Wayback Machine
Lecture notes from a course on expanders (by Nati Linial and Avi Wigderson)
Lecture notes from a course on expanders (by Prahladh Harsha)
Definition and application of spectral gap | Wikipedia/Expander_graph |
In group theory, a subfield of abstract algebra, a cycle graph of a group is an undirected graph that illustrates the various cycles of that group, given a set of generators for the group. Cycle graphs are particularly useful in visualizing the structure of small finite groups.
A cycle is the set of powers of a given group element a, where an, the n-th power of an element a, is defined as the product of a multiplied by itself n times. The element a is said to generate the cycle. In a finite group, some non-zero power of a must be the group identity, which we denote either as e or 1; the lowest such power is the order of the element a, the number of distinct elements in the cycle that it generates. In a cycle graph, the cycle is represented as a polygon, with its vertices representing the group elements and its edges indicating how they are linked together to form the cycle.
== Definition ==
Each group element is represented by a node in the cycle graph, and enough cycles are represented as polygons in the graph so that every node lies on at least one cycle. All of those polygons pass through the node representing the identity, and some other nodes may also lie on more than one cycle.
Suppose that a group element a generates a cycle of order 6 (has order 6), so that the nodes a, a2, a3, a4, a5, and a6 = e are the vertices of a hexagon in the cycle graph. The element a2 then has order 3; but making the nodes a2, a4, and e the vertices of a triangle in the graph would add no new information. So, only the primitive cycles need be considered, those that are not subsets of another cycle. Also, the node a5, which also has order 6, generates the same cycle as does a itself; so we have at least two choices for which element to use in generating a cycle, and often more.
To build a cycle graph for a group, we start with a node for each group element. For each primitive cycle, we then choose some element a that generates that cycle, and we connect the node for e to the one for a, a to a2, ..., ak−1 to ak, etc., until returning to e. The result is a cycle graph for the group.
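The procedure above can be sketched in code. The helper names below are ours, and a group is given as an element list plus a multiplication function:

```python
def cycles_of(elements, mul, e):
    """All distinct cycles <a> = {e, a, a^2, ...} of a finite group,
    given its elements, multiplication function, and identity e."""
    seen, cycles = set(), []
    for a in elements:
        cyc, x = [e], a
        while x != e:
            cyc.append(x)
            x = mul(x, a)
        key = frozenset(cyc)
        if key not in seen:          # a and a^-1 generate the same cycle
            seen.add(key)
            cycles.append(cyc)
    return cycles

def primitive_cycles(elements, mul, e):
    """Cycles that are not proper subsets of another cycle."""
    cycles = cycles_of(elements, mul, e)
    return [c for c in cycles
            if not any(set(c) < set(d) for d in cycles)]

def cycle_graph_edges(elements, mul, e):
    """Edges of the cycle graph: walk each primitive cycle back to e.
    A 2-element cycle yields a doubled edge between e and a."""
    edges = []
    for cyc in primitive_cycles(elements, mul, e):
        ring = cyc + [e]
        edges += list(zip(ring, ring[1:]))
    return edges

# Z_6 under addition: a single primitive 6-cycle, hence 6 edges.
z6_edges = cycle_graph_edges(range(6), lambda a, b: (a + b) % 6, 0)
assert len(z6_edges) == 6
```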
When a group element a has order 2 (so that multiplication by a is an involution), the rule above would connect e to a by two edges, one going out and the other coming back. Except when the intent is to emphasize the two edges of such a cycle, it is typically drawn as a single line between the two elements.
Note that this correspondence between groups and graphs is not one-to-one in either direction: Two different groups can have the same cycle graph, and two different graphs can be cycle graphs for a single group. We give examples of each in the non-uniqueness section.
== Example and properties ==
As an example of a group cycle graph, consider the dihedral group Dih4. The multiplication table for this group is shown on the left, and the cycle graph is shown on the right, with e specifying the identity element.
Notice the cycle {e, a, a2, a3} in the multiplication table, with a4 = e. The inverse a−1 = a3 is also a generator of this cycle: (a3)2 = a2, (a3)3 = a, and (a3)4 = e. Similarly, any cycle in any group has at least two generators, and may be traversed in either direction. More generally, the number of generators of a cycle with n elements is given by the Euler φ function of n, and any of these generators may be written as the first node in the cycle (next to the identity e); or more commonly the nodes are left unmarked. Two distinct cycles cannot intersect in a generator.
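The paragraph above notes that a cycle with n elements has φ(n) generators. A brute-force check in the additive model Z_n (helper names are ours):

```python
from math import gcd

def totient(n):
    """Euler's phi: the number of k in 1..n coprime to n."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def cycle_generators(n):
    """Elements of the additive group Z_n (n >= 2) generating the
    full n-element cycle, i.e. those k with gcd(k, n) = 1."""
    return [k for k in range(1, n) if gcd(k, n) == 1]

# The 4-cycle {e, a, a2, a3} above: phi(4) = 2 generators (a and a3).
assert totient(4) == len(cycle_generators(4)) == 2
assert cycle_generators(4) == [1, 3]
```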
Cycles that contain a non-prime number of elements have cyclic subgroups that are not shown in the graph. For the group Dih4 above, we could draw a line between a2 and e since (a2)2 = e, but since a2 is part of a larger cycle, this is not an edge of the cycle graph.
There can be ambiguity when two cycles share a non-identity element. For example, the 8-element quaternion group has cycle graph shown at right. Each of the elements in the middle row when multiplied by itself gives −1 (where 1 is the identity element). In this case we may use different colors to keep track of the cycles, although symmetry considerations will work as well.
As noted earlier, the two edges of a 2-element cycle are typically represented as a single line.
The inverse of an element is the node symmetric to it in its cycle, with respect to the reflection which fixes the identity.
== Non-uniqueness ==
The cycle graph of a group is not uniquely determined up to graph isomorphism; nor does it uniquely determine the group up to group isomorphism. That is, the graph obtained depends on the set of generators chosen, and two different groups (with chosen sets of generators) can generate the same cycle graph.
=== A single group can have different cycle graphs ===
For some groups, choosing different elements to generate the various primitive cycles of that group can lead to different cycle graphs. There is an example of this for the abelian group C5 × C2 × C2, which has order 20. We denote an element of that group as a triple of numbers (i; j, k), where 0 ≤ i < 5 and each of j and k is either 0 or 1. The triple (0; 0, 0) is the identity element. In the drawings below, i is shown above j and k.
This group has three primitive cycles, each of order 10. In the first cycle graph, we choose, as the generators of those three cycles, the nodes (1; 1, 0), (1; 0, 1), and (1; 1, 1). In the second, we generate the third of those cycles (the blue one) by starting instead with (2; 1, 1).
The two resulting graphs are not isomorphic because they have diameters 5 and 4 respectively.
=== Different groups can have the same cycle graph ===
Two different (non-isomorphic) groups can have cycle graphs that are isomorphic, where the latter isomorphism ignores the labels on the nodes of the graphs. It follows that the structure of a group is not uniquely determined by its cycle graph.
There is an example of this already for groups of order 16, the two groups being C8 × C2 and C8 ⋊5 C2. The abelian group C8 × C2 is the direct product of the cyclic groups of orders 8 and 2. The non-abelian group C8 ⋊5 C2 is the semidirect product of C8 and C2 in which the non-identity element of C2 maps to the multiply-by-5 automorphism of C8.
In drawing the cycle graphs of those two groups, we take C8 × C2 to be generated by elements s and t with

s⁸ = t² = 1 and ts = st,
where that latter relation makes C8 × C2 abelian. And we take C8 ⋊5 C2 to be generated by elements σ and τ with

σ⁸ = τ² = 1 and τσ = σ⁵τ.
Here are cycle graphs for those two groups, where we choose st to generate the green cycle on the left and στ to generate that cycle on the right:

In the right-hand graph, the green cycle, after moving from 1 to στ, moves next to σ⁶, because

(στ)(στ) = σ(τσ)τ = σ(σ⁵τ)τ = σ⁶.
== History ==
Cycle graphs were investigated by the number theorist Daniel Shanks in the early 1950s as a tool to study multiplicative groups of residue classes. Shanks first published the idea in the 1962 first edition of his book Solved and Unsolved Problems in Number Theory. In the book, Shanks investigates which groups have isomorphic cycle graphs and when a cycle graph is planar. In the 1978 second edition, Shanks reflects on his research on class groups and the development of the baby-step giant-step method: The cycle graphs have proved to be useful when working with finite Abelian groups; and I have used them frequently in finding my way around an intricate structure [77, p. 852], in obtaining a wanted multiplicative relation [78, p. 426], or in isolating some wanted subgroup [79].
Cycle graphs are used as a pedagogical tool in Nathan Carter's 2009 introductory textbook Visual Group Theory.
== Graph characteristics of particular group families ==
Certain group types give typical graphs:
The cyclic group Zn, of order n, is a single cycle graphed simply as an n-sided polygon with the elements at the vertices:
When n is a prime number, groups of the form (Zn)m will have (nm − 1)/(n − 1) n-element cycles sharing the identity element:
The dihedral group Dihn, of order 2n, consists of an n-element cycle and n 2-element cycles:
Dicyclic groups, Dicn = Q4n, order 4n:
Other direct products:
Symmetric groups – The symmetric group Sn contains, for any group of order n, a subgroup isomorphic to that group. Thus the cycle graph of every group of order n will be found in the cycle graph of Sn.
See example: Subgroups of S4
== Extended example: Subgroups of the full octahedral group ==
The full octahedral group is the direct product S4 × Z2 of the symmetric group S4 and the cyclic group Z2.
Its order is 48, and it has subgroups of every order that divides 48.
In the examples below, nodes that are related to each other are placed next to each other, so these are not the simplest possible cycle graphs for these groups (like those on the right).
Like all graphs, a cycle graph can be represented in different ways to emphasize different properties. The two representations of the cycle graph of S4 are an example of that.
== See also ==
List of small groups
Cayley graph
== References ==
Skiena, S. (1990). Cycles, Stars, and Wheels. Implementing Discrete Mathematics: Combinatorics and Graph Theory with Mathematica (pp. 144-147).
Shanks, Daniel (1978) [1962], Solved and Unsolved Problems in Number Theory (2nd ed.), New York: Chelsea Publishing Company, ISBN 0-8284-0297-3
Pemmaraju, S., & Skiena, S. (2003). Cycles, Stars, and Wheels. Computational Discrete Mathematics: Combinatorics and Graph Theory with Mathematica (pp. 248-249). Cambridge University Press.
== External links ==
Weisstein, Eric W. "Group Cycle Graph". MathWorld. | Wikipedia/Cycle_graph_(algebra) |
In graph theory, a lattice graph, mesh graph, or grid graph is a graph whose drawing, embedded in some Euclidean space ℝⁿ, forms a regular tiling. This implies that the group of bijective transformations that send the graph to itself is a lattice in the group-theoretical sense.
Typically, no clear distinction is made between such a graph in the more abstract sense of graph theory, and its drawing in space (often the plane or 3D space). This type of graph may more shortly be called just a lattice, mesh, or grid. Moreover, these terms are also commonly used for a finite section of the infinite graph, as in "an 8 × 8 square grid".
The term lattice graph has also been given in the literature to various other kinds of graphs with some regular structure, such as the Cartesian product of a number of complete graphs.
== Square grid graph ==
A common type of lattice graph (known under different names, such as grid graph or square grid graph) is the graph whose vertices correspond to the points in the plane with integer coordinates, x-coordinates being in the range 1, ..., n, y-coordinates being in the range 1, ..., m, and two vertices being connected by an edge whenever the corresponding points are at distance 1. In other words, it is the unit distance graph for the integer points in a rectangle with sides parallel to the axes.
=== Properties ===
A square grid graph is a Cartesian product of graphs, namely, of two path graphs with n − 1 and m − 1 edges. Since a path graph is a median graph, the latter fact implies that the square grid graph is also a median graph. All square grid graphs are bipartite, which is easily verified by the fact that one can color the vertices in a checkerboard fashion.
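The edge count and the checkerboard bipartition can be verified directly. A minimal sketch for a 3 × 4 grid (the function name is ours):

```python
import itertools

def grid_graph(m, n):
    """Vertices and edges of the m x n square grid graph: integer points
    (x, y) with 0 <= x < n and 0 <= y < m, joined when at distance 1."""
    verts = [(x, y) for x in range(n) for y in range(m)]
    edges = [(u, v) for u, v in itertools.combinations(verts, 2)
             if abs(u[0] - v[0]) + abs(u[1] - v[1]) == 1]
    return verts, edges

verts, edges = grid_graph(3, 4)
assert len(edges) == 4 * (3 - 1) + 3 * (4 - 1)   # n(m-1) + m(n-1) = 17
# The checkerboard coloring by parity of x + y certifies bipartiteness:
assert all((u[0] + u[1]) % 2 != (v[0] + v[1]) % 2 for u, v in edges)
```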
A path graph is a grid graph on the 1 × n grid. A 2 × 2 grid graph is a 4-cycle.
Every planar graph H is a minor of the h × h grid, where h = 2|V(H)| + 4|E(H)|.
Grid graphs are fundamental objects in the theory of graph minors because of the grid exclusion theorem. They play a major role in bidimensionality theory.
== Other kinds ==
A triangular grid graph is a graph that corresponds to a triangular grid.
A Hanan grid graph for a finite set of points in the plane is produced by the grid obtained by intersections of all vertical and horizontal lines through each point of the set.
The rook's graph (the graph that represents all legal moves of the rook chess piece on a chessboard) is also sometimes called the lattice graph, although this graph is different from the lattice graph described here because all points in one row or column are adjacent. The valid moves of the fairy chess piece the wazir form a square lattice graph.
== See also ==
Lattice path
Pick's theorem
Integer triangles in a 2D lattice
Regular graph
== References == | Wikipedia/Grid_graph |
In graph theory, an expander graph is a sparse graph that has strong connectivity properties, quantified using vertex, edge or spectral expansion. Expander constructions have spawned research in pure and applied mathematics, with several applications to complexity theory, design of robust computer networks, and the theory of error-correcting codes.
== Definitions ==
Intuitively, an expander graph is a finite, undirected multigraph in which every subset of the vertices that is not "too large" has a "large" boundary. Different formalisations of these notions give rise to different notions of expanders: edge expanders, vertex expanders, and spectral expanders, as defined below.
A disconnected graph is not an expander, since the boundary of a connected component is empty. Every connected finite graph is an expander; however, different connected graphs have different expansion parameters. The complete graph has the best expansion property, but it has the largest possible degree. Informally, a graph is a good expander if it has low degree and high expansion parameters.
=== Edge expansion ===
The edge expansion (also isoperimetric number or Cheeger constant) h(G) of a graph G on n vertices is defined as
h(G) = min_{0 < |S| ≤ n⁄2} |∂S| ⁄ |S|,
where
∂S := { {u, v} ∈ E(G) : u ∈ S, v ∉ S },
which can also be written as ∂S = E(S, S̄) with S̄ := V(G) \ S the complement of S and
E(A, B) = { {u, v} ∈ E(G) : u ∈ A, v ∈ B }
the edges between the subsets of vertices A,B ⊆ V(G).
In the equation, the minimum is over all nonempty sets S of at most n⁄2 vertices and ∂S is the edge boundary of S, i.e., the set of edges with exactly one endpoint in S.
Intuitively, min |∂S| = min E(S, S̄) is the minimum number of edges that need to be cut in order to split the graph in two.
The edge expansion normalizes this concept by dividing by the smaller number of vertices among the two parts.
To see how the normalization can drastically change the value, consider the following example.
Take two complete graphs with the same number of vertices n and add n edges between the two graphs by connecting their vertices one-to-one.
The minimum cut will be n but the edge expansion will be 1.
Notice that in min |∂S|, the optimization can be equivalently done either over 0 < |S| ≤ n⁄2 or over any non-empty subset, since E(S, S̄) = E(S̄, S). The same is not true for h(G) because of the normalization by |S|.
If we want to write h(G) with an optimization over all non-empty subsets, we can rewrite it as
If we want to write h(G) with an optimization over all non-empty subsets, we can rewrite it as

h(G) = min_{∅ ⊊ S ⊊ V(G)} E(S, S̄) ⁄ min{|S|, |S̄|}.
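For small graphs, h(G) can be computed by brute force directly from this definition. A sketch (exponential in n, so only for tiny graphs; the function name is ours):

```python
import itertools

def edge_expansion(n, edges):
    """h(G) by brute force over all subsets S with 0 < |S| <= n/2."""
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for S in itertools.combinations(range(n), k):
            inS = set(S)
            boundary = sum(1 for u, v in edges if (u in inS) != (v in inS))
            best = min(best, boundary / k)
    return best

# 6-cycle: the best cut takes 3 consecutive vertices, giving h = 2/3.
c6 = [(i, (i + 1) % 6) for i in range(6)]
assert abs(edge_expansion(6, c6) - 2 / 3) < 1e-12
# Complete graph K4: best is |S| = 2 with 4 crossing edges, so h = 2.
k4 = list(itertools.combinations(range(4), 2))
assert edge_expansion(4, k4) == 2
```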
=== Vertex expansion ===
The vertex isoperimetric number hout(G) (also vertex expansion or magnification) of a graph G is defined as
hout(G) = min_{0 < |S| ≤ n⁄2} |∂out(S)| ⁄ |S|,
where ∂out(S) is the outer boundary of S, i.e., the set of vertices in V(G) \ S with at least one neighbor in S. In a variant of this definition (called unique neighbor expansion) ∂out(S) is replaced by the set of vertices in V with exactly one neighbor in S.
The vertex isoperimetric number hin(G) of a graph G is defined as
hin(G) = min_{0 < |S| ≤ n⁄2} |∂in(S)| ⁄ |S|,

where ∂in(S) is the inner boundary of S, i.e., the set of vertices in S with at least one neighbor in V(G) \ S.
=== Spectral expansion ===
When G is d-regular, a linear algebraic definition of expansion is possible based on the eigenvalues of the adjacency matrix A = A(G) of G, where Aij is the number of edges between vertices i and j. Because A is symmetric, the spectral theorem implies that A has n real-valued eigenvalues λ1 ≥ λ2 ≥ … ≥ λn. It is known that all these eigenvalues are in [−d, d] and more specifically, it is known that λn = −d if and only if G is bipartite.
More formally, we refer to an n-vertex, d-regular graph with max_{i ≠ 1} |λi| ≤ λ as an (n, d, λ)-graph. The bound given by an (n, d, λ)-graph on λi for i ≠ 1 is useful in many contexts, including the expander mixing lemma.
Spectral expansion can be two-sided, as above, with max_{i ≠ 1} |λi| ≤ λ, or it can be one-sided, with max_{i ≠ 1} λi ≤ λ. The latter is a weaker notion that holds also for bipartite graphs and is still useful for many applications, such as the Alon–Chung lemma.
Because G is regular, the uniform distribution u ∈ ℝⁿ with ui = 1⁄n for all i = 1, …, n is the stationary distribution of G. That is, we have Au = du, and u is an eigenvector of A with eigenvalue λ1 = d, where d is the degree of the vertices of G. The spectral gap of G is defined to be d − λ2, and it measures the spectral expansion of the graph G.
If we set λ = max{|λ2|, |λn|}, as this is the largest eigenvalue corresponding to an eigenvector orthogonal to u, it can be equivalently defined using the Rayleigh quotient:
λ
=
max
v
⊥
u
,
v
≠
0
‖
A
v
‖
2
‖
v
‖
2
,
{\displaystyle \lambda =\max _{v\perp u,v\neq 0}{\frac {\|Av\|_{2}}{\|v\|_{2}}},}
where {\displaystyle \|v\|_{2}=\left(\sum _{i=1}^{n}v_{i}^{2}\right)^{1/2}} is the 2-norm of the vector {\displaystyle v\in \mathbb {R} ^{n}}.
The normalized versions of these definitions are also widely used and more convenient in stating some results. Here one considers the matrix 1/dA, which is the Markov transition matrix of the graph G. Its eigenvalues are between −1 and 1. For not necessarily regular graphs, the spectrum of a graph can be defined similarly using the eigenvalues of the Laplacian matrix. For directed graphs, one considers the singular values of the adjacency matrix A, which are equal to the roots of the eigenvalues of the symmetric matrix ATA.
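As a concrete illustration of these definitions, the following sketch estimates λ = max{|λ2|, |λn|} by power iteration on the orthogonal complement of the uniform eigenvector u. The Petersen graph (3-regular, with known spectrum {3, 1, −2}) and the pure-Python implementation are choices of this illustration, not part of the text above.

```python
import math

def petersen_edges():
    """Edges of the Petersen graph: outer 5-cycle, inner pentagram, spokes."""
    edges = []
    for i in range(5):
        edges.append((i, (i + 1) % 5))            # outer cycle
        edges.append((5 + i, 5 + (i + 2) % 5))    # inner pentagram
        edges.append((i, i + 5))                  # spokes
    return edges

def adjacency(n, edges):
    A = [[0] * n for _ in range(n)]
    for u, v in edges:
        A[u][v] = A[v][u] = 1
    return A

def spectral_lambda(A, iters=200):
    """Estimate max_{i != 1} |lambda_i| by power iteration restricted to
    the complement of the uniform eigenvector (valid for regular graphs)."""
    n = len(A)
    v = [1.0] + [0.0] * (n - 1)
    for _ in range(iters):
        mean = sum(v) / n
        v = [x - mean for x in v]                       # project off u
        norm = math.sqrt(sum(x * x for x in v))
        v = [x / norm for x in v]
        v = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    return math.sqrt(sum(x * x for x in v))             # = ||Av||_2 / ||v||_2

A = adjacency(10, petersen_edges())
lam = spectral_lambda(A)
print(round(lam, 6))   # 2.0 = max{|1|, |-2|}, while the spectral gap d - lambda_2 is 3 - 1 = 2
```

The Rayleigh-quotient definition above is exactly what the last line of the iteration computes once v has converged inside the subspace orthogonal to u.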
=== Expander families ===
A family {\displaystyle (G_{i})_{i\in \mathbb {N} }} of {\displaystyle d}-regular graphs of increasing size is an expander family if {\displaystyle h(G_{i})} is bounded away from zero.
== Relationships between different expansion properties ==
The expansion parameters defined above are related to each other. In particular, for any d-regular graph G,
{\displaystyle h_{\text{out}}(G)\leq h(G)\leq d\cdot h_{\text{out}}(G).}
Consequently, for constant degree graphs, vertex and edge expansion are qualitatively the same.
=== Cheeger inequalities ===
When G is d-regular, meaning each vertex is of degree d, there is a relationship between the isoperimetric constant h(G) and the gap d − λ2 in the spectrum of the adjacency operator of G. By standard spectral graph theory, the trivial eigenvalue of the adjacency operator of a d-regular graph is λ1 = d and the first non-trivial eigenvalue is λ2. If G is connected, then λ2 < d. An inequality due to Dodziuk and independently Alon and Milman states that
{\displaystyle {\tfrac {1}{2}}(d-\lambda _{2})\leq h(G)\leq {\sqrt {2d(d-\lambda _{2})}}.}
In fact, the lower bound is tight. It is achieved in the limit by the hypercube Qn, where h(G) = 1 and d − λ2 = 2. The upper bound is (asymptotically) achieved for a cycle, where h(Cn) = 4/n = Θ(1/n) and d − λ2 = 2 − 2cos(2π/n) ≈ (2π/n)² = Θ(1/n²). A better bound is
{\displaystyle h(G)\leq {\sqrt {d^{2}-\lambda _{2}^{2}}}.}
These inequalities are closely related to the Cheeger bound for Markov chains and can be seen as a discrete version of Cheeger's inequality in Riemannian geometry.
Similar connections between vertex isoperimetric numbers and the spectral gap have also been studied:
{\displaystyle h_{\text{out}}(G)\leq \left({\sqrt {4(d-\lambda _{2})}}+1\right)^{2}-1}
{\displaystyle h_{\text{in}}(G)\leq {\sqrt {8(d-\lambda _{2})}}.}
Asymptotically speaking, the quantities h²/d, hout, and hin² are all bounded above by the spectral gap O(d − λ2).
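These inequalities can be checked directly on a small example. The sketch below brute-forces the isoperimetric constant h(G) of the 3-dimensional hypercube Q3 (d = 3; the value λ2 = d − 2 = 1 is taken from the known hypercube spectrum, an assumed fact of this illustration) and verifies the discrete Cheeger inequality.

```python
from itertools import combinations

def hypercube_edges(k):
    """Edges of the k-dimensional hypercube on vertex set {0, ..., 2^k - 1}."""
    n = 1 << k
    return [(v, v ^ (1 << b)) for v in range(n) for b in range(k) if v < v ^ (1 << b)]

def edge_expansion(n, edges):
    """Brute-force h(G) = min over 0 < |S| <= n/2 of |boundary(S)| / |S|."""
    best = float("inf")
    for size in range(1, n // 2 + 1):
        for S in combinations(range(n), size):
            s = set(S)
            boundary = sum(1 for u, v in edges if (u in s) != (v in s))
            best = min(best, boundary / size)
    return best

k = 3
n, d = 1 << k, k
h = edge_expansion(n, hypercube_edges(k))
lam2 = d - 2                 # known spectrum of Q_k: eigenvalues k - 2j
print(h)                     # 1.0, attained by a facet (a sub-cube of 4 vertices)
assert 0.5 * (d - lam2) <= h <= (2 * d * (d - lam2)) ** 0.5
```

Here the lower bound ½(d − λ2) = 1 equals h exactly, matching the statement that the lower bound is tight for the hypercube.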
== Constructions ==
There are four general strategies for explicitly constructing families of expander graphs. The first strategy is algebraic and group-theoretic, the second strategy is analytic and uses additive combinatorics, the third strategy is combinatorial and uses the zig-zag and related graph products, and the fourth strategy is based on lifts. Noga Alon showed that certain graphs constructed from finite geometries are the sparsest examples of highly expanding graphs.
=== Margulis–Gabber–Galil ===
Algebraic constructions based on Cayley graphs are known for various variants of expander graphs. The following construction is due to Margulis and has been analysed by Gabber and Galil. For every natural number n, one considers the graph Gn with the vertex set {\displaystyle \mathbb {Z} _{n}\times \mathbb {Z} _{n}}, where {\displaystyle \mathbb {Z} _{n}=\mathbb {Z} /n\mathbb {Z} }: for every vertex {\displaystyle (x,y)\in \mathbb {Z} _{n}\times \mathbb {Z} _{n}}, its eight adjacent vertices are
{\displaystyle (x\pm 2y,y),(x\pm (2y+1),y),(x,y\pm 2x),(x,y\pm (2x+1)).}
Then the following holds:
Theorem. For all n, the graph Gn has second-largest eigenvalue {\displaystyle \lambda (G)\leq 5{\sqrt {2}}}.
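The construction itself is easy to carry out programmatically. The following sketch builds Gn as an 8-regular multigraph (neighbors are kept with multiplicity; loops can occur, e.g. at (0, 0)) and checks two basic structural properties. It is a sanity check of the construction only, not a verification of the eigenvalue theorem.

```python
def margulis_neighbors(x, y, n):
    """The eight (multi-)neighbors of (x, y) in the Margulis-Gabber-Galil graph."""
    return [((x + 2 * y) % n, y), ((x - 2 * y) % n, y),
            ((x + 2 * y + 1) % n, y), ((x - 2 * y - 1) % n, y),
            (x, (y + 2 * x) % n), (x, (y - 2 * x) % n),
            (x, (y + 2 * x + 1) % n), (x, (y - 2 * x - 1) % n)]

def is_connected(n):
    """Depth-first search over Z_n x Z_n from (0, 0)."""
    seen, stack = {(0, 0)}, [(0, 0)]
    while stack:
        x, y = stack.pop()
        for w in margulis_neighbors(x, y, n):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n * n

for n in (2, 5, 9, 16):
    assert all(len(margulis_neighbors(x, y, n)) == 8
               for x in range(n) for y in range(n))   # 8-regular with multiplicity
    assert is_connected(n)                            # an expander must be connected
```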
=== Ramanujan graphs ===
By a theorem of Alon and Boppana, all sufficiently large d-regular graphs satisfy {\displaystyle \lambda _{2}\geq 2{\sqrt {d-1}}-o(1)}, where λ2 is the second largest eigenvalue in absolute value. As a direct consequence, we know that for every fixed d and {\displaystyle \lambda <2{\sqrt {d-1}}}, there are only finitely many (n, d, λ)-graphs. Ramanujan graphs are d-regular graphs for which this bound is tight, satisfying
{\displaystyle \lambda =\max _{|\lambda _{i}|<d}|\lambda _{i}|\leq 2{\sqrt {d-1}}.}
Hence Ramanujan graphs have an asymptotically smallest possible value of λ2. This makes them excellent spectral expanders.
Lubotzky, Phillips, and Sarnak (1988), Margulis (1988), and Morgenstern (1994) show how Ramanujan graphs can be constructed explicitly.
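Small Ramanujan graphs can be verified by machine. The sketch below checks that the Petersen graph (3-regular, used here only as an illustration, not one of the constructions cited above) is Ramanujan: it verifies the strongly regular identity A² + A = 2I + J, which forces every non-trivial eigenvalue t to satisfy t² + t − 2 = 0, i.e. t ∈ {1, −2}, so λ = 2 ≤ 2√(d − 1) = 2√2.

```python
def petersen_adj():
    """Adjacency matrix of the Petersen graph (outer cycle, pentagram, spokes)."""
    A = [[0] * 10 for _ in range(10)]
    for i in range(5):
        for u, v in ((i, (i + 1) % 5), (5 + i, 5 + (i + 2) % 5), (i, 5 + i)):
            A[u][v] = A[v][u] = 1
    return A

A = petersen_adj()
# Verify A^2 + A = 2I + J entrywise.  On the space orthogonal to the
# all-ones vector, J acts as zero, so every non-trivial eigenvalue solves
# t^2 + t - 2 = 0, giving t in {1, -2} and hence lambda = 2.
for i in range(10):
    for j in range(10):
        a2ij = sum(A[i][k] * A[k][j] for k in range(10))
        assert a2ij + A[i][j] == (2 if i == j else 0) + 1
lam, d = 2, 3
assert lam <= 2 * (d - 1) ** 0.5   # 2 <= 2*sqrt(2): the Ramanujan bound holds
```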
In 1985, Alon conjectured that most d-regular graphs on n vertices, for sufficiently large n, are almost Ramanujan, that is, for every ε > 0, they satisfy {\displaystyle \lambda \leq 2{\sqrt {d-1}}+\varepsilon }.
In 2003, Joel Friedman both proved the conjecture and specified what is meant by "most d-regular graphs" by showing that random d-regular graphs have {\displaystyle \lambda \leq 2{\sqrt {d-1}}+\varepsilon } for every ε > 0 with probability 1 − O(n^{−τ}), where
{\displaystyle \tau =\left\lceil {\frac {{\sqrt {d-1}}+1}{2}}\right\rceil .}
A simpler proof of a slightly weaker result was given by Puder.
Marcus, Spielman and Srivastava gave a construction of bipartite Ramanujan graphs based on lifts.
In 2024, a preprint by Jiaoyang Huang, Theo McKenzie and Horng-Tzer Yau proved that {\displaystyle \lambda \leq 2{\sqrt {d-1}}} holds for approximately 69% of random d-regular graphs, by proving that edge universality holds; that is, the extreme eigenvalues follow a Tracy–Widom distribution associated with the Gaussian Orthogonal Ensemble.
=== Zig-zag product ===
Reingold, Vadhan, and Wigderson introduced the zig-zag product in 2003. Roughly speaking, the zig-zag product of two expander graphs produces a graph with only slightly worse expansion, so zig-zag products can be used to construct families of expander graphs. If G is an (n, m, λ1)-graph and H is an (m, d, λ2)-graph, then the zig-zag product G ∘ H is an (nm, d², φ(λ1, λ2))-graph where φ has the following properties.
If λ1 < 1 and λ2 < 1, then φ(λ1, λ2) < 1;
φ(λ1, λ2) ≤ λ1 + λ2.
Specifically,
{\displaystyle \phi (\lambda _{1},\lambda _{2})={\frac {1}{2}}(1-\lambda _{2}^{2})\lambda _{1}+{\frac {1}{2}}{\sqrt {(1-\lambda _{2}^{2})^{2}\lambda _{1}^{2}+4\lambda _{2}^{2}}}.}
Note that property (1) implies that the zig-zag product of two expander graphs is also an expander graph, thus zig-zag products can be used inductively to create a family of expander graphs.
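The spectral bound φ (written here with λ1 in its first term, following Reingold–Vadhan–Wigderson) can be checked numerically against properties (1) and (2). This is an illustrative sanity check over a grid of values, not a proof.

```python
def phi(l1, l2):
    """Zig-zag spectral bound (Reingold-Vadhan-Wigderson form)."""
    t = (1 - l2 * l2) * l1
    return 0.5 * t + 0.5 * (t * t + 4 * l2 * l2) ** 0.5

grid = [i / 20 for i in range(20)]          # lambda values in [0, 1)
for l1 in grid:
    for l2 in grid:
        p = phi(l1, l2)
        assert p < 1                        # property (1): the product is still an expander
        assert p <= l1 + l2 + 1e-12         # property (2)
assert abs(phi(0.5, 0.0) - 0.5) < 1e-12     # a perfect H (lambda_2 = 0) preserves lambda_1
```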
Intuitively, the construction of the zig-zag product can be thought of in the following way. Each vertex of G is blown up to a "cloud" of m vertices, each associated with a different edge incident to the vertex. Each vertex is now labeled as (v, k), where v refers to an original vertex of G and k refers to the kth edge of v. Two vertices, (v, k) and (w, ℓ), are connected if it is possible to get from (v, k) to (w, ℓ) through the following sequence of moves.
Zig – Move from (v, k) to (v, k' ), using an edge of H.
Jump across clouds using edge k' in G to get to (w, ℓ′).
Zag – Move from (w, ℓ′) to (w, ℓ) using an edge of H.
=== Lifts ===
An r-lift of a graph is formed by replacing each vertex by r vertices, and each edge by a matching between the corresponding sets of r vertices. The lifted graph inherits the eigenvalues of the original graph, and has some additional eigenvalues. Bilu and Linial showed that every d-regular graph has a 2-lift in which the additional eigenvalues are at most {\displaystyle O({\sqrt {d\log ^{3}d}})} in magnitude. They also showed that if the starting graph is a good enough expander, then a good 2-lift can be found in polynomial time, thus giving an efficient construction of d-regular expanders for every d.
Bilu and Linial conjectured that the bound {\displaystyle O({\sqrt {d\log ^{3}d}})} can be improved to {\displaystyle 2{\sqrt {d-1}}}, which would be optimal due to the Alon–Boppana bound. This conjecture was proved in the bipartite setting by Marcus, Spielman and Srivastava, who used the method of interlacing polynomials. As a result, they obtained an alternative construction of bipartite Ramanujan graphs. The original non-constructive proof was turned into an algorithm by Michael B. Cohen. Later the method was generalized to r-lifts by Hall, Puder and Sawin.
=== Randomized constructions ===
There are many results that show the existence of graphs with good expansion properties through probabilistic arguments. In fact, the existence of expanders was first proved by Pinsker, who showed that for a randomly chosen n-vertex left-d-regular bipartite graph, |N(S)| ≥ (d − 2)|S| holds for all subsets of vertices of size |S| ≤ cdn with high probability, where cd is a constant depending on d that is O(d^{−4}). Alon and Roichman showed that for every 1 > ε > 0, there is some c(ε) > 0 such that the following holds: for a group G of order n, consider the Cayley graph on G with c(ε) log2 n randomly chosen elements from G. Then, in the limit of n going to infinity, the resulting graph is almost surely an ε-expander.
In 2021, Alexander modified an MCMC algorithm to look for randomized constructions to produce Ramanujan graphs with a fixed vertex size and degree of regularity. The results show the Ramanujan graphs exist for every vertex size and degree pair up to 2000 vertices.
In 2024 Alon produced an explicit construction for near Ramanujan graphs of every vertex size and degree pair.
== Applications and useful properties ==
The original motivation for expanders is to build economical robust networks (phone or computer): an expander with bounded degree is precisely an asymptotic robust graph with the number of edges growing linearly with size (number of vertices), for all subsets.
Expander graphs have found extensive applications in computer science, in designing algorithms, error correcting codes, extractors, pseudorandom generators, sorting networks (Ajtai, Komlós & Szemerédi (1983)) and robust computer networks. They have also been used in proofs of many important results in computational complexity theory, such as SL = L (Reingold (2008)) and the PCP theorem (Dinur (2007)). In cryptography, expander graphs are used to construct hash functions.
In a 2006 survey of expander graphs, Hoory, Linial, and Wigderson split the study of expander graphs into four categories: extremal problems, typical behavior, explicit constructions, and algorithms. Extremal problems focus on the bounding of expansion parameters, while typical behavior problems characterize how the expansion parameters are distributed over random graphs. Explicit constructions focus on constructing graphs that optimize certain parameters, and algorithmic questions study the evaluation and estimation of parameters.
=== Expander mixing lemma ===
The expander mixing lemma states that for an (n, d, λ)-graph, for any two subsets of the vertices S, T ⊆ V, the number of edges between S and T is approximately what you would expect in a random d-regular graph. The approximation is better the smaller λ is. In a random d-regular graph, as well as in an Erdős–Rényi random graph with edge probability d⁄n, we expect d⁄n • |S| • |T| edges between S and T.
More formally, let E(S, T) denote the number of edges between S and T. If the two sets are not disjoint, edges in their intersection are counted twice, that is,
{\displaystyle E(S,T)=2|E(G[S\cap T])|+E(S\setminus T,T)+E(S,T\setminus S).}
Then the expander mixing lemma says that the following inequality holds:
{\displaystyle \left|E(S,T)-{\frac {d\cdot |S|\cdot |T|}{n}}\right|\leq \lambda {\sqrt {|S|\cdot |T|}}.}
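The inequality can be checked empirically on a small graph. The sketch below samples random vertex subsets of the Petersen graph, which is a (10, 3, 2)-graph (its spectrum {3, 1, −2} is an assumed fact of this illustration), and confirms the expander mixing lemma for each sampled pair, using the convention that edges inside S ∩ T count twice.

```python
import random

def petersen_edges():
    E = []
    for i in range(5):
        E += [(i, (i + 1) % 5), (5 + i, 5 + (i + 2) % 5), (i, 5 + i)]
    return E

def E_ST(edges, S, T):
    """E(S, T) with edges inside the intersection counted twice."""
    return sum((u in S and v in T) + (v in S and u in T) for u, v in edges)

n, d, lam = 10, 3, 2          # the Petersen graph is a (10, 3, 2)-graph
edges = petersen_edges()
rng = random.Random(0)
for _ in range(500):
    S = {v for v in range(n) if rng.random() < 0.5}
    T = {v for v in range(n) if rng.random() < 0.5}
    if S and T:
        dev = abs(E_ST(edges, S, T) - d * len(S) * len(T) / n)
        assert dev <= lam * (len(S) * len(T)) ** 0.5 + 1e-9
```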
Many properties of (n, d, λ)-graphs are corollaries of the expander mixing lemma, including the following.
An independent set of a graph is a subset of vertices with no two vertices adjacent. In an (n, d, λ)-graph, an independent set has size at most λn⁄d.
The chromatic number of a graph G, χ(G), is the minimum number of colors needed such that adjacent vertices have different colors. Hoffman showed that d⁄λ ≤ χ(G), while Alon, Krivelevich, and Sudakov showed that if d < 2n⁄3, then
{\displaystyle \chi (G)\leq O\left({\frac {d}{\log(1+d/\lambda )}}\right).}
The diameter of a graph is the maximum distance between two vertices, where the distance between two vertices is defined to be the length of the shortest path between them. Chung showed that the diameter of an (n, d, λ)-graph is at most
{\displaystyle \left\lceil {\frac {\log n}{\log(d/\lambda )}}\right\rceil .}
=== Expander walk sampling ===
The Chernoff bound states that, when sampling many independent samples from a random variable in the range [−1, 1], with high probability the average of our samples is close to the expectation of the random variable. The expander walk sampling lemma, due to Ajtai, Komlós & Szemerédi (1987) and Gillman (1998), states that this also holds true when sampling from a walk on an expander graph. This is particularly useful in the theory of derandomization, since sampling according to an expander walk uses many fewer random bits than sampling independently.
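The following sketch illustrates the idea on the Petersen graph, used here purely as a small expander: a single random walk estimates the measure of a marked vertex set using one random neighbor choice per step, far fewer random bits than independent uniform sampling would need. The marked set, seed, and walk length are arbitrary choices of this illustration.

```python
import random

def petersen_adj_list():
    nbrs = {v: [] for v in range(10)}
    for i in range(5):
        for u, v in ((i, (i + 1) % 5), (5 + i, 5 + (i + 2) % 5), (i, 5 + i)):
            nbrs[u].append(v)
            nbrs[v].append(u)
    return nbrs

nbrs = petersen_adj_list()
marked = {0, 1, 5, 6}              # target set of measure 0.4
rng = random.Random(1)
v, hits, steps = 0, 0, 20000
for _ in range(steps):
    v = rng.choice(nbrs[v])        # one random neighbor index per step
    hits += v in marked
estimate = hits / steps
# The walk's running average concentrates around the true measure 0.4,
# which is the content of the expander walk sampling lemma.
assert abs(estimate - 0.4) < 0.1
```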
=== AKS sorting network and approximate halvers ===
Sorting networks take a set of inputs and perform a series of parallel steps to sort the inputs. A parallel step consists of performing any number of disjoint comparisons and potentially swapping pairs of compared inputs. The depth of a network is given by the number of parallel steps it takes. Expander graphs play an important role in the AKS sorting network, which achieves depth O(log n). While this is asymptotically the best known depth for a sorting network, the reliance on expanders makes the constant bound too large for practical use.
Within the AKS sorting network, expander graphs are used to construct bounded depth ε-halvers. An ε-halver takes as input a length n permutation of (1, …, n) and halves the inputs into two disjoint sets A and B such that for each integer k ≤ n⁄2 at most εk of the k smallest inputs are in B and at most εk of the k largest inputs are in A. The sets A and B are an ε-halving.
Following Ajtai, Komlós & Szemerédi (1983), a depth-d ε-halver can be constructed as follows. Take an n-vertex, degree-d bipartite expander with parts X and Y of equal size such that every subset of vertices of size at most εn has at least (1 − ε)/ε times as many neighbors.
The vertices of the graph can be thought of as registers that contain inputs and the edges can be thought of as wires that compare the inputs of two registers. At the start, arbitrarily place half of the inputs in X and half of the inputs in Y and decompose the edges into d perfect matchings. The goal is to end with X roughly containing the smaller half of the inputs and Y containing roughly the larger half of the inputs. To achieve this, sequentially process each matching by comparing the registers paired up by the edges of this matching and correct any inputs that are out of order. Specifically, for each edge of the matching, if the larger input is in the register in X and the smaller input is in the register in Y, then swap the two inputs so that the smaller one is in X and the larger one is in Y. It is clear that this process consists of d parallel steps.
After all d rounds, take A to be the set of inputs in registers in X and B to be the set of inputs in registers in Y to obtain an ε-halving. To see this, notice that if a register u in X and a register v in Y are connected by an edge uv, then after the matching with this edge is processed, the input in u is less than that of v. Furthermore, this property remains true throughout the rest of the process. Now, suppose for some k ≤ n⁄2 that more than εk of the inputs (1, …, k) are in B. Then by the expansion properties of the graph, the registers of these inputs in Y are connected with at least (1 − ε)/ε · εk = (1 − ε)k registers in X. Altogether, this constitutes more than k registers, so there must be some register u in X connected to some register v in Y such that the final input of u is not in (1, …, k), while the final input of v is. This violates the previous property, however, and thus the output sets A and B must be an ε-halving.
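The comparison process itself can be sketched in a few lines. For brevity this illustration uses random perfect matchings instead of the matchings of an actual expander, so it demonstrates the mechanics of the rounds, not the guaranteed ε-halving property.

```python
import random

def halver_round(X, Y, matching):
    """Process one perfect matching: for each matched pair (i, j), keep the
    smaller input on the X side and the larger on the Y side."""
    for i, j in matching:
        if X[i] > Y[j]:
            X[i], Y[j] = Y[j], X[i]

rng = random.Random(2)
n, d = 16, 5
inputs = list(range(1, n + 1))
rng.shuffle(inputs)
X, Y = inputs[: n // 2], inputs[n // 2:]       # arbitrary initial placement
for _ in range(d):
    idx = list(range(n // 2))
    rng.shuffle(idx)
    matching = list(zip(range(n // 2), idx))   # a random perfect matching
    halver_round(X, Y, matching)
    assert all(X[i] < Y[j] for i, j in matching)   # matched pairs are now ordered
assert sorted(X + Y) == list(range(1, n + 1))      # inputs only move, never vanish
assert 1 in X and n in Y                           # extremes settle on the right sides
```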
== See also ==
Algebraic connectivity
Zig-zag product
Superstrong approximation
Spectral graph theory
== Notes ==
Alexander, Clark (2021). "On Near Optimal Spectral Expander Graphs of Fixed Size". arXiv:2110.01407 [cs.DM].
== References ==
== External links ==
Brief introduction in Notices of the American Mathematical Society
Introductory paper by Michael Nielsen Archived 2016-08-17 at the Wayback Machine
Lecture notes from a course on expanders (by Nati Linial and Avi Wigderson)
Lecture notes from a course on expanders (by Prahladh Harsha)
Definition and application of spectral gap
In the mathematical field of graph theory, a zero-symmetric graph is a connected graph in which each vertex has exactly three incident edges and, for each two vertices, there is a unique symmetry taking one vertex to the other. Such a graph is a vertex-transitive graph but cannot be an edge-transitive graph: the number of symmetries equals the number of vertices, too few to take every edge to every other edge.
The name for this class of graphs was coined by R. M. Foster in a 1966 letter to H. S. M. Coxeter. In the context of group theory, zero-symmetric graphs are also called graphical regular representations of their symmetry groups.
== Examples ==
The smallest zero-symmetric graph is a nonplanar graph with 18 vertices. Its LCF notation is [5,−5]9.
Among planar graphs, the truncated cuboctahedral and truncated icosidodecahedral graphs are also zero-symmetric.
These examples are all bipartite graphs. However, there exist larger examples of zero-symmetric graphs that are not bipartite.
These examples also have three different symmetry classes (orbits) of edges. However, there exist zero-symmetric graphs with only two orbits of edges.
The smallest such graph has 20 vertices, with LCF notation [6,6,-6,-6]5.
== Properties ==
Every finite zero-symmetric graph is a Cayley graph, a property that does not always hold for cubic vertex-transitive graphs more generally and that helps in the solution of combinatorial enumeration tasks concerning zero-symmetric graphs. There are 97687 zero-symmetric graphs on up to 1280 vertices. These graphs form 89% of the cubic Cayley graphs and 88% of all connected vertex-transitive cubic graphs on the same number of vertices.
All known finite connected zero-symmetric graphs contain a Hamiltonian cycle, but it is unknown whether every finite connected zero-symmetric graph is necessarily Hamiltonian. This is a special case of the Lovász conjecture that (with five known exceptions, none of which is zero-symmetric) every finite connected vertex-transitive graph and every finite Cayley graph is Hamiltonian.
== See also ==
Semi-symmetric graph, graphs that have symmetries between every two edges but not between every two vertices (reversing the roles of edges and vertices in the definition of zero-symmetric graphs)
== References ==
In the mathematical field of graph theory, a graph G is symmetric or arc-transitive if, given any two ordered pairs of adjacent vertices {\displaystyle (u_{1},v_{1})} and {\displaystyle (u_{2},v_{2})} of G, there is an automorphism {\displaystyle f:V(G)\rightarrow V(G)} such that {\displaystyle f(u_{1})=u_{2}} and {\displaystyle f(v_{1})=v_{2}.}
In other words, a graph is symmetric if its automorphism group acts transitively on ordered pairs of adjacent vertices (that is, upon edges considered as having a direction). Such a graph is sometimes also called 1-arc-transitive or flag-transitive.
By definition (ignoring u1 and u2), a symmetric graph without isolated vertices must also be vertex-transitive. Since the definition above maps one edge to another, a symmetric graph must also be edge-transitive. However, an edge-transitive graph need not be symmetric, since a—b might map to c—d, but not to d—c. Star graphs are a simple example of being edge-transitive without being vertex-transitive or symmetric. As a further example, semi-symmetric graphs are edge-transitive and regular, but not vertex-transitive.
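These distinctions are easy to verify by brute force on a small example. The sketch below computes the full automorphism group of the star K1,3 and confirms that it is edge-transitive (a single orbit of edges) but not vertex-transitive (the center is fixed by every automorphism).

```python
from itertools import permutations

def automorphisms(n, edges):
    """All vertex permutations preserving the (undirected) edge set."""
    E = {frozenset(e) for e in edges}
    return [p for p in permutations(range(n))
            if {frozenset((p[u], p[v])) for u, v in E} == E]

star = [(0, 1), (0, 2), (0, 3)]          # K_{1,3}: center 0, three leaves
autos = automorphisms(4, star)
assert len(autos) == 6                   # the three leaves permute freely
# one orbit of edges: edge-transitive
assert {frozenset((p[0], p[1])) for p in autos} == {frozenset(e) for e in star}
# the center is fixed by every automorphism: not vertex-transitive
assert {p[0] for p in autos} == {0}
```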
Every connected symmetric graph must thus be both vertex-transitive and edge-transitive, and the converse is true for graphs of odd degree. However, for even degree, there exist connected graphs which are vertex-transitive and edge-transitive, but not symmetric. Such graphs are called half-transitive. The smallest connected half-transitive graph is Holt's graph, with degree 4 and 27 vertices. Confusingly, some authors use the term "symmetric graph" to mean a graph which is vertex-transitive and edge-transitive, rather than an arc-transitive graph. Such a definition would include half-transitive graphs, which are excluded under the definition above.
A distance-transitive graph is one where instead of considering pairs of adjacent vertices (i.e. vertices a distance of 1 apart), the definition covers two pairs of vertices, each the same distance apart. Such graphs are automatically symmetric, by definition.
A t-arc is defined to be a sequence of t + 1 vertices, such that any two consecutive vertices in the sequence are adjacent, and with any repeated vertices being more than 2 steps apart. A t-transitive graph is a graph such that the automorphism group acts transitively on t-arcs, but not on (t + 1)-arcs. Since 1-arcs are simply edges, every symmetric graph of degree 3 or more must be t-transitive for some t, and the value of t can be used to further classify symmetric graphs. The cube is 2-transitive, for example.
Note that conventionally the term "symmetric graph" is not complementary to the term "asymmetric graph," as the latter refers to a graph that has no nontrivial symmetries at all.
== Examples ==
Two basic families of symmetric graphs for any number of vertices are the cycle graphs (of degree 2) and the complete graphs. Further symmetric graphs are formed by the vertices and edges of the regular and quasiregular polyhedra: the cube, octahedron, icosahedron, dodecahedron, cuboctahedron, and icosidodecahedron. Extension of the cube to n dimensions gives the hypercube graphs (with 2n vertices and degree n). Similarly, extension of the octahedron to n dimensions gives the graphs of the cross-polytopes; this family of graphs (with 2n vertices and degree 2n − 2) is sometimes referred to as the cocktail party graphs: they are complete graphs with a set of edges making a perfect matching removed. Additional families of symmetric graphs with an even number of vertices 2n are the evenly split complete bipartite graphs Kn,n and the crown graphs on 2n vertices. Many other symmetric graphs can be classified as circulant graphs (but not all).
The Rado graph forms an example of a symmetric graph with infinitely many vertices and infinite degree.
=== Cubic symmetric graphs ===
Combining the symmetry condition with the restriction that graphs be cubic (i.e. all vertices have degree 3) yields quite a strong condition, and such graphs are rare enough to be listed. They all have an even number of vertices. The Foster census and its extensions provide such lists. The Foster census was begun in the 1930s by Ronald M. Foster while he was employed by Bell Labs, and in 1988 (when Foster was 92) the then current Foster census (listing all cubic symmetric graphs up to 512 vertices) was published in book form. The first thirteen items in the list are cubic symmetric graphs with up to 30 vertices (ten of these are also distance-transitive; the exceptions are as indicated):
Other well known cubic symmetric graphs are the Dyck graph, the Foster graph and the Biggs–Smith graph. The ten distance-transitive graphs listed above, together with the Foster graph and the Biggs–Smith graph, are the only cubic distance-transitive graphs.
== Properties ==
The vertex-connectivity of a symmetric graph is always equal to the degree d. In contrast, for vertex-transitive graphs in general, the vertex-connectivity is bounded below by 2(d + 1)/3.
A t-transitive graph of degree 3 or more has girth at least 2(t − 1). However, there are no finite t-transitive graphs of degree 3 or more for t ≥ 8. In the case of the degree being exactly 3 (cubic symmetric graphs), there are none for t ≥ 6.
== See also ==
Algebraic graph theory
Gallery of named graphs
Regular map
== References ==
== External links ==
Cubic symmetric graphs (The Foster Census). Data files for all cubic symmetric graphs up to 768 vertices, and some cubic graphs with up to 1000 vertices. Gordon Royle, updated February 2001, retrieved 2009-04-18.
Trivalent (cubic) symmetric graphs on up to 10000 vertices. Marston Conder, 2011.
In graph theory, the Lovász conjecture (1969) is a classical problem on Hamiltonian paths in graphs. It says:
Every finite connected vertex-transitive graph contains a Hamiltonian path.
Originally László Lovász stated the problem in the opposite way, but this version became standard. In 1996, László Babai published a conjecture sharply contradicting this conjecture, but both conjectures remain wide open. It is not even known whether a single counterexample would necessarily lead to a series of counterexamples.
== Historical remarks ==
The problem of finding Hamiltonian paths in highly symmetric graphs is quite old. As Donald Knuth describes it in volume 4 of The Art of Computer Programming, the problem originated in British campanology (bell-ringing). Such Hamiltonian paths and cycles are also closely connected to Gray codes. In each case the constructions are explicit.
== Variants of the Lovász conjecture ==
=== Hamiltonian cycle ===
Another version of Lovász conjecture states that
Every finite connected vertex-transitive graph contains a Hamiltonian cycle except the five known counterexamples.
There are 5 known examples of vertex-transitive graphs with no Hamiltonian cycles (but with Hamiltonian paths): the complete graph {\displaystyle K_{2}}, the Petersen graph, the Coxeter graph and two graphs derived from the Petersen and Coxeter graphs by replacing each vertex with a triangle.
=== Cayley graphs ===
None of the 5 vertex-transitive graphs with no Hamiltonian cycles is a Cayley graph. This observation leads to a weaker version of the conjecture:
Every finite connected Cayley graph contains a Hamiltonian cycle.
The advantage of the Cayley graph formulation is that such graphs correspond to a finite group {\displaystyle G} and a generating set {\displaystyle S}. Thus one can ask for which {\displaystyle G} and {\displaystyle S} the conjecture holds rather than attack it in full generality.
=== Directed Cayley graph ===
For directed Cayley graphs (digraphs) the Lovász conjecture is false. Various counterexamples were obtained by Robert Alexander Rankin. Still, many of the below results hold in this restrictive setting.
== Special cases ==
Every directed Cayley graph of an abelian group has a Hamiltonian path; however, every cyclic group whose order is not a prime power has a directed Cayley graph that does not have a Hamiltonian cycle.
In 1986, D. Witte proved that the Lovász conjecture holds for the Cayley graphs of p-groups. It is open even for dihedral groups, although for special sets of generators some progress has been made.
For the symmetric group {\displaystyle S_{n}}, there are many attractive generating sets. For example, the Lovász conjecture holds in the following cases of generating sets:
{\displaystyle a=(1,2,\dots ,n),b=(1,2)} (long cycle and a transposition).
{\displaystyle s_{1}=(1,2),s_{2}=(2,3),\dots ,s_{n-1}=(n-1,n)} (Coxeter generators). In this case a Hamiltonian cycle is generated by the Steinhaus–Johnson–Trotter algorithm.
any set of transpositions corresponding to a labelled tree on {\displaystyle \{1,2,\dots ,n\}}.
{\displaystyle a=(1,2),b=(1,2)(3,4)\cdots ,c=(2,3)(4,5)\cdots }
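For the Coxeter generators the Hamiltonian cycle is explicit: the Steinhaus–Johnson–Trotter order lists every permutation, each obtained from the previous one by a single adjacent transposition, and the last permutation differs from the first by one as well, closing the cycle in the Cayley graph. A minimal sketch:

```python
def sjt(n):
    """Steinhaus-Johnson-Trotter: yield all permutations of 1..n so that
    consecutive ones differ by exactly one adjacent transposition."""
    perm = list(range(1, n + 1))
    direction = {v: -1 for v in perm}            # every value starts pointing left
    yield tuple(perm)
    while True:
        mobile = 0                               # largest value pointing at a smaller one
        for i, v in enumerate(perm):
            j = i + direction[v]
            if 0 <= j < n and perm[j] < v and v > mobile:
                mobile = v
        if mobile == 0:
            return
        i = perm.index(mobile)
        j = i + direction[mobile]
        perm[i], perm[j] = perm[j], perm[i]      # move by one adjacent transposition
        for v in perm:
            if v > mobile:                       # larger values reverse direction
                direction[v] = -direction[v]
        yield tuple(perm)

def adjacent_swap(p, q):
    d = [i for i in range(len(p)) if p[i] != q[i]]
    return d == [d[0], d[0] + 1] and p[d[0]] == q[d[0] + 1] and p[d[0] + 1] == q[d[0]]

perms = list(sjt(4))
assert len(perms) == 24 and len(set(perms)) == 24           # a Hamiltonian path...
assert all(adjacent_swap(perms[t], perms[t + 1]) for t in range(23))
assert adjacent_swap(perms[-1], perms[0])                   # ...that closes into a cycle
```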
Stong has shown that the conjecture holds for the Cayley graph of the wreath product Zm wr Zn with the natural minimal generating set when m is either even or three. In particular this holds for the cube-connected cycles, which can be generated as the Cayley graph of the wreath product Z2 wr Zn.
== General groups ==
For general finite groups, only a few results are known:
{\displaystyle S=\{a,b\},(ab)^{2}=1} (Rankin generators)
{\displaystyle S=\{a,b,c\},a^{2}=b^{2}=c^{2}=[a,b]=1} (Rapaport–Strasser generators)
{\displaystyle S=\{a,b,c\},a^{2}=1,c=a^{-1}ba} (Pak–Radoičić generators)
{\displaystyle S=\{a,b\},a^{2}=b^{s}=(ab)^{3}=1,} where {\displaystyle |G|\equiv s\equiv 2{\pmod {4}}} (here we have a (2, s, 3)-presentation; Glover–Marušič theorem).
Finally, it is known that for every finite group {\displaystyle G} there exists a generating set of size at most {\displaystyle \log _{2}|G|} such that the corresponding Cayley graph is Hamiltonian (Pak–Radoičić). This result is based on the classification of finite simple groups.
The Lovász conjecture was also established for random generating sets of size {\displaystyle \Omega (\log ^{5}|G|)}.
== References ==
In the mathematical field of graph theory, a semi-symmetric graph is an undirected graph that is edge-transitive and regular, but not vertex-transitive. In other words, a graph is semi-symmetric if each vertex has the same number of incident edges, and there is a symmetry taking any of the graph's edges to any other of its edges, but there is some pair of vertices such that no symmetry maps the first into the second.
== Properties ==
A semi-symmetric graph must be bipartite, and its automorphism group must act transitively on each of the two vertex sets of the bipartition (in fact, regularity is not required for this property to hold). For instance, in the diagram of the Folkman graph shown here, green vertices can not be mapped to red ones by any automorphism, but every two vertices of the same color are symmetric with each other.
== History ==
Semi-symmetric graphs were first studied by E. Dauber, a student of F. Harary, in a paper, no longer available, titled "On line- but not point-symmetric graphs". This was seen by Jon Folkman, whose paper, published in 1967, includes the smallest semi-symmetric graph, now known as the Folkman graph, on 20 vertices.
The term "semi-symmetric" was first used by Klin et al. in a paper they published in 1978.
== Cubic graphs ==
The smallest cubic semi-symmetric graph (that is, one in which each vertex is incident to exactly three edges) is the Gray graph on 54 vertices. It was first observed to be semi-symmetric by Bouwer (1968). It was proven to be the smallest cubic semi-symmetric graph by Dragan Marušič and Aleksander Malnič.
All the cubic semi-symmetric graphs on up to 10000 vertices are known. According to Conder, Malnič, Marušič and Potočnik, the four smallest possible cubic semi-symmetric graphs after the Gray graph are the Iofinova–Ivanov graph on 110 vertices, the Ljubljana graph on 112 vertices, a graph on 120 vertices with girth 8 and the Tutte 12-cage.
== References ==
== External links ==
Weisstein, Eric W., "Semisymmetric Graph", MathWorld
In graph theory, a branch of mathematics, an undirected graph is called an asymmetric graph if it has no nontrivial symmetries.
Formally, an automorphism of a graph is a permutation p of its vertices with the property that any two vertices u and v are adjacent if and only if p(u) and p(v) are adjacent.
The identity mapping of a graph is always an automorphism, and is called the trivial automorphism of the graph. An asymmetric graph is a graph for which there are no other automorphisms.
Note that the term "asymmetric graph" is not a negation of the term "symmetric graph," as the latter refers to a stronger condition than possessing nontrivial symmetries.
== Examples ==
The smallest asymmetric non-trivial graphs have 6 vertices. The smallest asymmetric regular graphs have ten vertices; there exist 10-vertex asymmetric graphs that are 4-regular and 5-regular. One of the five smallest asymmetric cubic graphs is the twelve-vertex Frucht graph discovered in 1939. According to a strengthened version of Frucht's theorem, there are infinitely many asymmetric cubic graphs.
== Properties ==
The class of asymmetric graphs is closed under complements: a graph G is asymmetric if and only if its complement is. Any n-vertex asymmetric graph can be made symmetric by adding and removing a total of at most n/2 + o(n) edges.
== Random graphs ==
The proportion of graphs on n vertices with a nontrivial automorphism tends to zero as n grows, which is informally expressed as "almost all finite graphs are asymmetric". In contrast, again informally, "almost all infinite graphs have nontrivial symmetries." More specifically, countably infinite random graphs in the Erdős–Rényi model are, with probability 1, isomorphic to the highly symmetric Rado graph.
== Trees ==
The smallest asymmetric tree has seven vertices: it consists of three paths of lengths 1, 2, and 3, linked at a common endpoint. In contrast to the situation for graphs, almost all trees are symmetric. In particular, if a tree is chosen uniformly at random among all trees on n labeled nodes, then with probability tending to 1 as n increases, the tree will contain some two leaves adjacent to the same node and will have symmetries exchanging these two leaves.
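The seven-vertex tree described above is small enough to check exhaustively. The sketch below (plain Python, no external libraries) enumerates all 7! = 5040 vertex permutations and confirms that only the identity preserves the edge set:

```python
from itertools import permutations

# The smallest asymmetric tree (7 vertices): paths of lengths 1, 2 and 3
# glued at the common endpoint 0.
edges = {frozenset(e) for e in [(0, 1),                 # path of length 1
                                (0, 2), (2, 3),         # path of length 2
                                (0, 4), (4, 5), (5, 6)]}  # path of length 3

def automorphisms(vertices, edges):
    """Count vertex permutations that map the edge set onto itself."""
    count = 0
    for p in permutations(vertices):
        if all(frozenset((p[u], p[v])) in edges for u, v in map(tuple, edges)):
            count += 1
    return count

print(automorphisms(range(7), edges))  # 1 -> only the trivial automorphism
```

For contrast, the same function reports 2 automorphisms for a path on three vertices (identity plus the end-to-end reversal), so the tree above really is the interesting case.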
== References ==
In graph-theoretic mathematics, a biregular graph or semiregular bipartite graph is a bipartite graph $G=(U,V,E)$ for which every two vertices on the same side of the given bipartition have the same degree as each other. If the degree of the vertices in $U$ is $x$ and the degree of the vertices in $V$ is $y$, then the graph is said to be $(x,y)$-biregular.
== Example ==
Every complete bipartite graph $K_{a,b}$ is $(b,a)$-biregular.
The rhombic dodecahedron is another example; it is (3,4)-biregular.
== Vertex counts ==
An
(
x
,
y
)
{\displaystyle (x,y)}
-biregular graph
G
=
(
U
,
V
,
E
)
{\displaystyle G=(U,V,E)}
must satisfy the equation
x
|
U
|
=
y
|
V
|
{\displaystyle x|U|=y|V|}
. This follows from a simple double counting argument: the number of endpoints of edges in
U
{\displaystyle U}
is
x
|
U
|
{\displaystyle x|U|}
, the number of endpoints of edges in
V
{\displaystyle V}
is
y
|
V
|
{\displaystyle y|V|}
, and each edge contributes the same amount (one) to both numbers.
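The double-counting argument is easy to replicate in code. The sketch below builds $K_{3,5}$, which is $(5,3)$-biregular, and checks $x|U| = y|V|$:

```python
# Double-counting check on the complete bipartite graph K_{3,5},
# which is (5,3)-biregular: every vertex in U has degree 5,
# every vertex in V has degree 3.
U = range(3)
V = range(3, 8)
edges = [(u, v) for u in U for v in V]

deg = {w: 0 for w in list(U) + list(V)}
for u, v in edges:
    deg[u] += 1
    deg[v] += 1

x = deg[0]   # common degree on the U side (5)
y = deg[3]   # common degree on the V side (3)
assert all(deg[u] == x for u in U) and all(deg[v] == y for v in V)
assert x * len(U) == y * len(V) == len(edges)  # 5*3 == 3*5 == 15
print(x, y, len(edges))  # 5 3 15
```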
== Symmetry ==
Every regular bipartite graph is also biregular.
Every edge-transitive graph (disallowing graphs with isolated vertices) that is not also vertex-transitive must be biregular. In particular every edge-transitive graph is either regular or biregular.
== Configurations ==
The Levi graphs of geometric configurations are biregular; a biregular graph is the Levi graph of an (abstract) configuration if and only if its girth is at least six.
== References ==
In mathematics, especially group theory, two elements $a$ and $b$ of a group are conjugate if there is an element $g$ in the group such that $b=gag^{-1}.$
This is an equivalence relation whose equivalence classes are called conjugacy classes. In other words, each conjugacy class is closed under $b=gag^{-1}$ for all elements $g$ in the group.
Members of the same conjugacy class cannot be distinguished by using only the group structure, and therefore share many properties. The study of conjugacy classes of non-abelian groups is fundamental for the study of their structure. For an abelian group, each conjugacy class is a set containing one element (singleton set).
Functions that are constant for members of the same conjugacy class are called class functions.
== Definition ==
Let $G$ be a group. Two elements $a,b\in G$ are conjugate if there exists an element $g\in G$ such that $gag^{-1}=b,$ in which case $b$ is called a conjugate of $a$ and $a$ is called a conjugate of $b.$
In the case of the general linear group $\operatorname {GL} (n)$ of invertible matrices, the conjugacy relation is called matrix similarity.
It can be easily shown that conjugacy is an equivalence relation and therefore partitions $G$ into equivalence classes. (This means that every element of the group belongs to precisely one conjugacy class, and the classes $\operatorname {Cl} (a)$ and $\operatorname {Cl} (b)$ are equal if and only if $a$ and $b$ are conjugate, and disjoint otherwise.) The equivalence class that contains the element $a\in G$ is $$\operatorname {Cl} (a)=\left\{gag^{-1}:g\in G\right\}$$ and is called the conjugacy class of $a.$ The class number of $G$ is the number of distinct (nonequivalent) conjugacy classes. All elements belonging to the same conjugacy class have the same order.
Conjugacy classes may be referred to by describing them, or more briefly by abbreviations such as "6A", meaning "a certain conjugacy class with elements of order 6", and "6B" would be a different conjugacy class with elements of order 6; the conjugacy class 1A is the conjugacy class of the identity which has order 1. In some cases, conjugacy classes can be described in a uniform way; for example, in the symmetric group they can be described by cycle type.
== Examples ==
The symmetric group $S_{3},$ consisting of the 6 permutations of three elements, has three conjugacy classes:
No change $(abc\to abc)$. The single member has order 1.
Transposing two $(abc\to acb,\ abc\to bac,\ abc\to cba)$. The 3 members all have order 2.
A cyclic permutation of all three $(abc\to bca,\ abc\to cab)$. The 2 members both have order 3.
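These class sizes can be checked by brute force. The sketch below (plain Python, permutations in one-line notation on {0, 1, 2}) computes the conjugation orbits directly:

```python
from itertools import permutations

def conj(g, a):
    """Return g a g^{-1}, acting as functions: x -> g(a(g^{-1}(x)))."""
    ginv = tuple(g.index(i) for i in range(len(g)))
    return tuple(g[a[ginv[x]]] for x in range(len(g)))

S3 = list(permutations(range(3)))
classes = []
remaining = set(S3)
while remaining:
    a = remaining.pop()
    cl = {conj(g, a) for g in S3}   # orbit of a under conjugation
    remaining -= cl
    classes.append(cl)

print(sorted(len(c) for c in classes))  # [1, 2, 3]
```

The sizes 1, 2 and 3 correspond to the identity, the two 3-cycles, and the three transpositions respectively.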
These three classes also correspond to the classification of the isometries of an equilateral triangle.
The symmetric group $S_{4},$ consisting of the 24 permutations of four elements, has five conjugacy classes, listed with their description, cycle type, member order, and members:
No change. Cycle type = [1^4]. Order = 1. Members = { (1, 2, 3, 4) }. The single row containing this conjugacy class is shown as a row of black circles in the adjacent table.
Interchanging two (other two remain unchanged). Cycle type = [1^2 2^1]. Order = 2. Members = { (1, 2, 4, 3), (1, 4, 3, 2), (1, 3, 2, 4), (4, 2, 3, 1), (3, 2, 1, 4), (2, 1, 3, 4) }. The 6 rows containing this conjugacy class are highlighted in green in the adjacent table.
A cyclic permutation of three (other one remains unchanged). Cycle type = [1^1 3^1]. Order = 3. Members = { (1, 3, 4, 2), (1, 4, 2, 3), (3, 2, 4, 1), (4, 2, 1, 3), (4, 1, 3, 2), (2, 4, 3, 1), (3, 1, 2, 4), (2, 3, 1, 4) }. The 8 rows containing this conjugacy class are shown with normal print (no boldface or color highlighting) in the adjacent table.
A cyclic permutation of all four. Cycle type = [4^1]. Order = 4. Members = { (2, 3, 4, 1), (2, 4, 1, 3), (3, 1, 4, 2), (3, 4, 2, 1), (4, 1, 2, 3), (4, 3, 1, 2) }. The 6 rows containing this conjugacy class are highlighted in orange in the adjacent table.
Interchanging two, and also the other two. Cycle type = [2^2]. Order = 2. Members = { (2, 1, 4, 3), (4, 3, 2, 1), (3, 4, 1, 2) }. The 3 rows containing this conjugacy class are shown with boldface entries in the adjacent table.
The proper rotations of the cube, which can be characterized by permutations of the body diagonals, are also described by conjugation in $S_{4}.$
In general, the number of conjugacy classes in the symmetric group $S_{n}$ is equal to the number of integer partitions of $n.$ This is because each conjugacy class corresponds to exactly one partition of $\{1,2,\ldots ,n\}$ into cycles, up to permutation of the elements of $\{1,2,\ldots ,n\}.$
In general, the Euclidean group can be studied by conjugation of isometries in Euclidean space.
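The correspondence between conjugacy classes of $S_n$ and integer partitions of $n$ can be verified numerically for small $n$. The following sketch counts conjugation orbits by brute force and compares them with an independently computed partition count:

```python
from itertools import permutations

def compose(p, q):
    """Composition of permutations as tuples: (p∘q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def num_conjugacy_classes(n):
    group = list(permutations(range(n)))
    seen, classes = set(), 0
    for a in group:
        if a in seen:
            continue
        classes += 1
        for g in group:                       # orbit of a under conjugation
            seen.add(compose(compose(g, a), inverse(g)))
    return classes

def num_partitions(n):
    """Number of integer partitions of n, by direct recursion on the largest part."""
    def count(n, max_part):
        if n == 0:
            return 1
        return sum(count(n - k, k) for k in range(min(n, max_part), 0, -1))
    return count(n, n)

for n in range(1, 6):
    assert num_conjugacy_classes(n) == num_partitions(n)
print([num_partitions(n) for n in range(1, 6)])  # [1, 2, 3, 5, 7]
```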
In general, the Euclidean group can be studied by conjugation of isometries in Euclidean space.
Example
Let $G=S_{3}$, $a=(23)$, $x=(123)$ and $x^{-1}=(321)$. Then
$$xax^{-1}=(123)(23)(321)=(31),$$
so $(31)$ is a conjugate of $(23)$.
== Properties ==
The identity element is always the only element in its class, that is $\operatorname {Cl} (e)=\{e\}.$
If $G$ is abelian then $gag^{-1}=a$ for all $a,g\in G$, i.e. $\operatorname {Cl} (a)=\{a\}$ for all $a\in G$ (and the converse is also true: if all conjugacy classes are singletons then $G$ is abelian).
If two elements $a,b\in G$ belong to the same conjugacy class (that is, if they are conjugate), then they have the same order. More generally, every statement about $a$ can be translated into a statement about $b=gag^{-1},$ because the map $\varphi (x)=gxg^{-1}$ is an automorphism of $G$ called an inner automorphism. See the next property for an example.
If $a$ and $b$ are conjugate, then so are their powers $a^{k}$ and $b^{k}.$ (Proof: if $a=gbg^{-1}$ then $a^{k}=\left(gbg^{-1}\right)\left(gbg^{-1}\right)\cdots \left(gbg^{-1}\right)=gb^{k}g^{-1}.$) Thus taking $k$th powers gives a map on conjugacy classes, and one may consider which conjugacy classes are in its preimage. For example, in the symmetric group, the square of an element of type (3)(2) (a 3-cycle and a 2-cycle) is an element of type (3), therefore one of the power-up classes of (3) is the class (3)(2) (where $a$ is a power-up class of $a^{k}$).
An element $a\in G$ lies in the center $\operatorname {Z} (G)$ of $G$ if and only if its conjugacy class has only one element, $a$ itself. More generally, if $\operatorname {C} _{G}(a)$ denotes the centralizer of $a\in G,$ i.e., the subgroup consisting of all elements $g$ such that $ga=ag,$ then the index $\left[G:\operatorname {C} _{G}(a)\right]$ is equal to the number of elements in the conjugacy class of $a$ (by the orbit-stabilizer theorem).
Take $\sigma \in S_{n}$ and let $m_{1},m_{2},\ldots ,m_{s}$ be the distinct integers which appear as lengths of cycles in the cycle type of $\sigma$ (including 1-cycles). Let $k_{i}$ be the number of cycles of length $m_{i}$ in $\sigma$ for each $i=1,2,\ldots ,s$ (so that $\sum _{i=1}^{s}k_{i}m_{i}=n$). Then the number of conjugates of $\sigma$ is:
$${\frac {n!}{\left(k_{1}!\,m_{1}^{k_{1}}\right)\left(k_{2}!\,m_{2}^{k_{2}}\right)\cdots \left(k_{s}!\,m_{s}^{k_{s}}\right)}}.$$
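As a sanity check of this counting formula, consider $\sigma = (1\,2\,3)(4\,5) \in S_5$, with cycle lengths $m=(2,3)$ and multiplicities $k=(1,1)$; the formula gives $5!/(2\cdot 3)=20$, which matches a brute-force enumeration:

```python
from itertools import permutations
from math import factorial

# σ = (1 2 3)(4 5) on {0,...,4}, written in one-line notation:
# 0->1, 1->2, 2->0, 3->4, 4->3
sigma = (1, 2, 0, 4, 3)
n = 5

def conj(g, a):
    """Return g a g^{-1} acting as functions: x -> g(a(g^{-1}(x)))."""
    ginv = tuple(g.index(i) for i in range(len(g)))
    return tuple(g[a[ginv[x]]] for x in range(len(g)))

# All conjugates of σ, computed directly.
brute = {conj(g, sigma) for g in permutations(range(n))}

# Formula: n! / ((k1! m1^k1)(k2! m2^k2)) with m = (2, 3), k = (1, 1).
formula = factorial(n) // ((factorial(1) * 2**1) * (factorial(1) * 3**1))

print(len(brute), formula)  # 20 20
```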
== Conjugacy as group action ==
For any two elements $g,x\in G,$ let $g\cdot x:=gxg^{-1}.$ This defines a group action of $G$ on $G.$ The orbits of this action are the conjugacy classes, and the stabilizer of a given element is the element's centralizer.
Similarly, we can define a group action of $G$ on the set of all subsets of $G,$ by writing $g\cdot S:=gSg^{-1},$ or on the set of the subgroups of $G.$
== Conjugacy class equation ==
If $G$ is a finite group, then for any group element $a,$ the elements in the conjugacy class of $a$ are in one-to-one correspondence with cosets of the centralizer $\operatorname {C} _{G}(a).$ This can be seen by observing that any two elements $b$ and $c$ belonging to the same coset (and hence, $b=cz$ for some $z$ in the centralizer $\operatorname {C} _{G}(a)$) give rise to the same element when conjugating $a$:
$$bab^{-1}=cza(cz)^{-1}=czaz^{-1}c^{-1}=cazz^{-1}c^{-1}=cac^{-1}.$$
That can also be seen from the orbit-stabilizer theorem, when considering the group as acting on itself through conjugation, so that orbits are conjugacy classes and stabilizer subgroups are centralizers. The converse holds as well.
Thus the number of elements in the conjugacy class of $a$ is the index $\left[G:\operatorname {C} _{G}(a)\right]$ of the centralizer $\operatorname {C} _{G}(a)$ in $G$; hence the size of each conjugacy class divides the order of the group.
Furthermore, if we choose a single representative element $x_{i}$ from every conjugacy class, we infer from the disjointness of the conjugacy classes that
$$|G|=\sum _{i}\left[G:\operatorname {C} _{G}(x_{i})\right],$$
where $\operatorname {C} _{G}(x_{i})$ is the centralizer of the element $x_{i}.$ Observing that each element of the center $\operatorname {Z} (G)$ forms a conjugacy class containing just itself gives rise to the class equation:
$$|G|=|{\operatorname {Z} (G)}|+\sum _{i}\left[G:\operatorname {C} _{G}(x_{i})\right],$$
where the sum is over a representative element from each conjugacy class that is not in the center.
Knowledge of the divisors of the group order $|G|$ can often be used to gain information about the order of the center or of the conjugacy classes.
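The class equation can be verified directly for a small group. The sketch below checks it for $S_4$, whose center is trivial and whose non-central classes have sizes 6, 8, 6 and 3, so that $24 = 1 + 6 + 8 + 6 + 3$:

```python
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

n = 4
G = list(permutations(range(n)))

# The center: elements commuting with everything.
center = [a for a in G if all(compose(g, a) == compose(a, g) for g in G)]

# Conjugacy classes as orbits of the conjugation action.
classes, seen = [], set()
for a in G:
    if a in seen:
        continue
    cl = {compose(compose(g, a), inverse(g)) for g in G}
    seen |= cl
    classes.append(cl)

noncentral = [len(c) for c in classes if len(c) > 1]
assert len(G) == len(center) + sum(noncentral)  # the class equation
print(len(center), sorted(noncentral))  # 1 [3, 6, 6, 8]
```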
=== Example ===
Consider a finite $p$-group $G$ (that is, a group with order $p^{n},$ where $p$ is a prime number and $n>0$). We are going to prove that every finite $p$-group has a non-trivial center.
Since the order of any conjugacy class of $G$ must divide the order of $G,$ it follows that each conjugacy class $H_{i}$ that is not contained in the center has order a power of $p$, say $p^{k_{i}},$ where $0<k_{i}<n.$ But then the class equation requires that
$$|G|=p^{n}=|{\operatorname {Z} (G)}|+\sum _{i}p^{k_{i}}.$$
From this we see that $p$ must divide $|{\operatorname {Z} (G)}|,$ so $|\operatorname {Z} (G)|>1.$
In particular, when $n=2,$ then $G$ is an abelian group, since any non-trivial group element is of order $p$ or $p^{2}.$
If some element $a$ of $G$ is of order $p^{2},$ then $G$ is isomorphic to the cyclic group of order $p^{2},$ hence abelian. On the other hand, if every non-trivial element in $G$ is of order $p,$ then by the conclusion above $|\operatorname {Z} (G)|>1,$ so $|\operatorname {Z} (G)|=p$ or $p^{2}.$
We only need to consider the case $|\operatorname {Z} (G)|=p.$ Then there is an element $b$ of $G$ which is not in the center of $G.$ Note that $\operatorname {C} _{G}(b)$ contains $b$ together with the center, which does not contain $b$ but has at least $p$ elements. Hence the order of $\operatorname {C} _{G}(b)$ is strictly larger than $p,$ so $\left|\operatorname {C} _{G}(b)\right|=p^{2},$ and thus $b$ is an element of the center of $G,$ a contradiction. Hence $G$ is abelian and in fact isomorphic to the direct product of two cyclic groups each of order $p.$
== Conjugacy of subgroups and general subsets ==
More generally, given any subset $S\subseteq G$ ($S$ not necessarily a subgroup), define a subset $T\subseteq G$ to be conjugate to $S$ if there exists some $g\in G$ such that $T=gSg^{-1}.$ Let $\operatorname {Cl} (S)$ be the set of all subsets $T\subseteq G$ such that $T$ is conjugate to $S.$
A frequently used theorem is that, given any subset $S\subseteq G,$ the index of $\operatorname {N} (S)$ (the normalizer of $S$) in $G$ equals the cardinality of $\operatorname {Cl} (S)$:
$$|{\operatorname {Cl} (S)}|=[G:N(S)].$$
This follows since, if $g,h\in G,$ then $gSg^{-1}=hSh^{-1}$ if and only if $g^{-1}h\in \operatorname {N} (S),$ in other words, if and only if $g$ and $h$ are in the same coset of $\operatorname {N} (S).$
By using $S=\{a\},$ this formula generalizes the one given earlier for the number of elements in a conjugacy class.
The above is particularly useful when talking about subgroups of $G.$
The subgroups can thus be divided into conjugacy classes, with two subgroups belonging to the same class if and only if they are conjugate.
Conjugate subgroups are isomorphic, but isomorphic subgroups need not be conjugate. For example, an abelian group may have two different subgroups which are isomorphic, but they are never conjugate.
== Geometric interpretation ==
Conjugacy classes in the fundamental group of a path-connected topological space can be thought of as equivalence classes of free loops under free homotopy.
== Conjugacy class and irreducible representations in finite group ==
In any finite group, the number of nonisomorphic irreducible representations over the complex numbers is precisely the number of conjugacy classes.
== See also ==
Topological conjugacy – Concept in topology
FC-group – Group in group theory mathematics
Conjugacy-closed subgroup
== Notes ==
== References ==
Grillet, Pierre Antoine (2007). Abstract algebra. Graduate texts in mathematics. Vol. 242 (2 ed.). Springer. ISBN 978-0-387-71567-4.
== External links ==
"Conjugate elements", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
In graph theory, a circulant graph is an undirected graph acted on by a cyclic group of symmetries which takes any vertex to any other vertex. It is sometimes called a cyclic graph, but this term has other meanings.
== Equivalent definitions ==
Circulant graphs can be described in several equivalent ways:
The automorphism group of the graph includes a cyclic subgroup that acts transitively on the graph's vertices. In other words, the graph has an automorphism which is a cyclic permutation of its vertices.
The graph has an adjacency matrix that is a circulant matrix.
The n vertices of the graph can be numbered from 0 to n − 1 in such a way that, if some two vertices numbered x and (x + d) mod n are adjacent, then every two vertices numbered z and (z + d) mod n are adjacent.
The graph can be drawn (possibly with crossings) so that its vertices lie on the corners of a regular polygon, and every rotational symmetry of the polygon is also a symmetry of the drawing.
The graph is a Cayley graph of a cyclic group.
== Examples ==
Every cycle graph is a circulant graph, as is every crown graph with number of vertices congruent to 2 modulo 4.
The Paley graphs of order n (where n is a prime number congruent to 1 modulo 4) is a graph in which the vertices are the numbers from 0 to n − 1 and two vertices are adjacent if their difference is a quadratic residue modulo n. Since the presence or absence of an edge depends only on the difference modulo n of two vertex numbers, any Paley graph is a circulant graph.
Every Möbius ladder is a circulant graph, as is every complete graph. A complete bipartite graph is a circulant graph if it has the same number of vertices on both sides of its bipartition.
If two numbers m and n are relatively prime, then the m × n rook's graph (a graph that has a vertex for each square of an m × n chessboard and an edge for each two squares that a rook can move between in a single move) is a circulant graph. This is because its symmetries include as a subgroup the cyclic group Cmn ≃ Cm × Cn. More generally, in this case, the tensor product of graphs between any m- and n-vertex circulants is itself a circulant.
Many of the known lower bounds on Ramsey numbers come from examples of circulant graphs that have small maximum cliques and small maximum independent sets.
== A specific example ==
The circulant graph $C_{n}^{s_{1},\ldots ,s_{k}}$ with jumps $s_{1},\ldots ,s_{k}$ is defined as the graph with $n$ nodes labeled $0,1,\ldots ,n-1$ where each node $i$ is adjacent to the $2k$ nodes $i\pm s_{1},\ldots ,i\pm s_{k}{\pmod {n}}$.
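A short sketch makes the "circulant adjacency matrix" characterization concrete: building $C_8^{1,2}$ row by row, each row is the previous one rotated a single position to the right.

```python
def circulant_adjacency(n, jumps):
    """Adjacency matrix of the circulant graph C_n^{s1,...,sk}."""
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for s in jumps:
            A[i][(i + s) % n] = 1
            A[i][(i - s) % n] = 1
    return A

A = circulant_adjacency(8, [1, 2])

# Each row is the previous row rotated one step to the right,
# i.e. the matrix is circulant.
for i in range(1, 8):
    assert A[i] == A[i - 1][-1:] + A[i - 1][:-1]
print(A[0])  # [0, 1, 1, 0, 0, 0, 1, 1]
```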
The graph $C_{n}^{s_{1},\ldots ,s_{k}}$ is connected if and only if $\gcd(n,s_{1},\ldots ,s_{k})=1$.
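This gcd criterion is easy to check against a direct graph traversal. The sketch below compares the two for a few small cases:

```python
from math import gcd
from functools import reduce

def is_connected(n, jumps):
    """Iterative traversal of the circulant graph C_n^{jumps} from node 0."""
    seen, stack = {0}, [0]
    while stack:
        i = stack.pop()
        for s in jumps:
            for j in ((i + s) % n, (i - s) % n):
                if j not in seen:
                    seen.add(j)
                    stack.append(j)
    return len(seen) == n

for n, jumps in [(10, [2, 5]), (10, [2, 4]), (12, [3, 4]), (12, [4, 6])]:
    g = reduce(gcd, jumps, n)           # gcd(n, s1, ..., sk)
    assert is_connected(n, jumps) == (g == 1)
    print(n, jumps, g == 1)
```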
If $1\leq s_{1}<\cdots <s_{k}$ are fixed integers, then the number of spanning trees is $t(C_{n}^{s_{1},\ldots ,s_{k}})=na_{n}^{2},$ where $a_{n}$ satisfies a recurrence relation of order $2^{s_{k}-1}$.
In particular, $t(C_{n}^{1,2})=nF_{n}^{2},$ where $F_{n}$ is the $n$-th Fibonacci number.
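This identity can be checked with the matrix-tree (Kirchhoff) theorem, which says the number of spanning trees equals any cofactor of the graph Laplacian. A minimal sketch using exact rational arithmetic for the determinant:

```python
from fractions import Fraction

def spanning_trees(n, jumps):
    """Spanning-tree count via the matrix-tree theorem:
    determinant of the Laplacian with row/column 0 deleted."""
    # Laplacian L = D - A of the circulant graph C_n^{jumps}.
    L = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        for s in jumps:
            for j in ((i + s) % n, (i - s) % n):
                L[i][j] -= 1
                L[i][i] += 1
    M = [row[1:] for row in L[1:]]       # delete row/column 0

    # Exact Gaussian elimination over the rationals.
    det = Fraction(1)
    m = len(M)
    for c in range(m):
        pivot = next(r for r in range(c, m) if M[r][c] != 0)
        if pivot != c:
            M[c], M[pivot] = M[pivot], M[c]
            det = -det
        det *= M[c][c]
        for r in range(c + 1, m):
            f = M[r][c] / M[c][c]
            for k in range(c, m):
                M[r][k] -= f * M[c][k]
    return int(det)

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in range(5, 10):
    assert spanning_trees(n, [1, 2]) == n * fib(n) ** 2
print(spanning_trees(8, [1, 2]))  # 8 * 21**2 = 3528
```

As a sanity check, $C_5^{1,2}$ is the complete graph $K_5$, and the function returns $125 = 5^3$, matching Cayley's formula.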
== Self-complementary circulants ==
A self-complementary graph is a graph in which replacing every edge by a non-edge and vice versa produces an isomorphic graph.
For instance, a five-vertex cycle graph is self-complementary, and is also a circulant graph. More generally every Paley graph of prime order is a self-complementary circulant graph. Horst Sachs showed that, if a number n has the property that every prime factor of n is congruent to 1 modulo 4, then there exists a self-complementary circulant with n vertices. He conjectured that this condition is also necessary: that no other values of n allow a self-complementary circulant to exist. The conjecture was proven some 40 years later, by Vilfred.
== Ádám's conjecture ==
Define a circulant numbering of a circulant graph to be a labeling of the vertices of the graph by the numbers from 0 to n − 1 in such a way that, if some two vertices numbered x and y are adjacent, then every two vertices numbered z and (z − x + y) mod n are adjacent. Equivalently, a circulant numbering is a numbering of the vertices for which the adjacency matrix of the graph is a circulant matrix.
Let a be an integer that is relatively prime to n, and let b be any integer. Then the linear function that takes a number x to ax + b transforms a circulant numbering to another circulant numbering. András Ádám conjectured that these linear maps are the only ways of renumbering a circulant graph while preserving the circulant property: that is, if G and H are isomorphic circulant graphs, with different numberings, then there is a linear map that transforms the numbering for G into the numbering for H. However, Ádám's conjecture is now known to be false. A counterexample is given by graphs G and H with 16 vertices each; a vertex x in G is connected to the six neighbors x ± 1, x ± 2, and x ± 7 modulo 16, while in H the six neighbors are x ± 2, x ± 3, and x ± 5 modulo 16. These two graphs are isomorphic, but their isomorphism cannot be realized by a linear map.
Toida's conjecture refines Ádám's conjecture by considering only a special class of circulant graphs, in which all of the differences between adjacent graph vertices are relatively prime to the number of vertices. According to this refined conjecture, these special circulant graphs should have the property that all of their symmetries come from symmetries of the underlying additive group of numbers modulo n. It was proven by two groups in 2001 and 2002.
== Algorithmic questions ==
There is a polynomial-time recognition algorithm for circulant graphs, and the isomorphism problem for circulant graphs can be solved in polynomial time.
== References ==
== External links ==
Weisstein, Eric W. "Circulant Graph". MathWorld.
In the mathematical field of graph theory, a half-transitive graph is a graph that is both vertex-transitive and edge-transitive, but not symmetric. In other words, a graph is half-transitive if its automorphism group acts transitively upon both its vertices and its edges, but not on ordered pairs of linked vertices.
Every connected symmetric graph must be vertex-transitive and edge-transitive, and the converse is true for graphs of odd degree, so that half-transitive graphs of odd degree do not exist. However, there do exist half-transitive graphs of even degree. The smallest half-transitive graph is the Holt graph, with degree 4 and 27 vertices.
== References ==
In the mathematical field of graph theory, an edge-transitive graph is a graph G such that, given any two edges e1 and e2 of G, there is an automorphism of G that maps e1 to e2.
In other words, a graph is edge-transitive if its automorphism group acts transitively on its edges.
== Examples and properties ==
The number of connected simple edge-transitive graphs on n vertices is 1, 1, 2, 3, 4, 6, 5, 8, 9, 13, 7, 19, 10, 16, 25, 26, 12, 28 ... (sequence A095424 in the OEIS)
Edge-transitive graphs include all symmetric graphs, such as the vertices and edges of the cube. Symmetric graphs are also vertex-transitive (if they are connected), but in general edge-transitive graphs need not be vertex-transitive. Every connected edge-transitive graph that is not vertex-transitive must be bipartite, (and hence can be colored with only two colors), and either semi-symmetric or biregular.
Examples of edge- but not vertex-transitive graphs include the complete bipartite graphs $K_{m,n}$ where m ≠ n, which includes the star graphs $K_{1,n}$. For graphs on n vertices, there are (n − 1)/2 such graphs for odd n and (n − 2)/2 for even n.
Additional edge-transitive graphs which are not symmetric can be formed as subgraphs of these complete bipartite graphs in certain cases. Subgraphs of complete bipartite graphs Km,n exist when m and n share a factor greater than 2. When the greatest common factor is 2, subgraphs exist when 2n/m is even or if m = 4 and n is an odd multiple of 6. So edge-transitive subgraphs exist for K3,6, K4,6 and K5,10 but not K4,10. An alternative construction for some edge-transitive graphs is to add vertices to the midpoints of edges of a symmetric graph with v vertices and e edges, creating a bipartite graph with e vertices of degree 2, and v of degree 2e/v.
An edge-transitive graph that is also regular, but still not vertex-transitive, is called semi-symmetric. The Gray graph, a cubic graph on 54 vertices, is an example of a regular graph which is edge-transitive but not vertex-transitive. The Folkman graph, a quartic graph on 20 vertices is the smallest such graph.
The vertex connectivity of an edge-transitive graph always equals its minimum degree.
== See also ==
Edge-transitive (in geometry)
== References ==
== External links ==
Weisstein, Eric W. "Edge-transitive graph". MathWorld.
In physics, relativistic mechanics refers to mechanics compatible with special relativity (SR) and general relativity (GR). It provides a non-quantum mechanical description of a system of particles, or of a fluid, in cases where the velocities of moving objects are comparable to the speed of light c. As a result, classical mechanics is extended correctly to particles traveling at high velocities and energies, and provides a consistent inclusion of electromagnetism with the mechanics of particles. This was not possible in Galilean relativity, where it would be permitted for particles and light to travel at any speed, including faster than light. The foundations of relativistic mechanics are the postulates of special relativity and general relativity. The unification of SR with quantum mechanics is relativistic quantum mechanics, while the corresponding unification of GR with quantum mechanics is quantum gravity, an unsolved problem in physics.
As with classical mechanics, the subject can be divided into "kinematics", the description of motion by specifying positions, velocities and accelerations, and "dynamics", a full description by considering energies, momenta, and angular momenta and their conservation laws, and forces acting on particles or exerted by particles. There is, however, a subtlety: what appears to be "moving" and what is "at rest" (termed "statics" in classical mechanics) depends on the relative motion of observers who measure in frames of reference.
Some definitions and concepts from classical mechanics do carry over to SR, such as force as the time derivative of momentum (Newton's second law), the work done on a particle as the line integral of the force exerted on it along a path, and power as the time derivative of work done. However, there are a number of significant modifications to the remaining definitions and formulae. SR states that motion is relative and the laws of physics are the same for all experimenters irrespective of their inertial reference frames. In addition to modifying notions of space and time, SR forces one to reconsider the concepts of mass, momentum, and energy, all of which are important constructs in Newtonian mechanics. SR shows that these concepts are all different aspects of the same physical quantity in much the same way that it shows space and time to be interrelated.
The equations become more complicated in the more familiar three-dimensional vector calculus formalism, due to the nonlinearity in the Lorentz factor, which accurately accounts for relativistic velocity dependence and the speed limit of all particles and fields. However, they have a simpler and elegant form in four-dimensional spacetime, which includes flat Minkowski space (SR) and curved spacetime (GR), because three-dimensional vectors derived from space and scalars derived from time can be collected into four vectors, or four-dimensional tensors. The six-component angular momentum tensor is sometimes called a bivector because in the 3D viewpoint it is two vectors (one of these, the conventional angular momentum, being an axial vector).
== Relativistic kinematics ==
The relativistic four-velocity, that is the four-vector representing velocity in relativity, is defined as follows:
{\displaystyle {\boldsymbol {\mathbf {U} }}={\frac {d{\boldsymbol {\mathbf {X} }}}{d\tau }}=\left({\frac {cdt}{d\tau }},{\frac {d\mathbf {x} }{d\tau }}\right)}
In the above, {\displaystyle {\tau }} is the proper time of the path through spacetime, called the world line, followed by the object whose velocity the above represents, and {\displaystyle {\boldsymbol {\mathbf {X} }}=(ct,\mathbf {x} )} is the four-position: the coordinates of an event. Due to time dilation, the proper time is the time between two events in a frame of reference where they take place at the same location. The proper time is related to coordinate time t by:
{\displaystyle {\frac {d\tau }{dt}}={\frac {1}{\gamma (\mathbf {v} )}}}
where {\displaystyle {\gamma }(\mathbf {v} )} is the Lorentz factor:
{\displaystyle \gamma (\mathbf {v} )={\frac {1}{\sqrt {1-\mathbf {v} \cdot \mathbf {v} /c^{2}}}}\,\rightleftharpoons \,\gamma (v)={\frac {1}{\sqrt {1-(v/c)^{2}}}}.}
(either version may be quoted) so it follows:
{\displaystyle {\boldsymbol {\mathbf {U} }}=\gamma (\mathbf {v} )(c,\mathbf {v} )}
The last three components, excepting the factor of {\displaystyle {\gamma (\mathbf {v} )}}, give the velocity as seen by the observer in their own reference frame. The factor {\displaystyle {\gamma (\mathbf {v} )}} is determined by the velocity {\displaystyle \mathbf {v} } between the observer's reference frame and the object's frame, which is the frame in which its proper time is measured. This quantity is invariant under Lorentz transformation, so to check what an observer in a different reference frame sees, one simply multiplies the velocity four-vector by the Lorentz transformation matrix between the two reference frames.
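The definitions above lend themselves to a short numerical check. The following sketch (illustrative Python, not from any cited source; function names are our own) builds the four-velocity from a 3-velocity and verifies that its Minkowski norm equals the invariant c²:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz_factor(v):
    """gamma(v) = 1 / sqrt(1 - (v/c)^2), defined for speeds |v| < c."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def four_velocity(vx, vy, vz):
    """U = gamma(v) * (c, vx, vy, vz) for a 3-velocity given in m/s."""
    g = lorentz_factor(math.sqrt(vx * vx + vy * vy + vz * vz))
    return (g * C, g * vx, g * vy, g * vz)

def minkowski_norm_sq(u):
    """U.U with (+,-,-,-) signature; equals c^2 for every four-velocity."""
    t, x, y, z = u
    return t * t - x * x - y * y - z * z
```

For example, at v = 0.6c the Lorentz factor is exactly 1.25, and the norm of U remains c² regardless of the speed chosen.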
== Relativistic dynamics ==
=== Rest mass and relativistic mass ===
The mass of an object as measured in its own frame of reference is called its rest mass or invariant mass and is sometimes written {\displaystyle m_{0}}. If an object moves with velocity {\displaystyle \mathbf {v} } in some other reference frame, the quantity {\displaystyle m=\gamma (\mathbf {v} )m_{0}} is often called the object's "relativistic mass" in that frame.
Some authors use {\displaystyle m} to denote rest mass, but for the sake of clarity this article will follow the convention of using {\displaystyle m} for relativistic mass and {\displaystyle m_{0}} for rest mass.
Lev Okun has suggested that the concept of relativistic mass "has no rational justification today" and should no longer be taught.
Other physicists, including Wolfgang Rindler and T. R. Sandin, contend that the concept is useful.
See mass in special relativity for more information on this debate.
A particle whose rest mass is zero is called massless. Photons and gravitons are thought to be massless, and neutrinos are nearly so.
=== Relativistic energy and momentum ===
There are a couple of (equivalent) ways to define momentum and energy in SR. One method uses conservation laws. If these laws are to remain valid in SR they must be true in every possible reference frame. However, if one does some simple thought experiments using the Newtonian definitions of momentum and energy, one sees that these quantities are not conserved in SR. One can rescue the idea of conservation by making some small modifications to the definitions to account for relativistic velocities. It is these new definitions which are taken as the correct ones for momentum and energy in SR.
The four-momentum of an object is straightforward, identical in form to the classical momentum, but replacing 3-vectors with 4-vectors:
{\displaystyle {\boldsymbol {\mathbf {P} }}=m_{0}{\boldsymbol {\mathbf {U} }}=(E/c,\mathbf {p} )}
The energy and momentum of an object with invariant mass {\displaystyle m_{0}}, moving with velocity {\displaystyle \mathbf {v} } with respect to a given frame of reference, are respectively given by
{\displaystyle {\begin{aligned}E&=\gamma (\mathbf {v} )m_{0}c^{2}\\\mathbf {p} &=\gamma (\mathbf {v} )m_{0}\mathbf {v} \end{aligned}}}
The factor {\displaystyle \gamma } comes from the definition of the four-velocity described above. The appearance of {\displaystyle \gamma } may be stated in an alternative way, which will be explained in the next section.
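The two formulas translate directly into code. This minimal sketch (an illustrative Python example with our own function name, working in units where m0c² = 1) computes E and p and checks them against the exact values γ = 5/3, p = 4/3 at v = 0.8c:

```python
import math

def energy_momentum(m0, v, c=1.0):
    """Return (E, p) = (gamma*m0*c^2, gamma*m0*v); c defaults to 1."""
    g = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return g * m0 * c ** 2, g * m0 * v

# At v = 0.8c with m0 = 1: gamma = 5/3, so E = 5/3 and p = 4/3.
E, p = energy_momentum(1.0, 0.8)
```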
The kinetic energy, {\displaystyle K}, is defined as
{\displaystyle K=(\gamma -1)m_{0}c^{2}=E-m_{0}c^{2}\,,}
and the speed as a function of kinetic energy is given by
{\displaystyle v=c{\sqrt {1-\left({\frac {m_{0}c^{2}}{K+m_{0}c^{2}}}\right)^{2}}}={\frac {c{\sqrt {K(K+2m_{0}c^{2})}}}{K+m_{0}c^{2}}}={\frac {c{\sqrt {(E-m_{0}c^{2})(E+m_{0}c^{2})}}}{E}}={\frac {pc^{2}}{E}}\,.}
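The first of these equivalent forms is easy to evaluate numerically. A minimal sketch (illustrative Python, not a library function), using units where m0c² = 1: a kinetic energy of K = (γ − 1)m0c² = 0.25 m0c² corresponds to γ = 1.25 and hence v = 0.6c.

```python
import math

def speed_from_kinetic_energy(K, m0, c=1.0):
    """v = c * sqrt(1 - (m0 c^2 / (K + m0 c^2))^2)."""
    rest = m0 * c ** 2
    return c * math.sqrt(1.0 - (rest / (K + rest)) ** 2)
```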
The spatial momentum may be written as {\displaystyle \mathbf {p} =m\mathbf {v} }, preserving the form from Newtonian mechanics with relativistic mass substituted for Newtonian mass. However, this substitution fails for some quantities, including force and kinetic energy. Moreover, the relativistic mass is not invariant under Lorentz transformations, while the rest mass is. For this reason, many people prefer to use the rest mass and account for {\displaystyle \gamma } explicitly through the 4-velocity or coordinate time.
A simple relation between energy, momentum, and velocity may be obtained from the definitions of energy and momentum by multiplying the energy by {\displaystyle \mathbf {v} }, multiplying the momentum by {\displaystyle c^{2}}, and noting that the two expressions are equal. This yields
{\displaystyle \mathbf {p} c^{2}=E\mathbf {v} }
{\displaystyle \mathbf {v} } may then be eliminated by dividing this equation by {\displaystyle c} and squaring,
{\displaystyle (pc)^{2}=E^{2}(v/c)^{2}}
dividing the definition of energy by {\displaystyle \gamma } and squaring,
{\displaystyle E^{2}\left(1-(v/c)^{2}\right)=\left(m_{0}c^{2}\right)^{2}}
and substituting:
{\displaystyle E^{2}-(pc)^{2}=\left(m_{0}c^{2}\right)^{2}}
This is the relativistic energy–momentum relation.
While the energy {\displaystyle E} and the momentum {\displaystyle \mathbf {p} } depend on the frame of reference in which they are measured, the quantity {\displaystyle E^{2}-(pc)^{2}} is invariant. Its value is {\displaystyle -c^{2}} times the squared magnitude of the 4-momentum vector.
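The frame independence of E² − (pc)² can be demonstrated numerically. In this illustrative Python sketch (our own function names, units where m0c² = 1), the invariant evaluates to the same value, (m0c²)² = 1, at every speed:

```python
import math

def gamma(beta):
    """Lorentz factor as a function of beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta * beta)

def invariant(beta, m0=1.0, c=1.0):
    """E^2 - (pc)^2 for a particle of rest mass m0 moving at speed beta*c."""
    E = gamma(beta) * m0 * c ** 2
    pc = gamma(beta) * m0 * beta * c ** 2
    return E * E - pc * pc
```

Algebraically this is just γ²(1 − β²) = 1; the code confirms the cancellation holds for any β.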
The invariant mass of a system may be written as
{\displaystyle {m_{0}}_{\text{tot}}={\frac {\sqrt {E_{\text{tot}}^{2}-(p_{\text{tot}}c)^{2}}}{c^{2}}}}
Due to kinetic energy and binding energy, this quantity is different from the sum of the rest masses of the particles of which the system is composed. Rest mass is not a conserved quantity in special relativity, unlike the situation in Newtonian physics. However, even if an object is changing internally, so long as it does not exchange energy or momentum with its surroundings, its rest mass will not change and can be calculated with the same result in any reference frame.
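This formula sums four-momenta before taking the invariant, which is why the result can exceed the sum of the constituent rest masses. A minimal sketch (illustrative Python, units with c = 1): two back-to-back massless photons of unit energy form a system of invariant mass 2, even though each photon alone has invariant mass 0.

```python
import math

def system_invariant_mass(particles):
    """particles: iterable of (E, px, py, pz) four-momenta, in units with c = 1.
    Returns sqrt(E_tot^2 - |p_tot|^2), the invariant mass of the system."""
    E = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    return math.sqrt(E * E - (px * px + py * py + pz * pz))
```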
=== Mass–energy equivalence ===
The relativistic energy–momentum equation holds for all particles, even for massless particles for which m0 = 0. In this case:
{\displaystyle E=pc}
When substituted into Ev = c2p, this gives v = c: massless particles (such as photons) always travel at the speed of light.
Notice that the rest mass of a composite system will generally be slightly different from the sum of the rest masses of its parts since, in its rest frame, their kinetic energy will increase its mass and their (negative) binding energy will decrease its mass. In particular, a hypothetical "box of light" would have rest mass even though made of particles which do not since their momenta would cancel.
Looking at the above formula for invariant mass of a system, one sees that, when a single massive object is at rest (v = 0, p = 0), there is a non-zero mass remaining: m0 = E/c2.
The corresponding energy, which is also the total energy when a single particle is at rest, is referred to as "rest energy". In systems of particles which are seen from a moving inertial frame, total energy increases and so does momentum. However, for single particles the rest mass remains constant, and for systems of particles the invariant mass remains constant, because in both cases the increases in energy and momentum subtract from each other and cancel. Thus, the invariant mass of systems of particles is a calculated constant for all observers, as is the rest mass of single particles.
=== The mass of systems and conservation of invariant mass ===
For systems of particles, the energy–momentum equation requires summing the momentum vectors of the particles:
{\displaystyle E^{2}-\mathbf {p} \cdot \mathbf {p} c^{2}=m_{0}^{2}c^{4}}
The inertial frame in which the momenta of all particles sums to zero is called the center of momentum frame. In this special frame, the relativistic energy–momentum equation has p = 0, and thus gives the invariant mass of the system as merely the total energy of all parts of the system, divided by c2
{\displaystyle m_{0,\,{\rm {system}}}=\sum _{n}E_{n}/c^{2}}
This is the invariant mass of any system which is measured in a frame where it has zero total momentum, such as a bottle of hot gas on a scale. In such a system, the mass which the scale weighs is the invariant mass, and it depends on the total energy of the system. It is thus more than the sum of the rest masses of the molecules; it also includes all the totaled energies in the system. Like energy and momentum, the invariant mass of isolated systems cannot be changed so long as the system remains totally closed (no mass or energy allowed in or out), because the total relativistic energy of the system remains constant so long as nothing can enter or leave it.
Translating such a system to an inertial frame which is not its center of momentum frame causes an increase in energy and momentum without an increase in invariant mass. E = m0c2, however, applies only to isolated systems in their center-of-momentum frame, where momentum sums to zero.
Taking this formula at face value, we see that in relativity, mass is simply energy by another name (and measured in different units). In 1927 Einstein remarked about special relativity, "Under this theory mass is not an unalterable magnitude, but a magnitude dependent on (and, indeed, identical with) the amount of energy."
=== Closed (isolated) systems ===
In a "totally-closed" system (i.e., isolated system) the total energy, the total momentum, and hence the total invariant mass are conserved. Einstein's formula for change in mass translates to its simplest ΔE = Δmc2 form, however, only in non-closed systems in which energy is allowed to escape (for example, as heat and light), and thus invariant mass is reduced. Einstein's equation shows that such systems must lose mass, in accordance with the above formula, in proportion to the energy they lose to the surroundings. Conversely, if one can measure the differences in mass between a system before it undergoes a reaction which releases heat and light, and the system after the reaction when heat and light have escaped, one can estimate the amount of energy which escapes the system.
==== Chemical and nuclear reactions ====
In both nuclear and chemical reactions, such energy represents the difference in binding energies of electrons in atoms (in chemical reactions) or between nucleons in nuclei (in nuclear reactions). In both cases, the mass difference between reactants and (cooled) products measures the mass of heat and light which will escape the reaction, and thus (using the equation) gives the equivalent energy of heat and light which may be emitted if the reaction proceeds.
In chemistry, the mass differences associated with the emitted energy are around 10−9 of the molecular mass. However, in nuclear reactions the energies are so large that they are associated with mass differences, which can be estimated in advance, if the products and reactants have been weighed (atoms can be weighed indirectly by using atomic masses, which are always the same for each nuclide). Thus, Einstein's formula becomes important when one has measured the masses of different atomic nuclei. By looking at the difference in masses, one can predict which nuclei have stored energy that can be released by certain nuclear reactions, providing important information which was useful in the development of nuclear energy and, consequently, the nuclear bomb. Historically, for example, Lise Meitner was able to use the mass differences in nuclei to estimate that there was enough energy available to make nuclear fission a favorable process. The implications of this special form of Einstein's formula have thus made it one of the most famous equations in all of science.
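The arithmetic behind such a prediction is simple mass bookkeeping. The sketch below (illustrative Python; the particle masses are approximate textbook values in atomic mass units, and 931.494 MeV per u is a rounded conversion factor) estimates the binding energy of the deuteron from the mass defect of its proton and neutron:

```python
U_TO_MEV = 931.494  # one atomic mass unit expressed as energy, MeV (rounded)

def binding_energy_mev(constituent_mass_u, bound_mass_u):
    """Mass defect times c^2: E = (sum of parts - whole) * 931.494 MeV/u."""
    return (constituent_mass_u - bound_mass_u) * U_TO_MEV

# Deuteron built from one proton and one neutron (approximate masses, in u):
m_proton, m_neutron, m_deuteron = 1.007276, 1.008665, 2.013553
E_bind = binding_energy_mev(m_proton + m_neutron, m_deuteron)  # roughly 2.22 MeV
```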
==== Center of momentum frame ====
The equation E = m0c2 applies only to isolated systems in their center of momentum frame. It has been popularly misunderstood to mean that mass may be converted to energy, after which the mass disappears. However, popular explanations of the equation as applied to systems include open (non-isolated) systems for which heat and light are allowed to escape, when they otherwise would have contributed to the mass (invariant mass) of the system.
Historically, confusion about mass being "converted" to energy has been aided by confusion between mass and "matter", where matter is defined as fermion particles. In such a definition, electromagnetic radiation and kinetic energy (or heat) are not considered "matter". In some situations, matter may indeed be converted to non-matter forms of energy (see above), but in all these situations, the matter and non-matter forms of energy still retain their original mass.
For isolated systems (closed to all mass and energy exchange), mass never disappears in the center of momentum frame, because energy cannot disappear. Instead, this equation, in context, means only that when any energy is added to, or escapes from, a system in the center-of-momentum frame, the system will be measured as having gained or lost mass, in proportion to energy added or removed. Thus, in theory, if an atomic bomb were placed in a box strong enough to hold its blast, and detonated upon a scale, the mass of this closed system would not change, and the scale would not move. Only when a transparent "window" was opened in the super-strong plasma-filled box, and light and heat were allowed to escape in a beam, and the bomb components to cool, would the system lose the mass associated with the energy of the blast. In a 21 kiloton bomb, for example, about a gram of light and heat is created. If this heat and light were allowed to escape, the remains of the bomb would lose a gram of mass, as it cooled. In this thought-experiment, the light and heat carry away the gram of mass, and would therefore deposit this gram of mass in the objects that absorb them.
=== Angular momentum ===
In relativistic mechanics, the time-varying mass moment
{\displaystyle \mathbf {N} =m\left(\mathbf {x} -t\mathbf {v} \right)}
and orbital 3-angular momentum
{\displaystyle \mathbf {L} =\mathbf {x} \times \mathbf {p} }
of a point-like particle are combined into a four-dimensional bivector in terms of the 4-position X and the 4-momentum P of the particle:
{\displaystyle \mathbf {M} =\mathbf {X} \wedge \mathbf {P} }
where ∧ denotes the exterior product. This tensor is additive: the total angular momentum of a system is the sum of the angular momentum tensors for each constituent of the system. So, for an assembly of discrete particles one sums the angular momentum tensors over the particles, or integrates the density of angular momentum over the extent of a continuous mass distribution.
Each of the six components forms a conserved quantity when aggregated with the corresponding components for other objects and fields.
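In components, the exterior product above is just the antisymmetrized outer product M^{ab} = X^a P^b − X^b P^a, and additivity means summing the resulting 4×4 arrays. A minimal sketch (illustrative Python, index 0 being the time component):

```python
def angular_momentum_tensor(X, P):
    """M^{ab} = X^a P^b - X^b P^a for four-vectors X and P (antisymmetric)."""
    return [[X[a] * P[b] - X[b] * P[a] for b in range(4)] for a in range(4)]

def tensor_sum(M1, M2):
    """Additivity: the total angular momentum is the component-wise sum."""
    return [[M1[a][b] + M2[a][b] for b in range(4)] for a in range(4)]
```

The purely spatial components reproduce the conventional angular momentum; for example M[1][2] is x·py − y·px, the z-component of L.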
=== Force ===
In special relativity, Newton's second law does not hold in the form F = ma, but it does if it is expressed as
{\displaystyle \mathbf {F} ={\frac {d\mathbf {p} }{dt}}}
where p = γ(v)m0v is the momentum as defined above and m0 is the invariant mass. Thus, the force is given by
{\displaystyle \mathbf {F} =\gamma ^{3}m_{0}\,\mathbf {a} _{\parallel }+\gamma m_{0}\,\mathbf {a} _{\perp }\ \mathrm {where} \ \gamma =\gamma (\mathbf {v} )}
Consequently, in some old texts, γ(v)3m0 is referred to as the longitudinal mass, and γ(v)m0 is referred to as the transverse mass, which is numerically the same as the relativistic mass. See mass in special relativity.
If one inverts this to calculate acceleration from force, one gets
{\displaystyle \mathbf {a} ={\frac {1}{m_{0}\gamma (\mathbf {v} )}}\left(\mathbf {F} -{\frac {(\mathbf {v} \cdot \mathbf {F} )\mathbf {v} }{c^{2}}}\right)\,.}
The force described in this section is the classical 3-D force which is not a four-vector. This 3-D force is the appropriate concept of force since it is the force which obeys Newton's third law of motion. It should not be confused with the so-called four-force which is merely the 3-D force in the comoving frame of the object transformed as if it were a four-vector. However, the density of 3-D force (linear momentum transferred per unit four-volume) is a four-vector (density of weight +1) when combined with the negative of the density of power transferred.
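The inverted relation above can be checked against the longitudinal and transverse mass picture. In this illustrative Python sketch (our own function name, units with c = 1), a force perpendicular to v at v = 0.6c gives a = F/(γm0) = 0.8, while a parallel force gives a = F/(γ³m0) = 0.512, as the two-mass decomposition predicts:

```python
import math

def acceleration_from_force(F, v, m0, c=1.0):
    """a = (1/(m0*gamma)) * (F - (v.F) v / c^2); F and v are 3-tuples."""
    g = 1.0 / math.sqrt(1.0 - sum(vi * vi for vi in v) / c ** 2)
    vdotF = sum(vi * Fi for vi, Fi in zip(v, F))
    return tuple((Fi - vdotF * vi / c ** 2) / (m0 * g)
                 for Fi, vi in zip(F, v))
```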
=== Torque ===
The torque acting on a point-like particle is defined as the derivative of the angular momentum tensor given above with respect to proper time:
{\displaystyle {\boldsymbol {\Gamma }}={\frac {d\mathbf {M} }{d\tau }}=\mathbf {X} \wedge \mathbf {F} }
or in tensor components:
{\displaystyle \Gamma _{\alpha \beta }=X_{\alpha }F_{\beta }-X_{\beta }F_{\alpha }}
where F is the 4d force acting on the particle at the event X. As with angular momentum, torque is additive, so for an extended object one sums or integrates over the distribution of mass.
=== Kinetic energy ===
The work-energy theorem says the change in kinetic energy is equal to the work done on the body. In special relativity:
{\displaystyle {\begin{aligned}\Delta K=W=[\gamma _{1}-\gamma _{0}]m_{0}c^{2}.\end{aligned}}}
If in the initial state the body was at rest, so v0 = 0 and γ0(v0) = 1, and in the final state it has speed v1 = v, setting γ1(v1) = γ(v), the kinetic energy is then:
{\displaystyle K=[\gamma (v)-1]m_{0}c^{2}\,,}
a result that can be directly obtained by subtracting the rest energy m0c2 from the total relativistic energy γ(v)m0c2.
=== Newtonian limit ===
The Lorentz factor γ(v) can be expanded into a Taylor series or binomial series for (v/c)2 < 1, obtaining:
{\displaystyle \gamma ={\dfrac {1}{\sqrt {1-(v/c)^{2}}}}=\sum _{n=0}^{\infty }\left({\dfrac {v}{c}}\right)^{2n}\prod _{k=1}^{n}\left({\dfrac {2k-1}{2k}}\right)=1+{\dfrac {1}{2}}\left({\dfrac {v}{c}}\right)^{2}+{\dfrac {3}{8}}\left({\dfrac {v}{c}}\right)^{4}+{\dfrac {5}{16}}\left({\dfrac {v}{c}}\right)^{6}+\cdots }
and consequently
{\displaystyle E-m_{0}c^{2}={\frac {1}{2}}m_{0}v^{2}+{\frac {3}{8}}{\frac {m_{0}v^{4}}{c^{2}}}+{\frac {5}{16}}{\frac {m_{0}v^{6}}{c^{4}}}+\cdots ;}
{\displaystyle \mathbf {p} =m_{0}\mathbf {v} +{\frac {1}{2}}{\frac {m_{0}v^{2}\mathbf {v} }{c^{2}}}+{\frac {3}{8}}{\frac {m_{0}v^{4}\mathbf {v} }{c^{4}}}+{\frac {5}{16}}{\frac {m_{0}v^{6}\mathbf {v} }{c^{6}}}+\cdots .}
For velocities much smaller than that of light, one can neglect the terms with c2 and higher in the denominator. These formulas then reduce to the standard definitions of Newtonian kinetic energy and momentum. This is as it should be, for special relativity must agree with Newtonian mechanics at low velocities.
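The quality of the low-velocity approximation is easy to quantify. This illustrative Python sketch (our own function names) compares the exact Lorentz factor with partial sums of the binomial series; at β = v/c = 0.1, four terms already agree with the exact value to better than one part in 10⁷:

```python
import math

def gamma_exact(beta):
    """Exact Lorentz factor, beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta * beta)

def gamma_series(beta, terms=4):
    """Partial sum of the binomial series:
    sum over n of beta^(2n) * prod_{k=1..n} (2k-1)/(2k)."""
    total = 0.0
    for n in range(terms):
        coeff = 1.0
        for k in range(1, n + 1):
            coeff *= (2 * k - 1) / (2 * k)
        total += coeff * beta ** (2 * n)
    return total
```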
== See also ==
== References ==
== Further reading ==
General scope and special/general relativity
P.M. Whelan; M.J. Hodgeson (1978). Essential Principles of Physics (2nd ed.). John Murray. ISBN 0-7195-3382-1.
G. Woan (2010). The Cambridge Handbook of Physics Formulas. Cambridge University Press. ISBN 978-0-521-57507-2.
P.A. Tipler; G. Mosca (2008). Physics for Scientists and Engineers: With Modern Physics (6th ed.). W.H. Freeman and Co. ISBN 978-1-4292-0265-7.
R.G. Lerner; G.L. Trigg (2005). Encyclopaedia of Physics (2nd ed.). VHC Publishers, Hans Warlimont, Springer. ISBN 978-0-07-025734-4.
Concepts of Modern Physics (4th Edition), A. Beiser, Physics, McGraw-Hill (International), 1987, ISBN 0-07-100144-1
C.B. Parker (1994). McGraw Hill Encyclopaedia of Physics (2nd ed.). McGraw Hill. ISBN 0-07-051400-3.
T. Frankel (2012). The Geometry of Physics (3rd ed.). Cambridge University Press. ISBN 978-1-107-60260-1.
L.H. Greenberg (1978). Physics with Modern Applications. Holt-Saunders International W.B. Saunders and Co. ISBN 0-7216-4247-0.
A. Halpern (1988). 3000 Solved Problems in Physics, Schaum Series. Mc Graw Hill. ISBN 978-0-07-025734-4.
Electromagnetism and special relativity
G.A.G. Bennet (1974). Electricity and Modern Physics (2nd ed.). Edward Arnold (UK). ISBN 0-7131-2459-8.
I.S. Grant; W.R. Phillips; Manchester Physics (2008). Electromagnetism (2nd ed.). John Wiley & Sons. ISBN 978-0-471-92712-9.
D.J. Griffiths (2007). Introduction to Electrodynamics (3rd ed.). Pearson Education, Dorling Kindersley. ISBN 978-81-7758-293-2.
Classical mechanics and special relativity
J.R. Forshaw; A.G. Smith (2009). Dynamics and Relativity. Wiley. ISBN 978-0-470-01460-8.
D. Kleppner; R.J. Kolenkow (2010). An Introduction to Mechanics. Cambridge University Press. ISBN 978-0-521-19821-9.
L.N. Hand; J.D. Finch (2008). Analytical Mechanics. Cambridge University Press. ISBN 978-0-521-57572-0.
P.J. O'Donnell (2015). Essential Dynamics and Relativity. CRC Press. ISBN 978-1-4665-8839-4.
General relativity
D. McMahon (2006). Relativity DeMystified. Mc Graw Hill. ISBN 0-07-145545-0.
J.A. Wheeler; C. Misner; K.S. Thorne (1973). Gravitation. W.H. Freeman & Co. ISBN 0-7167-0344-0.
J.A. Wheeler; I. Ciufolini (1995). Gravitation and Inertia. Princeton University Press. ISBN 978-0-691-03323-5.
R.J.A. Lambourne (2010). Relativity, Gravitation, and Cosmology. Cambridge University Press. ISBN 978-0-521-13138-4.
The Applied Mechanics Division (AMD) is a division in the American Society of Mechanical Engineers (ASME). The AMD was founded in 1927, with Stephen Timoshenko being the first chair. The current AMD membership is over 5000, out of about 90,000 members of the ASME. AMD is the largest of the six divisions in the ASME Basic Engineering Technical Group.
== Mission ==
The mission of the Applied Mechanics Division is to foster fundamental research in, and intelligent application of, applied mechanics.
== Summer Meeting ==
The Division participates annually in a Summer Meeting by programming Symposia and committee meetings. The principal organisers of the Summer Meetings rotate among several organizations, with a period of four years, as described below.
Year 4n (2020, 2024, etc.): International Union of Theoretical and Applied Mechanics (IUTAM).
Year 4n + 1 (2017, 2021, etc.): Materials Division of the ASME (joined with the Applied Mechanics Division of ASME, Engineering Mechanics of the American Society of Civil Engineers, and Society of Engineering Sciences).
Year 4n + 2 (2018, 2022, etc.): National Committee of Theoretical and Applied Mechanics.
Year 4n + 3 (2019, 2023, etc.): Applied Mechanics Division of the ASME (joined with Materials Division of ASME).
== Publications ==
Newsletters of the Applied Mechanics Division
Journal of Applied Mechanics
Applied Mechanics Reviews
== Awards ==
Timoshenko Medal
Koiter Medal
Drucker Medal
Thomas K. Caughey Dynamics Award
Ted Belytschko Applied Mechanics Award
Thomas J.R. Hughes Young Investigator Award
Journal of Applied Mechanics Award
These awards are conferred every year at the Applied Mechanics Division Banquet held during the annual ASME (IMECE) conference. Awards other than those mentioned above are also celebrated during this banquet, such as the Haythornthwaite Research Initiation Grant Award and the Eshelby Mechanics Award for Young Faculty.
== Executive committee ==
The responsibility for guiding the Division, within the framework of the ASME, is vested in an executive committee of five members. The executive committee meets twice a year at the Summer Meeting and Winter Annual Meeting. Members correspond throughout the year by emails and conference calls. Three members shall constitute a quorum, and all action items must be approved by a majority of the committee.
Each member serves a term of five years, beginning and ending at the conclusion of the Summer Meeting, spending one year in each of the following positions:
Secretary
Vice-chair of the Program Committee
Chair of the Program Committee
Vice-chair of the Division
Chair of the Division
New members of the executive committee are sought from the entire membership of the Division. Due considerations are given to leadership, technical accomplishment, as well as diversity in geographic locations, sub-disciplines, and genders. At the Winter Annual Meeting each year, the executive committee nominates one new member, who is subsequently appointed by the ASME Council.
The executive committee has an additional non-rotating position, the Recording Secretary. The responsibility of the Recording Secretary is to attend and record minutes for the Executive Committee Meeting at the Summer and Winter Annual Meeting and the General Committee Meeting at the Winter Annual Meeting. The Recording Secretary serves a term of two years and is selected from the junior members (i.e. young investigators) of the AMD.
Current members of the Executive Committee
Yashashree Kulkarni, University of Houston, Houston, TX, United States: Secretary
Samantha Daly, University of California at Santa Barbara, Santa Barbara, CA, United States: Vice-chair of the Program Committee
Narayana Aluru, University of Texas at Austin, Austin, TX, United States: Chair of the Program Committee
Glaucio Paulino, Princeton University, Princeton NJ, United States: Vice-chair of the Division
Marco Amabili, McGill University, Montreal, Canada: Chair of the Division
== Technical Committees ==
The mission of a Technical Committee is to promote a field in Applied Mechanics. The principal approach for a Technical Committee to accomplish this mission is to organize symposia at the Summer and Winter Meetings. Technical Committees generally meet at the Winter Annual Meeting and the Summer Meeting; they may also schedule special meetings.
There are 17 Technical Committees in the Applied Mechanics Division.
Technical Committees are established and dissolved by the executive committee.
== Financial ==
== History ==
See Naghdi's "A Brief History of the Applied Mechanics Division of ASME" for details of the history from 1927 to 1977.
== Past chairs of the Applied Mechanics Division ==
Taher Saif (2023),
Pradeep Guduru (2022),
Yuri Bazilevs (2021),
Yonggang Huang (2020),
Balakumar Balachandran (2019),
Pradeep Sharma (2018),
Arun Shukla (2017),
Peter Wriggers (2016),
Huajian Gao (2015),
Lawrence A. Bergman (2014),
Ken Liechti (2013),
Ares Rosakis (2012),
Tayfun Tezduyar (2011),
Zhigang Suo (2010),
Dan Inman (2009),
K. Ravi-Chandar (2008),
Thomas N. Farris (2007),
Wing Kam Liu (2006),
Mary C. Boyce (2005),
Pol Spanos (2004),
Stelios Kyriakides (2003),
Dusan Krajcinovic (2002),
Thomas J.R. Hughes (2001),
Alan Needleman (2000),
Lallit Anand (1999),
Stanley A. Berger (1998),
Carl T. Herakovich (1997),
Thomas A. Cruse (1996),
John W. Hutchinson (1995),
L.B. Freund (1994),
David B. Bogy (1993),
William S. Saric (1992),
Ted Belytschko (1991),
Michael J. Forrestal (1990),
Sidney Leibovich (1989),
Thomas L. Geers (1988),
James R. Rice (1987),
Michael M. Carroll (1986),
Jan D. Achenbach (1985),
Charles R. Steele (1984),
William G. Gottenberg (1983),
R.C. DiPrima (1982),
R.M. Christensen (1981),
R.S. Rivlin (1980),
Richard Skalak (1979),
F. Essenburg (1978),
Yuan-Cheng Fung (1977),
J. Miklowitz (1976),
B.A. Boley (1975),
George Herrmann (1974),
J. Kestin (1973),
Paul M. Naghdi (1972),
S. Levy (1971),
H.N. Abramson (1970),
Stephen H. Crandall (1969),
P.G. Hodge Jr. (1968),
R. Plunkett (1967),
M.V. Barton (1966),
George F. Carrier (1965),
Daniel C. Drucker (1964),
E. Reissner (1963),
A.M. Wahl (1961, 1962),
S.B. Batdorf (1960),
William Prager (1959),
W. Ramberg (1958),
M. Hetenyi (1957),
Raymond D. Mindlin (1956),
Nicholas J. Hoff (1955),
N.M. Newmark (1954),
D. Young (1953),
R.E. Peterson (1952),
L.H. Donnell (1951),
R.P. Kroon (1950),
M. Golan (1949),
W.M. Murray (1948),
H.W. Emmons (1947),
H. Poritsky (1946),
J.N. Goodier (1945),
J.H. Keenan (1943, 1944),
H.L. Dryden (1942),
J.P. Den Hartog (1940, 1941),
C.R. Soderberg (1937, 1938),
E.O. Waters (1936),
J.A. Goff (1935),
F.M. Lewis (1934),
J.M. Lessells (1933),
G.B. Pegram (1932),
A.L. Kimball (1931),
G.M. Eaton (1928, 1929),
Stephen P. Timoshenko (1927, 1930)
== Relevant websites ==
Homepage of Applied Mechanics Division
iMechanica.org, a web of mechanics and mechanicians.
== References ==
P.M. Naghdi, A brief history of the Applied Mechanics Division of ASME. Journal of Applied Mechanics 46, 723–794.
Bylaws of Applied Mechanics Division | Wikipedia/Applied_Mechanics_Division |
Non-autonomous mechanics describe non-relativistic mechanical systems subject to time-dependent transformations. In particular, this is the case of mechanical systems whose Lagrangians and Hamiltonians depend on the time. The configuration space of non-autonomous mechanics is a fiber bundle {\displaystyle Q\to \mathbb {R} } over the time axis {\displaystyle \mathbb {R} } coordinated by {\displaystyle (t,q^{i})}.
This bundle is trivial, but its different trivializations {\displaystyle Q=\mathbb {R} \times M} correspond to the choice of different non-relativistic reference frames. Such a reference frame also is represented by a connection {\displaystyle \Gamma } on {\displaystyle Q\to \mathbb {R} } which takes the form {\displaystyle \Gamma ^{i}=0} with respect to this trivialization. The corresponding covariant differential {\displaystyle (q_{t}^{i}-\Gamma ^{i})\partial _{i}} determines the relative velocity with respect to a reference frame {\displaystyle \Gamma }.
As a consequence, non-autonomous mechanics (in particular, non-autonomous Hamiltonian mechanics) can be formulated as a covariant classical field theory (in particular covariant Hamiltonian field theory) on {\displaystyle X=\mathbb {R} }. Accordingly, the velocity phase space of non-autonomous mechanics is the jet manifold {\displaystyle J^{1}Q} of {\displaystyle Q\to \mathbb {R} } provided with the coordinates {\displaystyle (t,q^{i},q_{t}^{i})}. Its momentum phase space is the vertical cotangent bundle {\displaystyle V^{*}Q} of {\displaystyle Q\to \mathbb {R} } coordinated by {\displaystyle (t,q^{i},p_{i})} and endowed with the canonical Poisson structure. The dynamics of Hamiltonian non-autonomous mechanics is defined by a Hamiltonian form {\displaystyle p_{i}dq^{i}-H(t,q^{i},p_{i})dt}.
One can associate to any Hamiltonian non-autonomous system an equivalent Hamiltonian autonomous system on the cotangent bundle {\displaystyle T^{*}Q} of {\displaystyle Q} coordinated by {\displaystyle (t,q^{i},p,p_{i})} and provided with the canonical symplectic form; its Hamiltonian is {\displaystyle p-H}.
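This equivalence can be checked numerically. The sketch below integrates an assumed driven harmonic oscillator H(t, q, p) = p²/2 + q²/2 + q·cos t, once directly in physical time and once as an autonomous flow on the extended phase space (t, q, p_t, p); the oscillator, the initial data, and the step size are all illustrative choices, and the extended Hamiltonian is written in the equivalent constraint form K = p_t + H = 0 rather than the article's p − H (the two conventions generate the same motion).

```python
import math

# Illustrative non-autonomous Hamiltonian (not from the article):
#   H(t, q, p) = p**2/2 + q**2/2 + q*cos(t)

def direct(q0, p0, h, n):
    """Integrate the non-autonomous Hamilton equations in physical time t."""
    t, q, p = 0.0, q0, p0
    for _ in range(n):
        dq = p                    # dq/dt =  dH/dp
        dp = -q - math.cos(t)     # dp/dt = -dH/dq
        q, p, t = q + h * dq, p + h * dp, t + h
    return q, p

def extended(q0, p0, h, n):
    """Integrate the equivalent autonomous system on the extended phase
    space (t, q, p_t, p), evolution parameter s, Hamiltonian K = p_t + H;
    time is just another coordinate with dt/ds = 1."""
    t, q, p = 0.0, q0, p0
    pt = -(p0**2 / 2 + q0**2 / 2 + q0 * math.cos(0.0))  # K = 0 initially
    for _ in range(n):
        dt = 1.0                  # dt/ds   =  dK/dp_t
        dq = p                    # dq/ds   =  dK/dp
        dpt = q * math.sin(t)     # dp_t/ds = -dH/dt
        dp = -q - math.cos(t)     # dp/ds   = -dH/dq
        t, q, pt, p = t + h * dt, q + h * dq, pt + h * dpt, p + h * dp
    return q, p

q_d, p_d = direct(1.0, 0.0, 1e-3, 5000)
q_e, p_e = extended(1.0, 0.0, 1e-3, 5000)
# the extended autonomous flow reproduces the time-dependent trajectory
```

With the same stepper, the two integrations agree step by step, since the extended system is the original one with t promoted to a coordinate.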
== See also ==
Analytical mechanics
Non-autonomous system (mathematics)
Hamiltonian mechanics
Symplectic manifold
Covariant Hamiltonian field theory
Free motion equation
Relativistic system (mathematics)
== References ==
De Leon, M., Rodrigues, P., Methods of Differential Geometry in Analytical Mechanics (North Holland, 1989).
Echeverria Enriquez, A., Munoz Lecanda, M., Roman Roy, N., Geometrical setting of time-dependent regular systems. Alternative models, Rev. Math. Phys. 3 (1991) 301.
Carinena, J., Fernandez-Nunez, J., Geometric theory of time-dependent singular Lagrangians, Fortschr. Phys., 41 (1993) 517.
Mangiarotti, L., Sardanashvily, G., Gauge Mechanics (World Scientific, 1998) ISBN 981-02-3603-4.
Giachetta, G., Mangiarotti, L., Sardanashvily, G., Geometric Formulation of Classical and Quantum Mechanics (World Scientific, 2010) ISBN 981-4313-72-6 (arXiv:0911.0411 ). | Wikipedia/Non-autonomous_mechanics |
In physics, statistical mechanics is a mathematical framework that applies statistical methods and probability theory to large assemblies of microscopic entities. Sometimes called statistical physics or statistical thermodynamics, its applications include many problems in a wide variety of fields such as biology, neuroscience, computer science, information theory and sociology. Its main purpose is to clarify the properties of matter in aggregate, in terms of physical laws governing atomic motion.
Statistical mechanics arose out of the development of classical thermodynamics, a field for which it was successful in explaining macroscopic physical properties—such as temperature, pressure, and heat capacity—in terms of microscopic parameters that fluctuate about average values and are characterized by probability distributions.
While classical thermodynamics is primarily concerned with thermodynamic equilibrium, statistical mechanics has been applied in non-equilibrium statistical mechanics to the issues of microscopically modeling the speed of irreversible processes that are driven by imbalances. Examples of such processes include chemical reactions and flows of particles and heat. The fluctuation–dissipation theorem is the basic knowledge obtained from applying non-equilibrium statistical mechanics to study the simplest non-equilibrium situation of a steady state current flow in a system of many particles.
== History ==
In 1738, Swiss physicist and mathematician Daniel Bernoulli published Hydrodynamica which laid the basis for the kinetic theory of gases. In this work, Bernoulli posited the argument, still used to this day, that gases consist of great numbers of molecules moving in all directions, that their impact on a surface causes the gas pressure that we feel, and that what we experience as heat is simply the kinetic energy of their motion.
The founding of the field of statistical mechanics is generally credited to three physicists:
Ludwig Boltzmann, who developed the fundamental interpretation of entropy in terms of a collection of microstates
James Clerk Maxwell, who developed models of probability distribution of such states
Josiah Willard Gibbs, who coined the name of the field in 1884
In 1859, after reading a paper on the diffusion of molecules by Rudolf Clausius, Scottish physicist James Clerk Maxwell formulated the Maxwell distribution of molecular velocities, which gave the proportion of molecules having a certain velocity in a specific range. This was the first-ever statistical law in physics. Maxwell also gave the first mechanical argument that molecular collisions entail an equalization of temperatures and hence a tendency towards equilibrium. Five years later, in 1864, Ludwig Boltzmann, a young student in Vienna, came across Maxwell's paper and spent much of his life developing the subject further.
Statistical mechanics was initiated in the 1870s with the work of Boltzmann, much of which was collectively published in his 1896 Lectures on Gas Theory. Boltzmann's original papers on the statistical interpretation of thermodynamics, the H-theorem, transport theory, thermal equilibrium, the equation of state of gases, and similar subjects, occupy about 2,000 pages in the proceedings of the Vienna Academy and other societies. Boltzmann introduced the concept of an equilibrium statistical ensemble and also investigated for the first time non-equilibrium statistical mechanics, with his H-theorem.
The term "statistical mechanics" was coined by the American mathematical physicist J. Willard Gibbs in 1884. According to Gibbs, the term "statistical", in the context of mechanics, i.e. statistical mechanics, was first used by the Scottish physicist James Clerk Maxwell in 1871:
"In dealing with masses of matter, while we do not perceive the individual molecules, we are compelled to adopt what I have described as the statistical method of calculation, and to abandon the strict dynamical method, in which we follow every motion by the calculus."
"Probabilistic mechanics" might today seem a more appropriate term, but "statistical mechanics" is firmly entrenched. Shortly before his death, Gibbs published in 1902 Elementary Principles in Statistical Mechanics, a book which formalized statistical mechanics as a fully general approach to address all mechanical systems—macroscopic or microscopic, gaseous or non-gaseous. Gibbs' methods were initially derived in the framework of classical mechanics; however, they were of such generality that they were found to adapt easily to the later quantum mechanics, and they still form the foundation of statistical mechanics to this day.
== Principles: mechanics and ensembles ==
In physics, two types of mechanics are usually examined: classical mechanics and quantum mechanics. For both types of mechanics, the standard mathematical approach is to consider two concepts:
The complete state of the mechanical system at a given time, mathematically encoded as a phase point (classical mechanics) or a pure quantum state vector (quantum mechanics).
An equation of motion which carries the state forward in time: Hamilton's equations (classical mechanics) or the Schrödinger equation (quantum mechanics)
Using these two concepts, the state at any other time, past or future, can in principle be calculated.
There is however a disconnect between these laws and everyday life experiences, as we do not find it necessary (nor even theoretically possible) to know exactly at a microscopic level the simultaneous positions and velocities of each molecule while carrying out processes at the human scale (for example, when performing a chemical reaction). Statistical mechanics fills this disconnection between the laws of mechanics and the practical experience of incomplete knowledge, by adding some uncertainty about which state the system is in.
Whereas ordinary mechanics only considers the behaviour of a single state, statistical mechanics introduces the statistical ensemble, which is a large collection of virtual, independent copies of the system in various states. The statistical ensemble is a probability distribution over all possible states of the system. In classical statistical mechanics, the ensemble is a probability distribution over phase points (as opposed to a single phase point in ordinary mechanics), usually represented as a distribution in a phase space with canonical coordinate axes. In quantum statistical mechanics, the ensemble is a probability distribution over pure states and can be compactly summarized as a density matrix.
As is usual for probabilities, the ensemble can be interpreted in different ways:
an ensemble can be taken to represent the various possible states that a single system could be in (epistemic probability, a form of knowledge), or
the members of the ensemble can be understood as the states of the systems in experiments repeated on independent systems which have been prepared in a similar but imperfectly controlled manner (empirical probability), in the limit of an infinite number of trials.
These two meanings are equivalent for many purposes, and will be used interchangeably in this article.
However the probability is interpreted, each state in the ensemble evolves over time according to the equation of motion. Thus, the ensemble itself (the probability distribution over states) also evolves, as the virtual systems in the ensemble continually leave one state and enter another. The ensemble evolution is given by the Liouville equation (classical mechanics) or the von Neumann equation (quantum mechanics). These equations are simply derived by the application of the mechanical equation of motion separately to each virtual system contained in the ensemble, with the probability of the virtual system being conserved over time as it evolves from state to state.
One special class of ensemble is those ensembles that do not evolve over time. These ensembles are known as equilibrium ensembles and their condition is known as statistical equilibrium. Statistical equilibrium occurs if, for each state in the ensemble, the ensemble also contains all of its future and past states with probabilities equal to the probability of being in that state. (By contrast, mechanical equilibrium is a state with a balance of forces that has ceased to evolve.) The study of equilibrium ensembles of isolated systems is the focus of statistical thermodynamics. Non-equilibrium statistical mechanics addresses the more general case of ensembles that change over time, and/or ensembles of non-isolated systems.
== Statistical thermodynamics ==
The primary goal of statistical thermodynamics (also known as equilibrium statistical mechanics) is to derive the classical thermodynamics of materials in terms of the properties of their constituent particles and the interactions between them. In other words, statistical thermodynamics provides a connection between the macroscopic properties of materials in thermodynamic equilibrium, and the microscopic behaviours and motions occurring inside the material.
Whereas statistical mechanics proper involves dynamics, here the attention is focused on statistical equilibrium (steady state). Statistical equilibrium does not mean that the particles have stopped moving (mechanical equilibrium), rather, only that the ensemble is not evolving.
=== Fundamental postulate ===
A sufficient (but not necessary) condition for statistical equilibrium with an isolated system is that the probability distribution is a function only of conserved properties (total energy, total particle numbers, etc.).
There are many different equilibrium ensembles that can be considered, and only some of them correspond to thermodynamics. Additional postulates are necessary to motivate why the ensemble for a given system should have one form or another.
A common approach found in many textbooks is to take the equal a priori probability postulate. This postulate states that
For an isolated system with an exactly known energy and exactly known composition, the system can be found with equal probability in any microstate consistent with that knowledge.
The equal a priori probability postulate therefore provides a motivation for the microcanonical ensemble described below. There are various arguments in favour of the equal a priori probability postulate:
Ergodic hypothesis: An ergodic system is one that evolves over time to explore "all accessible" states: all those with the same energy and composition. In an ergodic system, the microcanonical ensemble is the only possible equilibrium ensemble with fixed energy. This approach has limited applicability, since most systems are not ergodic.
Principle of indifference: In the absence of any further information, we can only assign equal probabilities to each compatible situation.
Maximum information entropy: A more elaborate version of the principle of indifference states that the correct ensemble is the ensemble that is compatible with the known information and that has the largest Gibbs entropy (information entropy).
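The equal a priori probability postulate can be made concrete with a toy model. The sketch below (an illustrative assumption, not from the source) takes N two-state spins in which each "up" spin carries one unit of energy, enumerates the microstates compatible with a fixed total energy, and assigns each the same probability; the Boltzmann entropy is then S/k_B = ln Ω.

```python
import itertools
import math

def microcanonical(n_spins, energy):
    """Enumerate microstates of n two-state spins whose 'up' spins each
    carry one unit of energy; every compatible microstate gets equal
    probability, per the equal a priori probability postulate."""
    states = [s for s in itertools.product((0, 1), repeat=n_spins)
              if sum(s) == energy]
    omega = len(states)                 # number of accessible microstates
    return omega, 1.0 / omega, math.log(omega)   # Omega, p, S/k_B

omega, prob, entropy = microcanonical(4, 2)
# 4 choose 2 = 6 microstates, each with probability 1/6; S/k_B = ln 6
```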
Other fundamental postulates for statistical mechanics have also been proposed. For example, recent studies show that the theory of statistical mechanics can be built without the equal a priori probability postulate. One such formalism is based on the fundamental thermodynamic relation together with the following set of postulates:
where the third postulate can be replaced by the following:
=== Three thermodynamic ensembles ===
There are three equilibrium ensembles with a simple form that can be defined for any isolated system bounded inside a finite volume. These are the most often discussed ensembles in statistical thermodynamics. In the macroscopic limit (defined below) they all correspond to classical thermodynamics.
Microcanonical ensemble
describes a system with a precisely given energy and fixed composition (precise number of particles). The microcanonical ensemble contains with equal probability each possible state that is consistent with that energy and composition.
Canonical ensemble
describes a system of fixed composition that is in thermal equilibrium with a heat bath of a precise temperature. The canonical ensemble contains states of varying energy but identical composition; the different states in the ensemble are accorded different probabilities depending on their total energy.
Grand canonical ensemble
describes a system with non-fixed composition (uncertain particle numbers) that is in thermal and chemical equilibrium with a thermodynamic reservoir. The reservoir has a precise temperature, and precise chemical potentials for various types of particle. The grand canonical ensemble contains states of varying energy and varying numbers of particles; the different states in the ensemble are accorded different probabilities depending on their total energy and total particle numbers.
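For the canonical ensemble, the probability of a state with energy E_i is the Boltzmann weight e^{-E_i/kT} divided by the partition function Z. A minimal sketch (the two-level spectrum and temperature are illustrative assumptions):

```python
import math

def canonical_probabilities(energies, kT):
    """Boltzmann weights exp(-E_i/kT), normalised by the partition
    function Z, give the canonical-ensemble state probabilities."""
    weights = [math.exp(-E / kT) for E in energies]
    Z = sum(weights)
    return [w / Z for w in weights]

levels = [0.0, 1.0]                    # illustrative two-level system
probs = canonical_probabilities(levels, kT=1.0)
mean_energy = sum(p * E for p, E in zip(probs, levels))
# mean energy = e^{-1} / (1 + e^{-1})
```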
For systems containing many particles (the thermodynamic limit), all three of the ensembles listed above tend to give identical behaviour. It is then simply a matter of mathematical convenience which ensemble is used. The Gibbs theorem about equivalence of ensembles was developed into the theory of the concentration of measure phenomenon, which has applications in many areas of science, from functional analysis to methods of artificial intelligence and big data technology.
Important cases where the thermodynamic ensembles do not give identical results include:
Microscopic systems.
Large systems at a phase transition.
Large systems with long-range interactions.
In these cases the correct thermodynamic ensemble must be chosen as there are observable differences between these ensembles not just in the size of fluctuations, but also in average quantities such as the distribution of particles. The correct ensemble is that which corresponds to the way the system has been prepared and characterized—in other words, the ensemble that reflects the knowledge about that system.
=== Calculation methods ===
Once the characteristic state function for an ensemble has been calculated for a given system, that system is 'solved' (macroscopic observables can be extracted from the characteristic state function). Calculating the characteristic state function of a thermodynamic ensemble is not necessarily a simple task, however, since it involves considering every possible state of the system. While some hypothetical systems have been exactly solved, the most general (and realistic) case is too complex for an exact solution. Various approaches exist to approximate the true ensemble and allow calculation of average quantities.
==== Exact ====
There are some cases which allow exact solutions.
For very small microscopic systems, the ensembles can be directly computed by simply enumerating over all possible states of the system (using exact diagonalization in quantum mechanics, or integral over all phase space in classical mechanics).
Some large systems consist of many separable microscopic systems, and each of the subsystems can be analysed independently. Notably, idealized gases of non-interacting particles have this property, allowing exact derivations of Maxwell–Boltzmann statistics, Fermi–Dirac statistics, and Bose–Einstein statistics.
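The three ideal-gas statistics differ only in a ±1 (or 0) term in the denominator of the mean occupation number of a single-particle level. A sketch, with an illustrative level energy, chemical potential, and temperature:

```python
import math

def mean_occupation(eps, mu, kT, statistics):
    """Mean occupation of a single-particle level at energy eps."""
    x = math.exp((eps - mu) / kT)
    if statistics == "MB":              # Maxwell-Boltzmann
        return 1.0 / x
    if statistics == "FD":              # Fermi-Dirac
        return 1.0 / (x + 1.0)
    if statistics == "BE":              # Bose-Einstein (requires eps > mu)
        return 1.0 / (x - 1.0)
    raise ValueError(statistics)

# illustrative level: eps = 1, mu = 0, kT = 1
n_mb = mean_occupation(1.0, 0.0, 1.0, "MB")
n_fd = mean_occupation(1.0, 0.0, 1.0, "FD")
n_be = mean_occupation(1.0, 0.0, 1.0, "BE")
# Pauli exclusion keeps n_fd below 1; bosons bunch, so n_be > n_mb > n_fd
```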
A few large systems with interaction have been solved. By the use of subtle mathematical techniques, exact solutions have been found for a few toy models. Some examples include the Bethe ansatz, square-lattice Ising model in zero field, hard hexagon model.
==== Monte Carlo ====
Although some problems in statistical physics can be solved analytically using approximations and expansions, most current research utilizes the large processing power of modern computers to simulate or approximate solutions. A common approach to statistical problems is to use a Monte Carlo simulation to yield insight into the properties of a complex system. Monte Carlo methods are important in computational physics, physical chemistry, and related fields, and have diverse applications including medical physics, where they are used to model radiation transport for radiation dosimetry calculations.
The Monte Carlo method examines just a few of the possible states of the system, with the states chosen randomly (with a fair weight). As long as these states form a representative sample of the whole set of states of the system, the approximate characteristic function is obtained. As more and more random samples are included, the errors are reduced to an arbitrarily low level.
The Metropolis–Hastings algorithm is a classic Monte Carlo method which was initially used to sample the canonical ensemble.
Path integral Monte Carlo, also used to sample the canonical ensemble.
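The Metropolis idea can be sketched on the smallest possible system: a two-level system in the canonical ensemble, where the sampled mean energy can be compared against the exact Boltzmann average. The energies, temperature, chain length, and seed below are illustrative assumptions.

```python
import math
import random

random.seed(0)  # illustrative seed, for reproducibility only

def metropolis_mean_energy(beta, n_steps, energies=(0.0, 1.0)):
    """Metropolis sampling of a two-level system: propose the other level,
    accept with probability min(1, exp(-beta * dE))."""
    state, total, kept = 0, 0.0, 0
    burn_in = n_steps // 10            # discard early, unequilibrated samples
    for step in range(n_steps):
        proposal = 1 - state
        dE = energies[proposal] - energies[state]
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            state = proposal           # accept the move
        if step >= burn_in:
            total += energies[state]
            kept += 1
    return total / kept

estimate = metropolis_mean_energy(beta=1.0, n_steps=200_000)
exact = math.exp(-1.0) / (1.0 + math.exp(-1.0))   # canonical average
```

As the text notes, the estimate converges on the exact ensemble average as more samples are drawn.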
==== Other ====
For rarefied non-ideal gases, approaches such as the cluster expansion use perturbation theory to include the effect of weak interactions, leading to a virial expansion.
For dense fluids, another approximate approach is based on reduced distribution functions, in particular the radial distribution function.
Molecular dynamics computer simulations can be used to calculate microcanonical ensemble averages, in ergodic systems. With the inclusion of a connection to a stochastic heat bath, they can also model canonical and grand canonical conditions.
Mixed methods involving non-equilibrium statistical mechanical results (see below) may be useful.
== Non-equilibrium statistical mechanics ==
Many physical phenomena involve quasi-thermodynamic processes out of equilibrium, for example:
heat transport by the internal motions in a material, driven by a temperature imbalance,
electric currents carried by the motion of charges in a conductor, driven by a voltage imbalance,
spontaneous chemical reactions driven by a decrease in free energy,
friction, dissipation, quantum decoherence,
systems being pumped by external forces (optical pumping, etc.),
and irreversible processes in general.
All of these processes occur over time with characteristic rates. These rates are important in engineering. The field of non-equilibrium statistical mechanics is concerned with understanding these non-equilibrium processes at the microscopic level. (Statistical thermodynamics can only be used to calculate the final result, after the external imbalances have been removed and the ensemble has settled back down to equilibrium.)
In principle, non-equilibrium statistical mechanics could be mathematically exact: ensembles for an isolated system evolve over time according to deterministic equations such as Liouville's equation or its quantum equivalent, the von Neumann equation. These equations are the result of applying the mechanical equations of motion independently to each state in the ensemble. These ensemble evolution equations inherit much of the complexity of the underlying mechanical motion, and so exact solutions are very difficult to obtain. Moreover, the ensemble evolution equations are fully reversible and do not destroy information (the ensemble's Gibbs entropy is preserved). In order to make headway in modelling irreversible processes, it is necessary to consider additional factors besides probability and reversible mechanics.
Non-equilibrium mechanics is therefore an active area of theoretical research as the range of validity of these additional assumptions continues to be explored. A few approaches are described in the following subsections.
=== Stochastic methods ===
One approach to non-equilibrium statistical mechanics is to incorporate stochastic (random) behaviour into the system. Stochastic behaviour destroys information contained in the ensemble. While this is technically inaccurate (aside from hypothetical situations involving black holes, a system cannot in itself cause loss of information), the randomness is added to reflect that information of interest becomes converted over time into subtle correlations within the system, or to correlations between the system and environment. These correlations appear as chaotic or pseudorandom influences on the variables of interest. By replacing these correlations with randomness proper, the calculations can be made much easier.
=== Near-equilibrium methods ===
Another important class of non-equilibrium statistical mechanical models deals with systems that are only very slightly perturbed from equilibrium. With very small perturbations, the response can be analysed in linear response theory. A remarkable result, as formalized by the fluctuation–dissipation theorem, is that the response of a system when near equilibrium is precisely related to the fluctuations that occur when the system is in total equilibrium. Essentially, a system that is slightly away from equilibrium—whether put there by external forces or by fluctuations—relaxes towards equilibrium in the same way, since the system cannot tell the difference or "know" how it came to be away from equilibrium.
This provides an indirect avenue for obtaining numbers such as ohmic conductivity and thermal conductivity by extracting results from equilibrium statistical mechanics. Since equilibrium statistical mechanics is mathematically well defined and (in some cases) more amenable for calculations, the fluctuation–dissipation connection can be a convenient shortcut for calculations in near-equilibrium statistical mechanics.
A few of the theoretical tools used to make this connection include:
Fluctuation–dissipation theorem
Onsager reciprocal relations
Green–Kubo relations
Landauer–Büttiker formalism
Mori–Zwanzig formalism
GENERIC formalism
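The Green–Kubo idea, that a transport coefficient equals a time integral of an equilibrium correlation function, can be illustrated on an assumed toy process: an unbiased ±1 random walk, whose diffusion coefficient is exactly D = 1/2. The sketch estimates D both from the mean-squared displacement (the Einstein route) and from the discrete velocity autocorrelation (the Green–Kubo route); walker counts and the seed are illustrative.

```python
import random

random.seed(1)  # illustrative seed, for reproducibility only

def random_walks(n_walkers, n_steps):
    """Unbiased +/-1 random walks; returns a list of step sequences."""
    return [[random.choice((-1, 1)) for _ in range(n_steps)]
            for _ in range(n_walkers)]

steps = random_walks(2000, 200)
n = len(steps[0])

# Einstein route: <x_n^2> = 2 D n for a diffusive process.
msd = sum(sum(s) ** 2 for s in steps) / len(steps)
D_einstein = msd / (2 * n)

# Green-Kubo route (discrete time): D = C(0)/2 + sum_{k>=1} C(k),
# where C(k) = <v_t v_{t+k}> is the velocity autocorrelation.
def vacf(walks, k):
    num, cnt = 0.0, 0
    for s in walks:
        for t in range(len(s) - k):
            num += s[t] * s[t + k]
            cnt += 1
    return num / cnt

D_green_kubo = vacf(steps, 0) / 2 + sum(vacf(steps, k) for k in range(1, 11))
# both estimates should lie near the exact value D = 1/2
```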
=== Hybrid methods ===
An advanced approach uses a combination of stochastic methods and linear response theory. As an example, one approach to compute quantum coherence effects (weak localization, conductance fluctuations) in the conductance of an electronic system is the use of the Green–Kubo relations, with the inclusion of stochastic dephasing by interactions between various electrons by use of the Keldysh method.
== Applications ==
The ensemble formalism can be used to analyze general mechanical systems with uncertainty in knowledge about the state of a system. Ensembles are also used in:
propagation of uncertainty over time,
regression analysis of gravitational orbits,
ensemble forecasting of weather,
dynamics of neural networks,
bounded-rational potential games in game theory and non-equilibrium economics.
Statistical physics explains and quantitatively describes superconductivity, superfluidity, turbulence, collective phenomena in solids and plasmas, and the structural features of liquids. It underlies modern astrophysics and the virial theorem. In solid state physics, statistical physics aids the study of liquid crystals, phase transitions, and critical phenomena. Many experimental studies of matter are entirely based on the statistical description of a system. These include the scattering of cold neutrons, X-rays, visible light, and more. Statistical physics also plays a role in materials science, nuclear physics, astrophysics, chemistry, biology and medicine (e.g. the study of the spread of infectious diseases).
Analytical and computational techniques derived from the statistical physics of disordered systems can be extended to large-scale problems, including machine learning, e.g., to analyze the weight space of deep neural networks. Statistical physics is thus finding applications in the area of medical diagnostics.
=== Quantum statistical mechanics ===
Quantum statistical mechanics is statistical mechanics applied to quantum mechanical systems. In quantum mechanics, a statistical ensemble (probability distribution over possible quantum states) is described by a density operator S, which is a non-negative, self-adjoint, trace-class operator of trace 1 on the Hilbert space H describing the quantum system. This can be shown under various mathematical formalisms for quantum mechanics. One such formalism is provided by quantum logic.
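The defining properties of a density operator (non-negative, self-adjoint, trace 1) and the expectation rule ⟨A⟩ = Tr(ρA) can be checked directly. A minimal sketch for a single qubit; the particular 75/25 mixture and the σ_z observable are illustrative choices:

```python
import numpy as np

# Illustrative qubit mixture: 75% in |0>, 25% in |1>
rho = 0.75 * np.outer([1, 0], [1, 0]) + 0.25 * np.outer([0, 1], [0, 1])

trace = float(np.trace(rho))              # must equal 1
eigenvalues = np.linalg.eigvalsh(rho)     # must all be non-negative
sigma_z = np.diag([1.0, -1.0])            # an observable
expectation = float(np.trace(rho @ sigma_z))  # <A> = Tr(rho A)
```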
== Index of statistical mechanics topics ==
=== Physics ===
Probability amplitude
Statistical physics
Boltzmann factor
Feynman–Kac formula
Fluctuation theorem
Information entropy
Vacuum expectation value
Cosmic variance
Negative probability
Gibbs state
Master equation
Partition function (mathematics)
Quantum probability
=== Percolation theory ===
Percolation theory
Schramm–Loewner evolution
== See also ==
List of textbooks in thermodynamics and statistical mechanics
Laplace transform § Statistical mechanics
== References ==
== Further reading ==
Reif, F. (2009). Fundamentals of Statistical and Thermal Physics. Waveland Press. ISBN 978-1-4786-1005-2.
Müller-Kirsten, Harald J W. (2013). Basics of Statistical Physics (PDF). doi:10.1142/8709. ISBN 978-981-4449-53-3.
Kadanoff, Leo P. "Statistical Physics and other resources". Archived from the original on August 12, 2021. Retrieved June 18, 2023.
Kadanoff, Leo P. (2000). Statistical Physics: Statics, Dynamics and Renormalization. World Scientific. ISBN 978-981-02-3764-6.
Flamm, Dieter (1998). "History and outlook of statistical physics". arXiv:physics/9803005.
== External links ==
Philosophy of Statistical Mechanics article by Lawrence Sklar for the Stanford Encyclopedia of Philosophy.
Sklogwiki - Thermodynamics, statistical mechanics, and the computer simulation of materials. SklogWiki is particularly orientated towards liquids and soft condensed matter.
Thermodynamics and Statistical Mechanics by Richard Fitzpatrick
Cohen, Doron (2011). "Lecture Notes in Statistical Mechanics and Mesoscopics". arXiv:1107.0568 [quant-ph].
Videos of lecture series in statistical mechanics on YouTube taught by Leonard Susskind.
Vu-Quoc, L., Configuration integral (statistical mechanics), 2008. This wiki site is down; see the archived version of the article in the web archive of 28 April 2012.
Aristotelian physics is the form of natural philosophy described in the works of the Greek philosopher Aristotle (384–322 BC). In his work Physics, Aristotle intended to establish general principles of change that govern all natural bodies, both living and inanimate, celestial and terrestrial – including all motion (change with respect to place), quantitative change (change with respect to size or number), qualitative change, and substantial change ("coming to be" [coming into existence, 'generation'] or "passing away" [no longer existing, 'corruption']). To Aristotle, 'physics' was a broad field including subjects which would now be called the philosophy of mind, sensory experience, memory, anatomy and biology. It constitutes the foundation of the thought underlying many of his works.
Key concepts of Aristotelian physics include the structuring of the cosmos into concentric spheres, with the Earth at the centre and celestial spheres around it. The terrestrial sphere was made of four elements, namely earth, air, fire, and water, subject to change and decay. The celestial spheres were made of a fifth element, an unchangeable aether. Objects made of these elements have natural motions: those of earth and water tend to fall; those of air and fire, to rise. The speed of such motion depends on their weights and the density of the medium. Aristotle argued that a vacuum could not exist as speeds would become infinite.
Aristotle described four causes or explanations of change as seen on earth: the material, formal, efficient, and final causes of things. As regards living things, Aristotle's biology relied on observation of what he considered to be ‘natural kinds’, both those he considered basic and the groups to which he considered these belonged. He did not conduct experiments in the modern sense, but relied on amassing data, observational procedures such as dissection, and making hypotheses about relationships between measurable quantities such as body size and lifespan.
== Methods ==
nature is everywhere the cause of order.
While consistent with common human experience, Aristotle's principles were not based on controlled, quantitative experiments, so they do not describe our universe in the precise, quantitative way now expected of science. Contemporaries of Aristotle like Aristarchus rejected these principles in favor of heliocentrism, but their ideas were not widely accepted. Aristotle's principles were difficult to disprove merely through casual everyday observation, but later development of the scientific method challenged his views with experiments and careful measurement, using increasingly advanced technology such as the telescope and vacuum pump.
In claiming novelty for their doctrines, those natural philosophers who developed the "new science" of the seventeenth century frequently contrasted "Aristotelian" physics with their own. Physics of the former sort, so they claimed, emphasized the qualitative at the expense of the quantitative, neglected mathematics and its proper role in physics (particularly in the analysis of local motion), and relied on such suspect explanatory principles as final causes and "occult" essences. Yet in his Physics Aristotle characterizes physics or the "science of nature" as pertaining to magnitudes (megethê), motion (or "process" or "gradual change" – kinêsis), and time (chronon) (Phys III.4 202b30–1). Indeed, the Physics is largely concerned with an analysis of motion, particularly local motion, and the other concepts that Aristotle believes are requisite to that analysis.
There are clear differences between modern and Aristotelian physics, the main being the use of mathematics, largely absent in Aristotle. Some recent studies, however, have re-evaluated Aristotle's physics, stressing both its empirical validity and its continuity with modern physics.
== Concepts ==
=== Elements and spheres ===
Aristotle divided his universe into "terrestrial spheres" which were "corruptible" and where humans lived, and moving but otherwise unchanging celestial spheres.
Aristotle believed that four classical elements make up everything in the terrestrial spheres: earth, air, fire and water.[a] He also held that the heavens are made of a special weightless and incorruptible (i.e. unchangeable) fifth element called "aether". Aether also has the name "quintessence", meaning, literally, "fifth being".
Aristotle considered heavy matter such as iron and other metals to consist primarily of the element earth, with a smaller amount of the other three terrestrial elements. Other, lighter objects, he believed, have less earth, relative to the other three elements in their composition.
The four classical elements were not invented by Aristotle; they originated with Empedocles. During the Scientific Revolution, the ancient theory of classical elements was found to be incorrect and was replaced by the empirically tested concept of chemical elements.
==== Celestial spheres ====
According to Aristotle, the Sun, Moon, planets and stars are embedded in perfectly concentric "crystal spheres" that rotate eternally at fixed rates. Because the celestial spheres are incapable of any change except rotation, the terrestrial sphere of fire must account for the heat, starlight and occasional meteorites. The lowest, lunar sphere is the only celestial sphere that actually comes in contact with the sublunary orb's changeable, terrestrial matter, dragging the rarefied fire and air along underneath as it rotates. Just as Homer's æthere (αἰθήρ) – the "pure air" of Mount Olympus – was the divine counterpart of the air breathed by mortal beings (ἀήρ, aēr), so the celestial spheres are composed of the special element aether, eternal and unchanging, the sole capability of which is a uniform circular motion at a given rate (relative to the diurnal motion of the outermost sphere of fixed stars).
The concentric, aetherial, cheek-by-jowl "crystal spheres" that carry the Sun, Moon and stars move eternally with unchanging circular motion. Spheres are embedded within spheres to account for the "wandering stars" (i.e. the planets, which, in comparison with the Sun, Moon and stars, appear to move erratically). Mercury, Venus, Mars, Jupiter, and Saturn were the only planets visible before the invention of the telescope, which is why Neptune and Uranus are not included, nor are any asteroids. Later, the belief that all spheres are concentric was forsaken in favor of Ptolemy's deferent and epicycle model. Aristotle defers to the calculations of astronomers regarding the total number of spheres, and various accounts give a number in the neighborhood of fifty spheres. An unmoved mover is assumed for each sphere, including a "prime mover" for the sphere of fixed stars. The unmoved movers do not push the spheres (nor could they, being immaterial and dimensionless) but are the final cause of the spheres' motion, i.e. they explain it in a way similar to the explanation "the soul is moved by beauty".
=== Terrestrial change ===
Unlike the eternal and unchanging celestial aether, each of the four terrestrial elements is capable of changing into either of the two elements with which it shares a property: e.g. the cold and wet (water) can transform into the hot and wet (air) or the cold and dry (earth). Any apparent change from cold and wet into the hot and dry (fire) is actually a two-step process, as first one of the properties changes, then the other. These properties are predicated of an actual substance relative to the work it is able to do: that of heating or chilling and of desiccating or moistening. The four elements exist only with regard to this capacity and relative to some potential work. The celestial element is eternal and unchanging, so only the four terrestrial elements account for "coming to be" and "passing away" – or, in the terms of Aristotle's On Generation and Corruption (Περὶ γενέσεως καὶ φθορᾶς), "generation" and "corruption".
=== Natural place ===
The Aristotelian explanation of gravity is that all bodies move toward their natural place. For the elements earth and water, that place is the center of the (geocentric) universe; the natural place of water is a concentric shell around the Earth because earth is heavier; it sinks in water. The natural place of air is likewise a concentric shell surrounding that of water; bubbles rise in water. Finally, the natural place of fire is higher than that of air but below the innermost celestial sphere (carrying the Moon).
In Book Delta of his Physics (IV.5), Aristotle defines topos (place) in terms of two bodies, one of which contains the other: a "place" is where the inner surface of the former (the containing body) touches the contained body. This definition remained dominant until the beginning of the 17th century, even though it had been questioned and debated by philosophers since antiquity. The most significant early critique was made in terms of geometry by the 11th-century Arab polymath al-Hasan Ibn al-Haytham (Alhazen) in his Discourse on Place.
=== Natural motion ===
Terrestrial objects rise or fall, to a greater or lesser extent, according to the ratio of the four elements of which they are composed. For example, earth, the heaviest element, and water fall toward the center of the cosmos; hence the Earth and, for the most part, its oceans will have already come to rest there. At the opposite extreme, the lightest elements, air and especially fire, rise up and away from the center.
The elements are not proper substances in Aristotelian theory (or the modern sense of the word). Instead, they are abstractions used to explain the varying natures and behaviors of actual materials in terms of ratios between them.
Motion and change are closely related in Aristotelian physics. Motion, according to Aristotle, involved a change from potentiality to actuality. He gave examples of four types of change, namely change in substance, in quality, in quantity and in place.
Aristotle proposed that the speed at which two identically shaped objects sink or fall is directly proportional to their weights and inversely proportional to the density of the medium through which they move. In describing terminal velocity, Aristotle had to stipulate that there could be no limiting speed against which to compare atoms falling through a vacuum: they could move indefinitely fast, because there would be no particular place for them to come to rest in the void. It is now understood, however, that at any time before reaching terminal velocity in a relatively resistance-free medium like air, two such objects are expected to have nearly identical speeds, because both experience a force of gravity proportional to their masses and have thus been accelerating at nearly the same rate. This became especially apparent from the eighteenth century onward, when partial-vacuum experiments began to be made, but some two hundred years earlier Galileo had already demonstrated that objects of different weights reach the ground in similar times.
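The contrast between the two predictions can be sketched numerically. The masses, drop height, and reference time below are illustrative assumptions, not figures from Aristotle or Galileo:

```python
import math

g = 9.81          # m/s^2, gravitational acceleration near Earth's surface
h = 10.0          # m, assumed drop height
m_light, m_heavy = 1.0, 10.0   # kg, assumed masses

# Aristotelian rule: fall speed proportional to weight, so fall times
# scale inversely with weight (taking the light body's time as reference).
t_light_aristotle = 1.0
t_heavy_aristotle = t_light_aristotle * (m_light / m_heavy)

# Newtonian drag-free fall: h = g*t^2/2, independent of mass.
t_newton = math.sqrt(2 * h / g)

print(t_heavy_aristotle)  # 0.1: the heavy body is predicted to land ten times sooner
print(t_newton)           # the same value for both bodies
```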
=== Unnatural motion ===
Apart from the natural tendency of terrestrial exhalations to rise and objects to fall, unnatural or forced motion from side to side results from the turbulent collision and sliding of the objects as well as transmutation between the elements (On Generation and Corruption). Aristotle phrased this principle as: "Everything that moves is moved by something else. (Omne quod movetur ab alio movetur.)" When the cause ceases, so does the effect. The cause, according to Aristotle, must be a power (i.e., force) that drives the body as long as the external agent remains in direct contact. Aristotle went on to say that the velocity of the body is directly proportional to the force imparted and inversely proportional to the resistance of the medium in which the motion takes place. This gives the law in today's notation
{\displaystyle {\text{velocity}}\propto {\frac {\text{imparted power}}{\text{resistance}}}}
This law presented three difficulties that Aristotle was aware of. The first is that if the imparted power is less than the resistance, then in reality it will not move the body, but Aristotle's relation says otherwise. Second, what is the source of the increase in imparted power required to increase the velocity of a freely falling body? Third, what is the imparted power that keeps a projectile in motion after it leaves the agent of projection? Aristotle, in his book Physics, Book 8, Chapter 10, 267a 4, proposed the following solution to the third problem in the case of a shot arrow. The bowstring or hand imparts a certain 'power of being a movent' to the air in contact with it, so that this imparted force is transmitted to the next layer of air, and so on, thus keeping the arrow in motion until the power gradually dissipates.
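The relation and its first difficulty can be illustrated with a short sketch. The unit proportionality constant and the threshold rule added at the end are our own assumptions for the illustration, not part of Aristotle's text:

```python
def aristotelian_speed(power, resistance):
    """Speed predicted by velocity ∝ power/resistance (constant taken as 1)."""
    return power / resistance

# First difficulty: a power smaller than the resistance should move
# nothing, yet the bare relation still yields a positive speed.
print(aristotelian_speed(1.0, 10.0))   # 0.1, not 0

# A minimal patched rule (an assumption of this sketch): no motion
# unless the imparted power exceeds the resistance.
def threshold_speed(power, resistance):
    return power / resistance if power > resistance else 0.0

print(threshold_speed(1.0, 10.0))      # 0.0
print(threshold_speed(20.0, 10.0))     # 2.0
```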
==== Chance ====
In his Physics Aristotle examines accidents (συμβεβηκός, symbebekòs) that have no cause but chance. "Nor is there any definite cause for an accident, but only chance (τύχη, týche), namely an indefinite (ἀόριστον, aóriston) cause" (Metaphysics V, 1025a25).
It is obvious that there are principles and causes which are generable and destructible apart from the actual processes of generation and destruction; for if this is not true, everything will be of necessity: that is, if there must necessarily be some cause, other than accidental, of that which is generated and destroyed. Will this be, or not? Yes, if this happens; otherwise not (Metaphysics VI, 1027a29).
=== Continuum and vacuum ===
Aristotle argues against the indivisibles of Democritus (which differ considerably from the historical and the modern use of the term "atom"). As a place without anything existing at or within it, Aristotle argued against the possibility of a vacuum or void. Because he believed that the speed of an object's motion is proportional to the force being applied (or, in the case of natural motion, the object's weight) and inversely proportional to the density of the medium, he reasoned that objects moving in a void would move indefinitely fast – and thus any and all objects surrounding the void would immediately fill it. The void, therefore, could never form.
The "voids" of modern-day astronomy (such as the Local Void adjacent to our own galaxy) have the opposite effect: ultimately, bodies off-center are ejected from the void due to the gravity of the material outside.
=== Four causes ===
According to Aristotle, there are four ways to explain the aitia or causes of change. He writes that "we do not have knowledge of a thing until we have grasped its why, that is to say, its cause."
Aristotle held that there were four kinds of causes.
==== Material ====
The material cause of a thing is that of which it is made. For a table, that might be wood; for a statue, that might be bronze or marble.
"In one way we say that the aition is that out of which, as existing, something comes to be, like the bronze for the statue, the silver for the phial, and their genera" (194b23—6). By "genera", Aristotle means more general ways of classifying the matter (e.g. "metal"; "material"); and that will become important. A little later on, he broadens the range of the material cause to include letters (of syllables), fire and the other elements (of physical bodies), parts (of wholes), and even premises (of conclusions: Aristotle reiterates this claim, in slightly different terms, in An. Post II. 11).
==== Formal ====
The formal cause of a thing is the essential property that makes it the kind of thing it is. In Metaphysics Book Α Aristotle emphasizes that form is closely related to essence and definition. He says for example that the ratio 2:1, and number in general, is the cause of the octave.
"Another [cause] is the form and the exemplar: this is the formula (logos) of the essence (to ti en einai), and its genera, for instance the ratio 2:1 of the octave" (Phys II.3 194b26—8)... Form is not just shape... We are asking (and this is the connection with essence, particularly in its canonical Aristotelian formulation) what it is to be some thing. And it is a feature of musical harmonics (first noted and wondered at by the Pythagoreans) that intervals of this type do indeed exhibit this ratio in some form in the instruments used to create them (the length of pipes, of strings, etc.). In some sense, the ratio explains what all the intervals have in common, why they turn out the same.
==== Efficient ====
The efficient cause of a thing is the primary agency by which its matter took its form. For example, the efficient cause of a baby is a parent of the same species and that of a table is a carpenter, who knows the form of the table. In his Physics II, 194b29—32, Aristotle writes: "there is that which is the primary originator of the change and of its cessation, such as the deliberator who is responsible [sc. for the action] and the father of the child, and in general the producer of the thing produced and the changer of the thing changed".
Aristotle’s examples here are instructive: one case of mental and one of physical causation, followed by a perfectly general characterization. But they conceal (or at any rate fail to make patent) a crucial feature of Aristotle’s concept of efficient causation, and one which serves to distinguish it from most modern homonyms. For Aristotle, any process requires a constantly operative efficient cause as long as it continues. This commitment appears most starkly to modern eyes in Aristotle’s discussion of projectile motion: what keeps the projectile moving after it leaves the
hand? "Impetus", "momentum", much less "inertia", are not possible answers. There must be a mover, distinct (at least in some sense) from the thing moved, which is exercising its motive capacity at every moment of the projectile’s flight (see Phys VIII. 10 266b29—267a11). Similarly, in every case of animal generation, there is always some thing responsible for the continuity of that generation, although it may do so by way of some intervening instrument (Phys II.3 194b35—195a3).
==== Final ====
The final cause is that for the sake of which something takes place, its aim or teleological purpose: for a germinating seed, it is the adult plant, for a ball at the top of a ramp, it is coming to rest at the bottom, for an eye, it is seeing, for a knife, it is cutting.
Goals have an explanatory function: that is a commonplace, at least in the context of action-ascriptions. Less of a commonplace is the view espoused by Aristotle, that finality and purpose are to be found throughout nature, which is for him the realm of those things which contain within themselves principles of movement and rest (i.e. efficient causes); thus it makes sense to attribute purposes not only to natural things themselves, but also to their parts: the parts of a natural whole exist for the sake of the whole. As Aristotle himself notes, "for the sake of" locutions are ambiguous: "A is for the sake of B" may mean that A exists or is undertaken in order to bring B about; or it may mean that A is for B’s benefit (An II.4 415b2—3, 20—1); but both types of finality have, he thinks, a crucial role to play in natural, as well as deliberative, contexts. Thus a man may exercise for the sake of his health: and so "health", and not just the hope of achieving it, is the cause of his action (this distinction is not trivial). But the eyelids are for the sake of the eye (to protect it: PA II.1 3) and the eye for the sake of the animal as a whole (to help it function properly: cf. An II.7).
=== Biology ===
According to Aristotle, the science of living things proceeds by gathering observations about each natural kind of animal, organizing them into genera and species (the differentiae in History of Animals) and then going on to study the causes (in Parts of Animals and Generation of Animals, his three main biological works).
The four causes of animal generation can be summarized as follows. The mother and father represent the material and efficient causes, respectively. The mother provides the matter out of which the embryo is formed, while the father provides the agency that informs that material and triggers its development. The formal cause is the definition of the animal’s substantial being (GA I.1 715a4: ho logos tês ousias). The final cause is the adult form, which is the end for the sake of which development takes place.
==== Organism and mechanism ====
The four elements make up the uniform materials such as blood, flesh and bone, which are themselves the matter out of which are created the non-uniform organs of the body (e.g. the heart, liver and hands) "which in turn, as parts, are matter for the functioning body as a whole (PA II. 1 646a 13—24)".
[There] is a certain obvious conceptual economy about the view that in natural processes naturally constituted things simply seek to realize in full actuality the potentials contained within them (indeed, this is what is for them to be natural); on the other hand, as the detractors of Aristotelianism from the seventeenth century on were not slow to point out, this economy is won at the expense of any serious empirical content. Mechanism, at least as practiced by Aristotle’s contemporaries and predecessors, may have been explanatorily inadequate – but at least it was an attempt at a general account given in reductive terms of the lawlike connections between things. Simply introducing what later reductionists were to scoff at as "occult qualities" does not explain – it merely, in the manner of Molière’s famous satirical joke, serves to re-describe the effect. Formal talk, or so it is said, is vacuous. Things are not, however, quite as bleak as this. For one thing, there’s no point in trying to engage in reductionist science if you don’t have the wherewithal, empirical and conceptual, to do so successfully: science shouldn't be simply unsubstantiated speculative metaphysics. But more than that, there is a point to describing the world in such teleologically loaded terms: it makes sense of things in a way that atomist speculations do not. And further, Aristotle’s talk of species-forms is not as empty as his opponents would insinuate. He doesn't simply say that things do what they do because that's the sort of thing they do: the whole point of his classificatory biology, most clearly exemplified in PA, is to show what sorts of function go with what, which presuppose which and which are subservient to which. And in this sense, formal or functional biology is susceptible of a type of reductionism. We start, he tells us, with the basic animal kinds which we all pre-theoretically (although not indefeasibly) recognize (cf. 
PA I.4): but we then go on to show how their parts relate to one another: why it is, for instance, that only blooded creatures have lungs, and how certain structures in one species are analogous or homologous to those in another (such as scales in fish, feathers in birds, hair in mammals). And the answers, for Aristotle, are to be found in the economy of functions, and how they all contribute to the overall well-being (the final cause in this sense) of the animal.
See also Organic form.
==== Psychology ====
According to Aristotle, perception and thought are similar, though not exactly alike in that perception is concerned only with the external objects that are acting on our sense organs at any given time, whereas we can think about anything we choose. Thought is about universal forms, in so far as they have been successfully understood, based on our memory of having encountered instances of those forms directly.
Aristotle’s theory of cognition rests on two central pillars: his account of perception and his account of thought. Together, they make up a significant portion of his psychological writings, and his discussion of other mental states depends critically on them. These two activities, moreover, are conceived of in an analogous manner, at least with regard to their most basic forms. Each activity is triggered by its object – each, that is, is about the very thing that brings it about. This simple causal account explains the reliability of cognition: perception and thought are, in effect, transducers, bringing information about the world into our cognitive systems, because, at least in their most basic forms, they are infallibly about the causes that bring them about (An III.4 429a13–18). Other, more complex mental states are far from infallible. But they are still tethered to the world, in so far as they rest on the unambiguous and direct contact perception and thought enjoy with their objects.
== Medieval commentary ==
The Aristotelian theory of motion came under criticism and modification during the Middle Ages. Modifications began with John Philoponus in the 6th century, who partly accepted Aristotle's theory that "continuation of motion depends on continued action of a force" but modified it to include his idea that a hurled body also acquires an inclination (or "motive power") for movement away from whatever caused it to move, an inclination that secures its continued motion. This impressed virtue would be temporary and self-expending, meaning that all motion would tend toward the form of Aristotle's natural motion.
In The Book of Healing (1027), the 11th-century Persian polymath Avicenna developed Philoponean theory into the first coherent alternative to Aristotelian theory. Inclinations in the Avicennan theory of motion were not self-consuming but permanent forces whose effects were dissipated only as a result of external agents such as air resistance, making him "the first to conceive such a permanent type of impressed virtue for non-natural motion". Such a self-motion (mayl) is "almost the opposite of the Aristotelian conception of violent motion of the projectile type, and it is rather reminiscent of the principle of inertia, i.e. Newton's first law of motion."
The eldest Banū Mūsā brother, Ja'far Muhammad ibn Mūsā ibn Shākir (800–873), wrote the Astral Motion and The Force of Attraction. The physicist Ibn al-Haytham (965–1039) discussed the theory of attraction between bodies. It seems that he was aware of the magnitude of acceleration due to gravity, and he discovered that the heavenly bodies "were accountable to the laws of physics". During his debate with Avicenna, al-Biruni also criticized the Aristotelian theory of gravity, first for denying the existence of levity or gravity in the celestial spheres, and second for its notion of circular motion being an innate property of the heavenly bodies.
Hibat Allah Abu'l-Barakat al-Baghdaadi (1080–1165) wrote al-Mu'tabar, a critique of Aristotelian physics where he negated Aristotle's idea that a constant force produces uniform motion, as he realized that a force applied continuously produces acceleration, a fundamental law of classical mechanics and an early foreshadowing of Newton's second law of motion. Like Newton, he described acceleration as the rate of change of speed.
In the 14th century, Jean Buridan developed the theory of impetus as an alternative to the Aristotelian theory of motion. The theory of impetus was a precursor to the concepts of inertia and momentum in classical mechanics. Buridan and Albert of Saxony also refer to Abu'l-Barakat in explaining that the acceleration of a falling body is a result of its increasing impetus. In the 16th century, Al-Birjandi discussed the possibility of the Earth's rotation and, in his analysis of what might occur if the Earth were rotating, developed a hypothesis similar to Galileo's notion of "circular inertia". He described it in terms of the following observational test:
"The small or large rock will fall to the Earth along the path of a line that is perpendicular to the plane (sath) of the horizon; this is witnessed by experience (tajriba). And this perpendicular is away from the tangent point of the Earth’s sphere and the plane of the perceived (hissi) horizon. This point moves with the motion of the Earth and thus there will be no difference in place of fall of the two rocks."
== Life and death of Aristotelian physics ==
The reign of Aristotelian physics, the earliest known speculative theory of physics, lasted almost two millennia. After the work of many pioneers such as Copernicus, Tycho Brahe, Galileo, Kepler, Descartes and Newton, it became generally accepted that Aristotelian physics was neither correct nor viable. Despite this, it survived as a scholastic pursuit well into the seventeenth century, until universities amended their curricula.
In Europe, Aristotle's theory was first convincingly discredited by Galileo's studies. Using a telescope, Galileo observed that the Moon was not entirely smooth, but had craters and mountains, contradicting the Aristotelian idea of an incorruptible, perfectly smooth Moon. Galileo also criticized this notion theoretically: a perfectly smooth Moon would reflect light unevenly, like a shiny billiard ball, so that the edges of the Moon's disk would have a different brightness than the point where a tangent plane reflects sunlight directly to the eye. A rough Moon reflects in all directions equally, leading to a disk of approximately uniform brightness, which is what is observed. Galileo also observed that Jupiter has moons – i.e. objects revolving around a body other than the Earth – and noted the phases of Venus, which demonstrated that Venus (and, by implication, Mercury) traveled around the Sun, not the Earth.
According to legend, Galileo dropped balls of various densities from the Tower of Pisa and found that lighter and heavier ones fell at almost the same speed. His experiments actually took place using balls rolling down inclined planes, a form of falling sufficiently slow to be measured without advanced instruments.
In a relatively dense medium such as water, a heavier body falls faster than a lighter one. This led Aristotle to speculate that the rate of falling is proportional to the weight and inversely proportional to the density of the medium. From his experience with objects falling in water, he concluded that water is approximately ten times denser than air. By weighing a volume of compressed air, Galileo showed that this overestimates the density of air by a factor of forty. From his experiments with inclined planes, he concluded that if friction is neglected, all bodies fall at the same rate. Strictly, this too holds only when both friction and the density of the medium relative to that of the bodies are negligible: Aristotle correctly noticed that the density of the medium is a factor, but focused on body weight rather than density, while Galileo's neglect of the medium's density led him to the correct conclusion for a vacuum.
Galileo also advanced a theoretical argument to support his conclusion. He asked if two bodies of different weights and different rates of fall are tied by a string, does the combined system fall faster because it is now more massive, or does the lighter body in its slower fall hold back the heavier body? The only convincing answer is neither: all the systems fall at the same rate.
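Galileo's reductio can be sketched with arbitrary illustrative weights; the mean speed used for the retarded pair is only a stand-in for "some intermediate speed":

```python
k = 1.0                      # arbitrary proportionality constant (assumed)
w_heavy, w_light = 10.0, 2.0 # assumed weights for the illustration

# Under the rule "speed proportional to weight":
v_heavy = k * w_heavy        # 10.0
v_light = k * w_light        # 2.0

# Prediction 1: the tied pair is heavier, so it should fall faster.
v_combined_by_weight = k * (w_heavy + w_light)   # 12.0

# Prediction 2: the slow body retards the fast one, so the pair should
# fall at some intermediate speed (the mean, as a stand-in).
v_combined_by_drag = (v_heavy + v_light) / 2     # 6.0

# The same premise yields two incompatible predictions, so it must be wrong.
assert v_combined_by_weight != v_combined_by_drag
```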
Followers of Aristotle were aware that the motion of falling bodies was not uniform, but picked up speed with time. Since time is an abstract quantity, the peripatetics postulated that the speed was proportional to the distance. Galileo established experimentally that the speed is proportional to the time, but he also gave a theoretical argument that the speed could not possibly be proportional to the distance. In modern terms, if the rate of fall is proportional to the distance, the differential expression for the distance y travelled after time t is:
{\displaystyle {dy \over dt}\propto y}
with the condition that {\displaystyle y(0)=0}. Galileo demonstrated that this system would stay at
{\displaystyle y=0}
for all time. If a perturbation set the system into motion somehow, the object would pick up speed exponentially in time, not linearly.
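Galileo's objection can be checked with a simple Euler integration; the proportionality constant, step size, and perturbation below are arbitrary choices for the sketch:

```python
# Euler integration of dy/dt = k*y (speed proportional to distance):
# with y(0) = 0 the body never moves, and any perturbation grows
# exponentially in time rather than linearly.
k, dt, steps = 1.0, 0.001, 1000   # assumed constants for the sketch

def fall(y0):
    y = y0
    for _ in range(steps):
        y += k * y * dt
    return y

print(fall(0.0))      # stays exactly 0: no fall ever starts
print(fall(1e-6))     # grows roughly e-fold over one time unit
```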
Standing on the surface of the Moon in 1971, David Scott famously repeated Galileo's experiment by dropping a feather and a hammer from each hand at the same time. In the absence of a substantial atmosphere, the two objects fell and hit the Moon's surface at the same time.
The first convincing mathematical theory of gravity – in which two masses are attracted toward each other by a force whose effect decreases according to the inverse square of the distance between them – was Newton's law of universal gravitation. This, in turn, was superseded by Albert Einstein's general theory of relativity.
== Modern evaluations of Aristotle's physics ==
Modern scholars differ in their opinions of whether Aristotle's physics were sufficiently based on empirical observations to qualify as science, or else whether they were derived primarily from philosophical speculation and thus fail to satisfy the scientific method.
Carlo Rovelli has argued that Aristotle's physics are an accurate and non-intuitive representation of a particular domain (motion in fluids), and thus are just as scientific as Newton's laws of motion, which also are accurate in some domains while failing in others (those described by special and general relativity).
== As listed in the Corpus Aristotelicum ==
== See also ==
Minima naturalia, a hylomorphic concept suggested by Aristotle broadly analogous in Peripatetic and Scholastic physical speculation to the atoms of Epicureanism
== Notes ==
== References ==
== Sources ==
H. Carteron (1965) "Does Aristotle Have a Mechanics?" in Articles on Aristotle 1. Science eds. Jonathan Barnes, Malcolm Schofield, Richard Sorabji (London: Gerald Duckworth and Company Limited), 161–174.
Ragep, F. Jamil (2001). "Tusi and Copernicus: The Earth's Motion in Context". Science in Context. 14 (1–2). Cambridge University Press: 145–163. doi:10.1017/s0269889701000060. S2CID 145372613.
Ragep, F. Jamil; Al-Qushji, Ali (2001). "Freeing Astronomy from Philosophy: An Aspect of Islamic Influence on Science". Osiris. 2nd Series. 16 (Science in Theistic Contexts: Cognitive Dimensions): 49–64 and 66–71. Bibcode:2001Osir...16...49R. doi:10.1086/649338. S2CID 142586786.
== Further reading ==
Katalin Martinás, "Aristotelian Thermodynamics" in Thermodynamics: history and philosophy: facts, trends, debates (Veszprém, Hungary 23–28 July 1990), pp. 285–303.
Biomechanics is the study of the structure, function and motion of the mechanical aspects of biological systems, at any level from whole organisms to organs, cells and cell organelles, using the methods of mechanics. Biomechanics is a branch of biophysics.
== Etymology ==
The word "biomechanics" (1899) and the related "biomechanical" (1856) come from the Ancient Greek βίος bios "life" and μηχανική, mēchanikē "mechanics", to refer to the study of the mechanical principles of living organisms, particularly their movement and structure.
== Subfields ==
=== Biofluid mechanics ===
Biological fluid mechanics, or biofluid mechanics, is the study of both gas and liquid fluid flows in or around biological organisms. An often studied liquid biofluid problem is that of blood flow in the human cardiovascular system. Under certain mathematical circumstances, blood flow can be modeled by the Navier–Stokes equations. In vivo whole blood is assumed to be an incompressible Newtonian fluid. However, this assumption fails when considering forward flow within arterioles. At the microscopic scale, the effects of individual red blood cells become significant, and whole blood can no longer be modeled as a continuum. When the diameter of the blood vessel is just slightly larger than the diameter of the red blood cell the Fahraeus–Lindquist effect occurs and there is a decrease in wall shear stress. However, as the diameter of the blood vessel decreases further, the red blood cells have to squeeze through the vessel and often can only pass in a single file. In this case, the inverse Fahraeus–Lindquist effect occurs and the wall shear stress increases.
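Where the Newtonian, continuum treatment of blood holds (in large vessels), a quick Reynolds-number estimate indicates the flow regime. The function name and the numerical values below are illustrative assumptions, not figures from the text:

```python
# Reynolds number Re = rho * v * D / mu for pipe-like flow; the values below
# are rough aorta-scale figures assumed for illustration, not from the article.
def reynolds(rho, v, diameter, mu):
    """rho in kg/m^3, v in m/s, diameter in m, mu (dynamic viscosity) in Pa*s."""
    return rho * v * diameter / mu

# Whole blood treated as an incompressible Newtonian fluid (large vessels only):
re_aorta = reynolds(rho=1060, v=0.3, diameter=0.025, mu=3.5e-3)
```

At this scale the flow is laminar to transitional; in arterioles, where individual red blood cells matter and the continuum assumption breaks down, such an estimate no longer applies.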
An example of a gaseous biofluids problem is that of human respiration. Respiratory systems in insects have been studied for bioinspiration for designing improved microfluidic devices.
=== Biotribology ===
Biotribology is the study of friction, wear and lubrication of biological systems, especially human joints such as hips and knees. In general, these processes are studied in the context of contact mechanics and tribology.
Additional aspects of biotribology include analysis of subsurface damage resulting from two surfaces coming in contact during motion, i.e. rubbing against each other, such as in the evaluation of tissue-engineered cartilage.
=== Comparative biomechanics ===
Comparative biomechanics is the application of biomechanics to non-human organisms, whether used to gain greater insights into humans (as in physical anthropology) or into the functions, ecology and adaptations of the organisms themselves. Common areas of investigation are animal locomotion and feeding, as these have strong connections to the organism's fitness and impose high mechanical demands. Animal locomotion has many manifestations, including running, jumping and flying. Locomotion requires energy to overcome friction, drag, inertia, and gravity, though which factor predominates varies with environment.
Comparative biomechanics overlaps strongly with many other fields, including ecology, neurobiology, developmental biology, ethology, and paleontology, to the extent of commonly publishing papers in the journals of these other fields. Comparative biomechanics is often applied in medicine (with regards to common model organisms such as mice and rats) as well as in biomimetics, which looks to nature for solutions to engineering problems.
=== Computational biomechanics ===
Computational biomechanics is the application of engineering computational tools, such as the finite element method, to study the mechanics of biological systems. Computational models and simulations are used to predict the relationship between parameters that are otherwise challenging to test experimentally, or used to design more relevant experiments, reducing the time and costs of experiments. Mechanical modeling using finite element analysis has been used to interpret the experimental observation of plant cell growth to understand how they differentiate, for instance. In medicine, over the past decade, the finite element method has become an established alternative to in vivo surgical assessment. One of the main advantages of computational biomechanics lies in its ability to determine the endo-anatomical response of an anatomy, without being subject to ethical restrictions. This has led finite element modeling (or other discretization techniques) to the point of becoming ubiquitous in several fields of biomechanics while several projects have even adopted an open source philosophy (e.g., BioSpine).
Computational biomechanics is an essential ingredient in surgical simulation, which is used for surgical planning, assistance, and training. In this case, numerical (discretization) methods are used to compute, as fast as possible, a system's response to boundary conditions such as forces, heat and mass transfer, and electrical and magnetic stimuli.
=== Continuum biomechanics ===
The mechanical analysis of biomaterials and biofluids is usually carried forth with the concepts of continuum mechanics. This assumption breaks down when the length scales of interest approach the order of the microstructural details of the material. One of the most remarkable characteristics of biomaterials is their hierarchical structure. In other words, the mechanical characteristics of these materials rely on physical phenomena occurring in multiple levels, from the molecular all the way up to the tissue and organ levels.
Biomaterials are classified into two groups: hard and soft tissues. Mechanical deformation of hard tissues (like wood, shell and bone) may be analysed with the theory of linear elasticity. On the other hand, soft tissues (like skin, tendon, muscle, and cartilage) usually undergo large deformations, and thus, their analysis relies on the finite strain theory and computer simulations. The interest in continuum biomechanics is spurred by the need for realism in the development of medical simulation.
=== Neuromechanics ===
Neuromechanics uses a biomechanical approach to better understand how the brain and nervous system interact to control the body. During motor tasks, motor units activate a set of muscles to perform a specific movement, which can be modified via motor adaptation and learning. In recent years, neuromechanical experiments have been enabled by combining motion capture tools with neural recordings.
=== Plant biomechanics ===
The application of biomechanical principles to plants, plant organs and cells has developed into the subfield of plant biomechanics. Application of biomechanics for plants ranges from studying the resilience of crops to environmental stress to development and morphogenesis at cell and tissue scale, overlapping with mechanobiology.
=== Sports biomechanics ===
In sports biomechanics, the laws of mechanics are applied to human movement in order to gain a greater understanding of athletic performance and to reduce sport injuries as well. It focuses on the application of the scientific principles of mechanical physics to understand the movements of human bodies and of sports implements such as the cricket bat, hockey stick and javelin. Elements of mechanical engineering (e.g., strain gauges), electrical engineering (e.g., digital filtering), computer science (e.g., numerical methods), gait analysis (e.g., force platforms), and clinical neurophysiology (e.g., surface EMG) are common methods used in sports biomechanics.
Biomechanics in sports can be stated as the body's muscular, joint, and skeletal actions while executing a given task, skill, or technique. Understanding the biomechanics of sports skills has major implications for sports performance, rehabilitation and injury prevention, and sports mastery. As noted by Dr. Michael Yessis, one could say that the best athlete is the one that executes his or her skill the best.
=== Vascular biomechanics ===
The main topic of vascular biomechanics is the description of the mechanical behaviour of vascular tissues.
It is well known that cardiovascular disease is the leading cause of death worldwide. The vascular system in the human body is the main component that maintains pressure and allows for blood flow and chemical exchange. Studying the mechanical properties of these complex tissues improves the possibility of better understanding cardiovascular diseases and drastically improves personalized medicine.
Vascular tissues are inhomogeneous with a strongly nonlinear behaviour. Generally this study involves complex geometry with intricate load conditions and material properties. The correct description of these mechanisms is based on the study of physiology and biological interaction. It is therefore necessary to study wall mechanics and hemodynamics together with their interaction.
It is also necessary to premise that the vascular wall is a dynamic structure in continuous evolution, directly following the chemical and mechanical environment in which the tissues are immersed, such as wall shear stress or biochemical signaling.
=== Immunomechanics ===
The emerging field of immunomechanics focuses on characterising the mechanical properties of immune cells and their functional relevance. Mechanics of immune cells can be characterised using various force spectroscopy approaches such as acoustic force spectroscopy and optical tweezers, and these measurements can be performed under physiological conditions (e.g. temperature). Furthermore, one can study the link between immune cell mechanics and immunometabolism and immune signalling. The term "immunomechanics" is sometimes used interchangeably with immune cell mechanobiology or cell mechanoimmunology.
=== Other applied subfields of biomechanics include ===
Allometry
Animal locomotion and Gait analysis
Biotribology
Biofluid mechanics
Cardiovascular biomechanics
Comparative biomechanics
Computational biomechanics
Ergonomy
Forensic Biomechanics
Human factors engineering and occupational biomechanics
Injury biomechanics
Implant (medicine), Orthotics and Prosthesis
Kinaesthetics
Kinesiology (kinetics + physiology)
Musculoskeletal and orthopedic biomechanics
Rehabilitation
Soft body dynamics
Sports biomechanics
== History ==
=== Antiquity ===
Aristotle, a student of Plato, can be considered the first bio-mechanic because of his work with animal anatomy. Aristotle wrote the first book on the motion of animals, De Motu Animalium, or On the Movement of Animals. He saw animals' bodies as mechanical systems, and pursued questions such as the physiological difference between imagining performing an action and actually performing it. In another work, On the Parts of Animals, he provided an accurate description of how the ureter uses peristalsis to carry urine from the kidneys to the bladder.
With the rise of the Roman Empire, technology became more popular than philosophy and the next bio-mechanic arose. Galen (129 AD-210 AD), physician to Marcus Aurelius, wrote his famous work, On the Function of the Parts (about the human body). This would be the world's standard medical book for the next 1,400 years.
=== Renaissance ===
The next major biomechanic would not be around until the 1490s, with the studies of human anatomy and biomechanics by Leonardo da Vinci. He had a great understanding of science and mechanics and studied anatomy in a mechanics context: he analyzed muscle forces as acting along lines connecting origins and insertions, and studied joint function. Da Vinci is also known for mimicking some animal features in his machines. For example, he studied the flight of birds to find means by which humans could fly; and because horses were the principal source of mechanical power in that time, he studied their muscular systems to design machines that would better benefit from the forces applied by this animal.
In 1543, Galen's work On the Function of the Parts was challenged by Andreas Vesalius at the age of 29. Vesalius published his own work called On the Structure of the Human Body, in which he corrected many errors made by Galen; these corrections would not be globally accepted for many centuries. In the same year, Copernicus published, on his deathbed, On the Revolutions of the Heavenly Spheres, bringing a new desire to understand and learn about the world around people and how it works. This work not only revolutionized science and physics, but also the development of mechanics and later bio-mechanics.
Galileo Galilei, the father of mechanics and a part-time biomechanic, was born 21 years after the death of Copernicus. Over his years of science, Galileo brought many biomechanical aspects to light. For example, he discovered that "animals' masses increase disproportionately to their size, and their bones must consequently also disproportionately increase in girth, adapting to loadbearing rather than mere size. The bending strength of a tubular structure such as a bone is increased relative to its weight by making it hollow and increasing its diameter. Marine animals can be larger than terrestrial animals because the water's buoyancy relieves their tissues of weight."
Galileo Galilei was interested in the strength of bones and suggested that bones are hollow because this affords maximum strength with minimum weight. He noted that animals' bone masses increased disproportionately to their size. Consequently, bones must also increase disproportionately in girth rather than mere size. This is because the bending strength of a tubular structure (such as a bone) is much more efficient relative to its weight. Mason suggests that this insight was one of the first grasps of the principles of biological optimization.
In the 17th century, Descartes suggested a philosophic system whereby all living systems, including the human body (but not the soul), are simply machines ruled by the same mechanical laws, an idea that did much to promote and sustain biomechanical study.
=== Industrial era ===
The next major bio-mechanic, Giovanni Alfonso Borelli, embraced Descartes' mechanical philosophy and studied walking, running, jumping, the flight of birds, the swimming of fish, and even the piston action of the heart within a mechanical framework. He could determine the position of the human center of gravity, calculate and measure inspired and expired air volumes, and he showed that inspiration is muscle-driven and expiration is due to tissue elasticity.
Borelli was the first to understand that "the levers of the musculature system magnify motion rather than force, so that muscles must produce much larger forces than those resisting the motion". Influenced by the work of Galileo, whom he personally knew, he had an intuitive understanding of static equilibrium in various joints of the human body well before Newton published the laws of motion. His work is often considered the most important in the history of bio-mechanics because he made so many new discoveries that opened the way for the future generations to continue his work and studies.
It was many years after Borelli before the field of bio-mechanics made any major leaps. After that time, more and more scientists took to learning about the human body and its functions. There are not many notable scientists from the 19th or 20th century in bio-mechanics because the field is now far too vast to attribute any one thing to one person. However, the field continues to grow every year and to make advances in discovering more about the human body. Because the field became so popular, many institutions and labs have opened over the last century, and people continue doing research. With the creation of the American Society of Biomechanics in 1977, the field continues to grow and make many new discoveries.
In the 19th century Étienne-Jules Marey used cinematography to scientifically investigate locomotion. He opened the field of modern 'motion analysis' by being the first to correlate ground reaction forces with movement. In Germany, the brothers Ernst Heinrich Weber and Wilhelm Eduard Weber hypothesized a great deal about human gait, but it was Christian Wilhelm Braune who significantly advanced the science using recent advances in engineering mechanics. During the same period, the engineering mechanics of materials began to flourish in France and Germany under the demands of the Industrial Revolution. This led to the rebirth of bone biomechanics when the railroad engineer Karl Culmann and the anatomist Hermann von Meyer compared the stress patterns in a human femur with those in a similarly shaped crane. Inspired by this finding Julius Wolff proposed the famous Wolff's law of bone remodeling.
== Applications ==
The study of biomechanics ranges from the inner workings of a cell to the movement and development of limbs, to the mechanical properties of soft tissue, and bones. Some simple examples of biomechanics research include the investigation of the forces that act on limbs, the aerodynamics of bird and insect flight, the hydrodynamics of swimming in fish, and locomotion in general across all forms of life, from individual cells to whole organisms. With growing understanding of the physiological behavior of living tissues, researchers are able to advance the field of tissue engineering, as well as develop improved treatments for a wide array of pathologies including cancer.
Biomechanics is also applied to studying human musculoskeletal systems. Such research utilizes force platforms to study human ground reaction forces and infrared videography to capture the trajectories of markers attached to the human body to study human 3D motion. Research also applies electromyography to study muscle activation, investigating muscle responses to external forces and perturbations.
Biomechanics is widely used in the orthopedic industry to design orthopedic implants for human joints, dental parts, external fixations and other medical purposes. Biotribology, the study of the performance and function of biomaterials used for orthopedic implants, is a very important part of this work: it plays a vital role in improving the design and producing successful biomaterials for medical and clinical purposes. One such example is tissue-engineered cartilage. The dynamic loading of joints considered as impact is discussed in detail by Emanuel Willert.
It is also tied to the field of engineering, because it often uses traditional engineering sciences to analyze biological systems. Some simple applications of Newtonian mechanics and/or materials sciences can supply correct approximations to the mechanics of many biological systems. Applied mechanics, most notably mechanical engineering disciplines such as continuum mechanics, mechanism analysis, structural analysis, kinematics and dynamics play prominent roles in the study of biomechanics.
Usually biological systems are much more complex than man-built systems. Numerical methods are hence applied in almost every biomechanical study. Research is done in an iterative process of hypothesis and verification, including several steps of modeling, computer simulation and experimental measurements.
== See also ==
Biomechatronics
Biomedical engineering
Cardiovascular System Dynamics Society
Evolutionary physiology
Forensic biomechanics
International Society of Biomechanics
List of biofluid mechanics research groups
Mechanics of human sexuality
OpenSim (simulation toolkit)
Physical oncology
== References ==
== Further reading ==
== External links ==
Media related to Biomechanics at Wikimedia Commons
Biomechanics and Movement Science Listserver (Biomch-L)
Biomechanics Links
A Genealogy of Biomechanics | Wikipedia/Biomechanics |
Orbital mechanics or astrodynamics is the application of ballistics and celestial mechanics to rockets, satellites, and other spacecraft. The motion of these objects is usually calculated from Newton's laws of motion and the law of universal gravitation. Astrodynamics is a core discipline within space-mission design and control.
Celestial mechanics treats more broadly the orbital dynamics of systems under the influence of gravity, including both spacecraft and natural astronomical bodies such as star systems, planets, moons, and comets. Orbital mechanics focuses on spacecraft trajectories, including orbital maneuvers, orbital plane changes, and interplanetary transfers, and is used by mission planners to predict the results of propulsive maneuvers.
General relativity is a more exact theory than Newton's laws for calculating orbits, and it is sometimes necessary to use it for greater accuracy or in high-gravity situations (e.g. orbits near the Sun).
== History ==
Until the rise of space travel in the twentieth century, there was little distinction between orbital and celestial mechanics. At the time of Sputnik, the field was termed 'space dynamics'. The fundamental techniques, such as those used to solve the Keplerian problem (determining position as a function of time), are therefore the same in both fields. Furthermore, the history of the fields is almost entirely shared.
Johannes Kepler was the first to successfully model planetary orbits to a high degree of accuracy, publishing his laws in 1609. Isaac Newton published more general laws of celestial motion in the first edition of Philosophiæ Naturalis Principia Mathematica (1687), which gave a method for finding the orbit of a body following a parabolic path from three observations. This was used by Edmond Halley to establish the orbits of various comets, including that which bears his name. Newton's method of successive approximation was formalised into an analytic method by Leonhard Euler in 1744, whose work was in turn generalised to elliptical and hyperbolic orbits by Johann Lambert in 1761–1777.
Another milestone in orbit determination was Carl Friedrich Gauss's assistance in the "recovery" of the dwarf planet Ceres in 1801. Gauss's method was able to use just three observations (in the form of pairs of right ascension and declination), to find the six orbital elements that completely describe an orbit. The theory of orbit determination has subsequently been developed to the point where today it is applied in GPS receivers as well as the tracking and cataloguing of newly observed minor planets. Modern orbit determination and prediction are used to operate all types of satellites and space probes, as it is necessary to know their future positions to a high degree of accuracy.
Astrodynamics was developed by astronomer Samuel Herrick beginning in the 1930s. He consulted the rocket scientist Robert Goddard and was encouraged to continue his work on space navigation techniques, as Goddard believed they would be needed in the future. Numerical techniques of astrodynamics were coupled with new powerful computers in the 1960s, and humans were ready to travel to the Moon and return.
== Practical techniques ==
=== Rules of thumb ===
The following rules of thumb are useful for situations approximated by classical mechanics under the standard assumptions of astrodynamics outlined below. The specific example discussed is of a satellite orbiting a planet, but the rules of thumb could also apply to other situations, such as orbits of small bodies around a star such as the Sun.
Kepler's laws of planetary motion:
Orbits are elliptical, with the heavier body at one focus of the ellipse. A special case of this is a circular orbit (a circle is a special case of ellipse) with the planet at the center.
A line drawn from the planet to the satellite sweeps out equal areas in equal times no matter which portion of the orbit is measured.
The square of a satellite's orbital period is proportional to the cube of its average distance from the planet.
Without applying force (such as firing a rocket engine), the period and shape of the satellite's orbit will not change.
A satellite in a low orbit (or a low part of an elliptical orbit) moves more quickly with respect to the surface of the planet than a satellite in a higher orbit (or a high part of an elliptical orbit), due to the stronger gravitational attraction closer to the planet.
If thrust is applied at only one point in the satellite's orbit, it will return to that same point on each subsequent orbit, though the rest of its path will change. Thus one cannot move from one circular orbit to another with only one brief application of thrust.
From a circular orbit, thrust applied in a direction opposite to the satellite's motion changes the orbit to an elliptical one; the satellite will descend and reach the lowest orbital point (the periapse) at 180 degrees away from the firing point; then it will ascend back. The period of the resultant orbit will be less than that of the original circular orbit. Thrust applied in the direction of the satellite's motion creates an elliptical orbit with its highest point (apoapse) 180 degrees away from the firing point. The period of the resultant orbit will be longer than that of the original circular orbit.
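The third rule of thumb above (period squared proportional to average distance cubed) can be sketched numerically; `period_ratio` is a name chosen here for illustration, not a standard function:

```python
# Kepler's third law: T^2 ∝ a^3 implies T2/T1 = (a2/a1)**1.5, so doubling a
# satellite's average distance lengthens its period by a factor of about 2.83.
def period_ratio(a1, a2):
    """Ratio T2/T1 of orbital periods for semi-major axes a1 and a2 (same units)."""
    return (a2 / a1) ** 1.5
```

For example, `period_ratio(1.0, 4.0)` gives 8.0: quadrupling the distance multiplies the period by eight.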
The consequences of the rules of orbital mechanics are sometimes counter-intuitive. For example, if two spacecraft are in the same circular orbit and wish to dock, the trailing craft cannot simply fire its engines to accelerate towards the leading craft. This will change the shape of its orbit, causing it to gain altitude and slow down relative to the leading craft, thus moving away from the target. The space rendezvous before docking normally takes multiple precisely calculated engine firings over multiple orbital periods, requiring hours or even days to complete.
To the extent that the standard assumptions of astrodynamics do not hold, actual trajectories will vary from those calculated. For example, simple atmospheric drag is another complicating factor for objects in low Earth orbit.
These rules of thumb are decidedly inaccurate when describing two or more bodies of similar mass, such as a binary star system (see n-body problem). Celestial mechanics uses more general rules applicable to a wider variety of situations. Kepler's laws of planetary motion, which can be mathematically derived from Newton's laws, hold strictly only in describing the motion of two gravitating bodies in the absence of non-gravitational forces; they also describe parabolic and hyperbolic trajectories. In the close proximity of large objects like stars the differences between classical mechanics and general relativity also become important.
== Laws of astrodynamics ==
The fundamental laws of astrodynamics are Newton's law of universal gravitation and Newton's laws of motion, while the fundamental mathematical tool is differential calculus.
In a Newtonian framework, the laws governing orbits and trajectories are in principle time-symmetric.
Standard assumptions in astrodynamics include non-interference from outside bodies, negligible mass for one of the bodies, and negligible other forces (such as from the solar wind, atmospheric drag, etc.). More accurate calculations can be made without these simplifying assumptions, but they are more complicated. The increased accuracy often does not make enough of a difference in the calculation to be worthwhile.
Kepler's laws of planetary motion may be derived from Newton's laws, when it is assumed that the orbiting body is subject only to the gravitational force of the central attractor. When an engine thrust or propulsive force is present, Newton's laws still apply, but Kepler's laws are invalidated. When the thrust stops, the resulting orbit will be different but will once again be described by Kepler's laws which have been set out above. The three laws are:
The orbit of every planet is an ellipse with the Sun at one of the foci.
A line joining a planet and the Sun sweeps out equal areas during equal intervals of time.
The squares of the orbital periods of planets are directly proportional to the cubes of the semi-major axis of the orbits.
=== Escape velocity ===
The formula for an escape velocity is derived as follows. The specific energy (energy per unit mass) of any space vehicle is composed of two components, the specific potential energy and the specific kinetic energy. The specific potential energy associated with a planet of mass M is given by
{\displaystyle \epsilon _{p}=-{\frac {GM}{r}}\,}
where G is the gravitational constant and r is the distance between the two bodies;
while the specific kinetic energy of an object is given by
{\displaystyle \epsilon _{k}={\frac {v^{2}}{2}}\,}
where v is its velocity;
and so the total specific orbital energy is
{\displaystyle \epsilon =\epsilon _{k}+\epsilon _{p}={\frac {v^{2}}{2}}-{\frac {GM}{r}}\,}
Since energy is conserved, ϵ cannot depend on the distance, r, from the center of the central body to the space vehicle in question, i.e. v must vary with r to keep the specific orbital energy constant. Therefore, the object can reach infinite r only if this quantity is nonnegative, which implies
{\displaystyle v\geq {\sqrt {\frac {2GM}{r}}}.}
The escape velocity from the Earth's surface is about 11 km/s, but that is insufficient to send the body an infinite distance because of the gravitational pull of the Sun. To escape the Solar System from a location at a distance from the Sun equal to the distance Sun–Earth, but not close to the Earth, requires around 42 km/s velocity, but there will be "partial credit" for the Earth's orbital velocity for spacecraft launched from Earth, if their further acceleration (due to the propulsion system) carries them in the same direction as Earth travels in its orbit.
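As a sketch, the quoted surface figure follows directly from v = √(2GM/r) using standard approximate constants for the Earth (the constant values below are well-known physical data, not taken from the text):

```python
import math

G = 6.6743e-11      # gravitational constant, m^3/(kg*s^2)
M_EARTH = 5.972e24  # mass of the Earth, kg
R_EARTH = 6.371e6   # mean radius of the Earth, m

def escape_velocity(M, r):
    """Minimum speed to reach infinite distance from a body of mass M, starting at radius r."""
    return math.sqrt(2 * G * M / r)

v_esc = escape_velocity(M_EARTH, R_EARTH)  # about 11 km/s at the surface
```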
=== Formulae for free orbits ===
Orbits are conic sections, so the formula for the distance of a body for a given angle corresponds to the formula for that curve in polar coordinates, which is:
{\displaystyle r={\frac {p}{1+e\cos \theta }}}
{\displaystyle \mu =G(m_{1}+m_{2})\,}
{\displaystyle p=h^{2}/\mu \,}
μ is called the gravitational parameter. m₁ and m₂ are the masses of objects 1 and 2, and h is the specific angular momentum of object 2 with respect to object 1. The parameter θ is known as the true anomaly, p is the semi-latus rectum, while e is the orbital eccentricity, all obtainable from the various forms of the six independent orbital elements.
=== Circular orbits ===
All bounded orbits where the gravity of a central body dominates are elliptical in nature. A special case of this is the circular orbit, which is an ellipse of zero eccentricity. The formula for the velocity of a body in a circular orbit at distance r from the center of gravity of mass M can be derived as follows:
Centrifugal acceleration matches the acceleration due to gravity.
So,
{\displaystyle {\frac {v^{2}}{r}}={\frac {GM}{r^{2}}}}
Therefore,
{\displaystyle \ v={\sqrt {{\frac {GM}{r}}\ }}}
where G is the gravitational constant, equal to 6.6743 × 10⁻¹¹ m³/(kg·s²). To properly use this formula, the units must be consistent; for example, M must be in kilograms, and r must be in meters. The answer will be in meters per second.
The quantity GM is often termed the standard gravitational parameter, which has a different value for every planet or moon in the Solar System.
Once the circular orbital velocity is known, the escape velocity is easily found by multiplying by √2:
{\displaystyle \ v={\sqrt {2}}{\sqrt {{\frac {GM}{r}}\ }}={\sqrt {{\frac {2GM}{r}}\ }}.}
To escape from gravity, the kinetic energy must at least match the negative potential energy. Therefore,
{\displaystyle {\frac {1}{2}}mv^{2}={\frac {GMm}{r}}}
{\displaystyle v={\sqrt {{\frac {2GM}{r}}\ }}.}
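The √2 relationship between circular and escape velocity derived above can be checked numerically; a minimal sketch in which the sample mass and radius are arbitrary illustrative values:

```python
import math

G = 6.6743e-11  # gravitational constant, m^3/(kg*s^2)

def circular_velocity(M, r):
    return math.sqrt(G * M / r)       # v_circ = sqrt(GM/r)

def escape_velocity(M, r):
    return math.sqrt(2 * G * M / r)   # v_esc = sqrt(2GM/r)

# The ratio is sqrt(2) regardless of the mass and radius chosen:
ratio = escape_velocity(5.972e24, 7.0e6) / circular_velocity(5.972e24, 7.0e6)
```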
=== Elliptical orbits ===
If {\displaystyle 0<e<1}, then the denominator of the equation of free orbits varies with the true anomaly {\displaystyle \theta }, but remains positive, never becoming zero. Therefore, the relative position vector remains bounded, having its smallest magnitude at periapsis {\displaystyle r_{p}}, which is given by:
{\displaystyle r_{p}={\frac {p}{1+e}}}
The maximum value of {\displaystyle r} is reached when {\displaystyle \theta =180^{\circ }}. This point is called the apoapsis, and its radial coordinate, denoted {\displaystyle r_{a}}, is
{\displaystyle r_{a}={\frac {p}{1-e}}}
Let {\displaystyle 2a} be the distance measured along the apse line from periapsis {\displaystyle P} to apoapsis {\displaystyle A}, as given by the equation below:
{\displaystyle 2a=r_{p}+r_{a}}
Substituting the equations above, we get:
{\displaystyle a={\frac {p}{1-e^{2}}}}
Here {\displaystyle a} is the semimajor axis of the ellipse. Solving for {\displaystyle p}, and substituting the result in the conic section curve formula above, we get:
{\displaystyle r={\frac {a(1-e^{2})}{1+e\cos \theta }}}
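The geometric relations above are easy to verify in code. This minimal sketch computes periapsis, apoapsis, and semimajor axis from the semi-latus rectum and eccentricity, plus the orbit equation in terms of {\displaystyle a}; the numeric inputs are arbitrary illustrative values.

```python
import math

def ellipse_geometry(p, e):
    """Periapsis, apoapsis, and semimajor axis from the semi-latus
    rectum p and eccentricity e (0 < e < 1)."""
    r_p = p / (1.0 + e)        # r_p = p / (1 + e)
    r_a = p / (1.0 - e)        # r_a = p / (1 - e)
    a = p / (1.0 - e**2)       # a = p / (1 - e^2)
    return r_p, r_a, a

def radius_at_true_anomaly(a, e, theta):
    """Orbit equation r = a(1 - e^2) / (1 + e*cos(theta))."""
    return a * (1.0 - e**2) / (1.0 + e * math.cos(theta))

# Arbitrary example values (p in any consistent length unit)
r_p, r_a, a = ellipse_geometry(p=10000.0, e=0.3)
```

The identities 2a = r_p + r_a and r(0°) = r_p, r(180°) = r_a follow directly from the formulas in the text.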
==== Orbital period ====
Under standard assumptions the orbital period ({\displaystyle T}) of a body traveling along an elliptic orbit can be computed as:
{\displaystyle T=2\pi {\sqrt {a^{3} \over {\mu }}}}
where:
{\displaystyle \mu } is the standard gravitational parameter,
{\displaystyle a} is the length of the semi-major axis.
Conclusions:
The orbital period is equal to that for a circular orbit with the orbit radius equal to the semi-major axis ({\displaystyle a}),
For a given semi-major axis the orbital period does not depend on the eccentricity (See also: Kepler's third law).
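As a sanity check on the period formula, the sketch below evaluates it for a geostationary semi-major axis; both the GM value and the 42,164 km radius are assumed well-known constants rather than values from the text. The result should be close to one sidereal day.

```python
import math

MU_EARTH = 3.986004418e14  # assumed GM of Earth, m^3/s^2

def orbital_period(mu, a):
    """T = 2*pi*sqrt(a^3 / mu); note it depends only on the
    semi-major axis, not on the eccentricity."""
    return 2.0 * math.pi * math.sqrt(a**3 / mu)

# Geostationary orbit: a ~ 42,164 km
T_geo = orbital_period(MU_EARTH, 4.2164e7)
```

Because eccentricity does not appear in the formula, any elliptic orbit sharing this semi-major axis has the same period, which is the content of Kepler's third law.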
==== Velocity ====
Under standard assumptions the orbital speed ({\displaystyle v}) of a body traveling along an elliptic orbit can be computed from the Vis-viva equation as:
{\displaystyle v={\sqrt {\mu \left({2 \over {r}}-{1 \over {a}}\right)}}}
where:
{\displaystyle \mu } is the standard gravitational parameter,
{\displaystyle r} is the distance between the orbiting bodies,
{\displaystyle a} is the length of the semi-major axis.
The velocity equation for a hyperbolic trajectory is
{\displaystyle v={\sqrt {\mu \left({2 \over {r}}+\left\vert {1 \over {a}}\right\vert \right)}}}.
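A single vis-viva routine covers both cases, since taking {\displaystyle a<0} for a hyperbola turns 2/r − 1/a into 2/r + |1/a| automatically. The GM value below is an assumed Earth constant for illustration.

```python
import math

MU = 3.986004418e14  # assumed GM of Earth, m^3/s^2

def visviva_speed(mu, r, a):
    """Orbital speed from the vis-viva equation v = sqrt(mu*(2/r - 1/a)).
    For a hyperbolic trajectory pass a < 0."""
    return math.sqrt(mu * (2.0 / r - 1.0 / a))

# Circular orbit: r == a, so vis-viva reduces to sqrt(mu/r)
v_circ = visviva_speed(MU, 7.0e6, 7.0e6)
# Hyperbolic trajectory far from the body: speed approaches sqrt(mu/|a|)
v_far = visviva_speed(MU, 1.0e12, -7.0e6)
```

At very large r the hyperbolic speed tends to the hyperbolic excess velocity discussed later in this section.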
==== Energy ====
Under standard assumptions, the specific orbital energy ({\displaystyle \epsilon }) of an elliptic orbit is negative, and the orbital energy conservation equation (the Vis-viva equation) for this orbit can take the form:
{\displaystyle {v^{2} \over {2}}-{\mu \over {r}}=-{\mu \over {2a}}=\epsilon <0}
where:
{\displaystyle v} is the speed of the orbiting body,
{\displaystyle r} is the distance of the orbiting body from the center of mass of the central body,
{\displaystyle a} is the semi-major axis,
{\displaystyle \mu } is the standard gravitational parameter.
Conclusions:
For a given semi-major axis the specific orbital energy is independent of the eccentricity.
Using the virial theorem we find:
the time-average of the specific potential energy is equal to {\displaystyle 2\epsilon }
the time-average of {\displaystyle r^{-1}} is {\displaystyle a^{-1}}
the time-average of the specific kinetic energy is equal to {\displaystyle -\epsilon }
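The claim that the time-average of {\displaystyle r^{-1}} equals {\displaystyle a^{-1}} can be checked numerically. The sketch below uses two standard Keplerian relations not stated in the text and assumed here: r = a(1 − e·cos E) in terms of the eccentric anomaly, and dt proportional to (1 − e·cos E) dE, which follows from differentiating Kepler's equation.

```python
import math

def time_average_inverse_r(a, e, n=100000):
    """Time-average of 1/r over one orbit, computed as a weighted
    average over the eccentric anomaly E with dt-weight (1 - e*cos(E))."""
    num = 0.0
    den = 0.0
    for k in range(n):
        E = 2.0 * math.pi * (k + 0.5) / n  # midpoint grid over one period
        w = 1.0 - e * math.cos(E)          # dt weight from Kepler's equation
        r = a * w                          # r = a*(1 - e*cos(E))
        num += (1.0 / r) * w
        den += w
    return num / den

avg = time_average_inverse_r(a=1.0, e=0.6)
```

The weight cancels 1/r exactly, so the average is 1/a regardless of eccentricity, as the virial-theorem result requires.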
=== Parabolic orbits ===
If the eccentricity equals 1, then the orbit equation becomes:
{\displaystyle r={{h^{2}} \over {\mu }}{{1} \over {1+\cos \theta }}}
where:
{\displaystyle r} is the radial distance of the orbiting body from the mass center of the central body,
{\displaystyle h} is the specific angular momentum of the orbiting body,
{\displaystyle \theta } is the true anomaly of the orbiting body,
{\displaystyle \mu } is the standard gravitational parameter.
As the true anomaly θ approaches 180°, the denominator approaches zero, so that r tends towards infinity. Hence, the energy of the trajectory for which e=1 is zero, and is given by:
{\displaystyle \epsilon ={v^{2} \over 2}-{\mu \over {r}}=0}
where:
{\displaystyle v} is the speed of the orbiting body.
In other words, the speed anywhere on a parabolic path is:
{\displaystyle v={\sqrt {2\mu \over {r}}}}
=== Hyperbolic orbits ===
If {\displaystyle e>1}, the orbit formula,
{\displaystyle r={{h^{2}} \over {\mu }}{{1} \over {1+e\cos \theta }}}
describes the geometry of the hyperbolic orbit. The system consists of two symmetric curves. The orbiting body occupies one of them; the other one is its empty mathematical image. Clearly, the denominator of the equation above goes to zero when
{\displaystyle \cos \theta =-1/e}. We denote this value of true anomaly
{\displaystyle \theta _{\infty }=\cos ^{-1}\left(-{\frac {1}{e}}\right)}
since the radial distance approaches infinity as the true anomaly approaches {\displaystyle \theta _{\infty }}, known as the true anomaly of the asymptote. Observe that {\displaystyle \theta _{\infty }} lies between 90° and 180°. From the trigonometric identity
{\displaystyle \sin ^{2}\theta +\cos ^{2}\theta =1}
it follows that:
{\displaystyle \sin \theta _{\infty }={\frac {1}{e}}{\sqrt {e^{2}-1}}}
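The asymptote angle and its sine identity translate directly into code; the eccentricity value below is an arbitrary example.

```python
import math

def true_anomaly_of_asymptote(e):
    """theta_inf = arccos(-1/e) for a hyperbolic orbit (e > 1), radians."""
    if e <= 1.0:
        raise ValueError("asymptote exists only for e > 1")
    return math.acos(-1.0 / e)

theta_inf = true_anomaly_of_asymptote(1.5)
```

As the text observes, the result always lies between 90° and 180°, and its sine equals sqrt(e² − 1)/e.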
==== Energy ====
Under standard assumptions, the specific orbital energy ({\displaystyle \epsilon }) of a hyperbolic trajectory is greater than zero, and the orbital energy conservation equation for this kind of trajectory takes the form:
{\displaystyle \epsilon ={v^{2} \over 2}-{\mu \over {r}}={\mu \over {-2a}}}
where:
{\displaystyle v} is the orbital velocity of the orbiting body,
{\displaystyle r} is the radial distance of the orbiting body from the central body,
{\displaystyle a} is the negative semi-major axis of the orbit's hyperbola,
{\displaystyle \mu } is the standard gravitational parameter.
==== Hyperbolic excess velocity ====
Under standard assumptions a body traveling along a hyperbolic trajectory will attain, as {\displaystyle r} tends to infinity, an orbital velocity called the hyperbolic excess velocity ({\displaystyle v_{\infty }}) that can be computed as:
{\displaystyle v_{\infty }={\sqrt {\mu \over {-a}}}\,\!}
where:
{\displaystyle \mu } is the standard gravitational parameter,
{\displaystyle a} is the negative semi-major axis of the orbit's hyperbola.
The hyperbolic excess velocity is related to the specific orbital energy or characteristic energy by
{\displaystyle 2\epsilon =C_{3}=v_{\infty }^{2}\,\!}
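The relation between v∞, the characteristic energy C3, and the specific energy ε can be verified with a few lines; the GM value and the hyperbolic semi-major axis below are assumed example numbers.

```python
import math

MU_EARTH = 3.986004418e14  # assumed GM of Earth, m^3/s^2

def hyperbolic_excess_velocity(mu, a):
    """v_inf = sqrt(mu / -a); a is negative for a hyperbolic trajectory."""
    return math.sqrt(mu / -a)

def characteristic_energy(mu, a):
    """C3 = v_inf^2 = 2*epsilon, with epsilon = mu / (-2a)."""
    return mu / -a

a = -2.0e7  # assumed hyperbolic semi-major axis (negative), m
v_inf = hyperbolic_excess_velocity(MU_EARTH, a)
C3 = characteristic_energy(MU_EARTH, a)
```

Squaring v∞ recovers C3, and halving C3 gives the specific orbital energy of the Energy subsection above.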
== Calculating trajectories ==
=== Kepler's equation ===
One approach to calculating orbits (mainly used historically) is to use Kepler's equation:
{\displaystyle M=E-\epsilon \cdot \sin E}
where M is the mean anomaly, E is the eccentric anomaly, and {\displaystyle \epsilon } is the eccentricity.
With Kepler's formula, finding the time-of-flight to reach an angle (true anomaly) of {\displaystyle \theta } from periapsis is broken into two steps:
Compute the eccentric anomaly {\displaystyle E} from the true anomaly {\displaystyle \theta }
Compute the time-of-flight {\displaystyle t} from the eccentric anomaly {\displaystyle E}
Finding the eccentric anomaly at a given time (the inverse problem) is more difficult. Kepler's equation is transcendental in {\displaystyle E}, meaning it cannot be solved for {\displaystyle E} algebraically. It can, however, be solved for {\displaystyle E} analytically by inversion.
A solution of Kepler's equation, valid for all real values of {\displaystyle \textstyle \epsilon }, is:
{\displaystyle E={\begin{cases}\displaystyle \sum _{n=1}^{\infty }{\frac {M^{\frac {n}{3}}}{n!}}\lim _{\theta \to 0}\left({\frac {\mathrm {d} ^{\,n-1}}{\mathrm {d} \theta ^{\,n-1}}}\left[\left({\frac {\theta }{\sqrt[{3}]{\theta -\sin(\theta )}}}\right)^{n}\right]\right),&\epsilon =1\\\displaystyle \sum _{n=1}^{\infty }{\frac {M^{n}}{n!}}\lim _{\theta \to 0}\left({\frac {\mathrm {d} ^{\,n-1}}{\mathrm {d} \theta ^{\,n-1}}}\left[\left({\frac {\theta }{\theta -\epsilon \cdot \sin(\theta )}}\right)^{n}\right]\right),&\epsilon \neq 1\end{cases}}}
Evaluating this yields:
{\displaystyle E={\begin{cases}\displaystyle x+{\frac {1}{60}}x^{3}+{\frac {1}{1400}}x^{5}+{\frac {1}{25200}}x^{7}+{\frac {43}{17248000}}x^{9}+{\frac {1213}{7207200000}}x^{11}+{\frac {151439}{12713500800000}}x^{13}\cdots \ |\ x=(6M)^{\frac {1}{3}},&\epsilon =1\\\\\displaystyle {\frac {1}{1-\epsilon }}M-{\frac {\epsilon }{(1-\epsilon )^{4}}}{\frac {M^{3}}{3!}}+{\frac {(9\epsilon ^{2}+\epsilon )}{(1-\epsilon )^{7}}}{\frac {M^{5}}{5!}}-{\frac {(225\epsilon ^{3}+54\epsilon ^{2}+\epsilon )}{(1-\epsilon )^{10}}}{\frac {M^{7}}{7!}}+{\frac {(11025\epsilon ^{4}+4131\epsilon ^{3}+243\epsilon ^{2}+\epsilon )}{(1-\epsilon )^{13}}}{\frac {M^{9}}{9!}}\cdots ,&\epsilon \neq 1\end{cases}}}
Alternatively, Kepler's Equation can be solved numerically. First one must guess a value of
E
{\displaystyle E}
and solve for time-of-flight; then adjust
E
{\displaystyle E}
as necessary to bring the computed time-of-flight closer to the desired value until the required precision is achieved. Usually, Newton's method is used to achieve relatively fast convergence.
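The Newton iteration described above can be sketched in a few lines: each step corrects E using the residual of Kepler's equation and its derivative 1 − e·cos E. The starting-guess heuristic is a common convention, not something prescribed by the text.

```python
import math

def kepler_E(M, e, tol=1e-12, max_iter=50):
    """Solve Kepler's equation M = E - e*sin(E) for E by Newton's method."""
    E = M if e < 0.8 else math.pi  # common starting guess for high e
    for _ in range(max_iter):
        f = E - e * math.sin(E) - M      # residual of Kepler's equation
        fp = 1.0 - e * math.cos(E)       # derivative d f / d E
        dE = f / fp
        E -= dE
        if abs(dE) < tol:
            break
    return E

E = kepler_E(M=2.0, e=0.5)
```

Substituting the result back into E − e·sin E recovers the mean anomaly to machine precision; for e near 1 the derivative can approach zero near E = 0, which is one face of the convergence difficulty discussed next.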
The main difficulty with this approach is that it can take prohibitively long to converge for extreme elliptical orbits. For near-parabolic orbits, the eccentricity {\displaystyle \epsilon } is nearly 1, and substituting {\displaystyle e=1} into the formula for mean anomaly, {\displaystyle E-\sin E}, we find ourselves subtracting two nearly equal values, so accuracy suffers. For near-circular orbits, it is hard to find the periapsis in the first place (and truly circular orbits have no periapsis at all). Furthermore, the equation was derived on the assumption of an elliptical orbit, and so it does not hold for parabolic or hyperbolic orbits. These difficulties are what led to the development of the universal variable formulation, described below.
=== Conic orbits ===
For simple procedures, such as computing the delta-v for coplanar transfer ellipses, traditional approaches are fairly effective. Others, such as time-of-flight are far more complicated, especially for near-circular and hyperbolic orbits.
=== The patched conic approximation ===
The Hohmann transfer orbit alone is a poor approximation for interplanetary trajectories because it neglects the planets' own gravity. Planetary gravity dominates the behavior of the spacecraft in the vicinity of a planet and in most cases Hohmann severely overestimates delta-v, and produces highly inaccurate prescriptions for burn timings. A relatively simple way to get a first-order approximation of delta-v is based on the 'Patched Conic Approximation' technique. One must choose the one dominant gravitating body in each region of space through which the trajectory will pass, and to model only that body's effects in that region. For instance, on a trajectory from the Earth to Mars, one would begin by considering only the Earth's gravity until the trajectory reaches a distance where the Earth's gravity no longer dominates that of the Sun. The spacecraft would be given escape velocity to send it on its way to interplanetary space. Next, one would consider only the Sun's gravity until the trajectory reaches the neighborhood of Mars. During this stage, the transfer orbit model is appropriate. Finally, only Mars's gravity is considered during the final portion of the trajectory where Mars's gravity dominates the spacecraft's behavior. The spacecraft would approach Mars on a hyperbolic orbit, and a final retrograde burn would slow the spacecraft enough to be captured by Mars. Friedrich Zander was one of the first to apply the patched-conics approach for astrodynamics purposes, when proposing the use of intermediary bodies' gravity for interplanetary travels, in what is known today as a gravity assist.
The size of these "neighborhoods" (or spheres of influence) varies with the radius {\displaystyle r_{SOI}}:
{\displaystyle r_{SOI}=a_{p}\left({\frac {m_{p}}{m_{s}}}\right)^{2/5}}
where {\displaystyle a_{p}} is the semimajor axis of the planet's orbit relative to the Sun; {\displaystyle m_{p}} and {\displaystyle m_{s}} are the masses of the planet and Sun, respectively.
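The sphere-of-influence formula is a one-liner; the sketch below evaluates it for Earth, using assumed well-known values for the Earth-Sun distance and the two masses.

```python
def sphere_of_influence(a_p, m_p, m_s):
    """r_SOI = a_p * (m_p / m_s)**(2/5)."""
    return a_p * (m_p / m_s) ** 0.4

# Assumed values: a_p = 1 AU in meters, masses of Earth and Sun in kg
r_soi_earth = sphere_of_influence(1.496e11, 5.972e24, 1.989e30)
```

This gives a radius on the order of 0.9 million km, well beyond the Moon's orbit, which is why lunar trajectories are usually patched inside Earth's sphere of influence.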
This simplification is sufficient to compute rough estimates of fuel requirements, and rough time-of-flight estimates, but it is not generally accurate enough to guide a spacecraft to its destination. For that, numerical methods are required.
=== The universal variable formulation ===
To address computational shortcomings of traditional approaches for solving the 2-body problem, the universal variable formulation was developed. It works equally well for the circular, elliptical, parabolic, and hyperbolic cases, the differential equations converging well when integrated for any orbit. It also generalizes well to problems incorporating perturbation theory.
=== Perturbations ===
The universal variable formulation works well with the variation of parameters technique, except now, instead of the six Keplerian orbital elements, we use a different set of orbital elements: namely, the satellite's initial position and velocity vectors {\displaystyle x_{0}} and {\displaystyle v_{0}} at a given epoch {\displaystyle t=0}. In a two-body simulation, these elements are sufficient to compute the satellite's position and velocity at any time in the future, using the universal variable formulation. Conversely, at any moment in the satellite's orbit, we can measure its position and velocity, and then use the universal variable approach to determine what its initial position and velocity would have been at the epoch. In perfect two-body motion, these orbital elements would be invariant (just like the Keplerian elements would be).
However, perturbations cause the orbital elements to change over time. Hence, the position element is written as {\displaystyle x_{0}(t)} and the velocity element as {\displaystyle v_{0}(t)}, indicating that they vary with time. The technique to compute the effect of perturbations becomes one of finding expressions, either exact or approximate, for the functions {\displaystyle x_{0}(t)} and {\displaystyle v_{0}(t)}.
The following are some effects which make real orbits differ from the simple models based on a spherical Earth. Most of them can be handled on short timescales (perhaps less than a few thousand orbits) by perturbation theory because they are small relative to the corresponding two-body effects.
Equatorial bulges cause precession of the node and the perigee
Tesseral harmonics of the gravity field introduce additional perturbations
Lunar and solar gravity perturbations alter the orbits
Atmospheric drag reduces the semi-major axis unless make-up thrust is used
Over very long timescales (perhaps millions of orbits), even small perturbations can dominate, and the behavior can become chaotic. On the other hand, the various perturbations can be orchestrated by clever astrodynamicists to assist with orbit maintenance tasks, such as station-keeping, ground track maintenance or adjustment, or phasing of perigee to cover selected targets at low altitude.
== Orbital maneuver ==
In spaceflight, an orbital maneuver is the use of propulsion systems to change the orbit of a spacecraft. For spacecraft far from Earth—for example those in orbits around the Sun—an orbital maneuver is called a deep-space maneuver (DSM).
=== Orbital transfer ===
Transfer orbits are usually elliptical orbits that allow spacecraft to move from one (usually substantially circular) orbit to another. Usually they require a burn at the start, a burn at the end, and sometimes one or more burns in the middle.
The Hohmann transfer orbit requires a minimal delta-v.
A bi-elliptic transfer can require less energy than the Hohmann transfer, if the ratio of orbit radii is 11.94 or greater, but comes at the cost of increased trip time over the Hohmann transfer.
Faster transfers may use any orbit that intersects both the original and destination orbits, at the cost of higher delta-v.
Using low thrust engines (such as electrical propulsion), if the initial orbit is supersynchronous to the final desired circular orbit then the optimal transfer orbit is achieved by thrusting continuously in the direction of the velocity at apogee. This method however takes much longer due to the low thrust.
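For the Hohmann case above, the total delta-v follows from the vis-viva equation applied at both ends of the transfer ellipse. These are standard textbook formulas rather than expressions given in this article, and the GM and radii below are assumed example values for a LEO-to-geostationary-radius transfer.

```python
import math

MU_EARTH = 3.986004418e14  # assumed GM of Earth, m^3/s^2

def hohmann_delta_v(mu, r1, r2):
    """Total delta-v for a Hohmann transfer between circular, coplanar
    orbits of radii r1 and r2, via vis-viva on the transfer ellipse."""
    a_t = 0.5 * (r1 + r2)  # semi-major axis of the transfer ellipse
    # Burn 1: circular speed at r1 -> transfer-ellipse speed at periapsis
    dv1 = abs(math.sqrt(mu * (2.0 / r1 - 1.0 / a_t)) - math.sqrt(mu / r1))
    # Burn 2: transfer-ellipse speed at apoapsis -> circular speed at r2
    dv2 = abs(math.sqrt(mu / r2) - math.sqrt(mu * (2.0 / r2 - 1.0 / a_t)))
    return dv1 + dv2

# Assumed example: ~300 km LEO (r ~ 6,678 km) to geostationary radius
dv = hohmann_delta_v(MU_EARTH, 6.678e6, 4.2164e7)
```

With these inputs the total comes out near 3.9 km/s, the familiar order of magnitude for this transfer.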
For the case of orbital transfer between non-coplanar orbits, the change-of-plane thrust must be made at the point where the orbital planes intersect (the "node"). As the objective is to change the direction of the velocity vector by an angle equal to the angle between the planes, almost all of this thrust should be made when the spacecraft is at the node near the apoapse, when the magnitude of the velocity vector is at its lowest. However, a small fraction of the orbital inclination change can be made at the node near the periapse, by slightly angling the transfer orbit injection thrust in the direction of the desired inclination change. This works because the cosine of a small angle is very nearly one, resulting in the small plane change being effectively "free" despite the high velocity of the spacecraft near periapse, as the Oberth Effect due to the increased, slightly angled thrust exceeds the cost of the thrust in the orbit-normal axis.
=== Gravity assist and the Oberth effect ===
In a gravity assist, a spacecraft swings by a planet and leaves in a different direction, at a different speed. This is useful to speed or slow a spacecraft instead of carrying more fuel.
This maneuver can be approximated by an elastic collision at large distances, though the flyby does not involve any physical contact. Due to Newton's third law (equal and opposite reaction), any momentum gained by a spacecraft must be lost by the planet, or vice versa. However, because the planet is much, much more massive than the spacecraft, the effect on the planet's orbit is negligible.
The Oberth effect can be employed, particularly during a gravity assist operation. This effect is that use of a propulsion system works better at high speeds, and hence course changes are best done when close to a gravitating body; this can multiply the effective delta-v.
=== Interplanetary Transport Network and fuzzy orbits ===
It is now possible to use computers to search for routes using the nonlinearities in the gravity of the planets and moons of the Solar System. For example, it is possible to plot an orbit from high Earth orbit to Mars, passing close to one of the Earth's trojan points. Collectively referred to as the Interplanetary Transport Network, these highly perturbative, even chaotic, orbital trajectories in principle need no fuel beyond that needed to reach the Lagrange point (in practice keeping to the trajectory requires some course corrections). The biggest problem with them is they can be exceedingly slow, taking many years. In addition launch windows can be very far apart.
They have, however, been employed on projects such as Genesis. This spacecraft visited the Earth-Sun L1 point and returned using very little propellant.
== See also ==
Celestial mechanics
Chaos theory
Kepler orbit
Lagrange point
Mechanical engineering
N-body problem
Roche limit
Spacecraft propulsion
Universal variable formulation
== References ==
== Further reading ==
Lynnane George. Introduction to Orbital Mechanics.
Sellers, Jerry J.; Astore, William J.; Giffen, Robert B.; Larson, Wiley J. (2004). Kirkpatrick, Douglas H. (ed.). Understanding Space: An Introduction to Astronautics (2 ed.). McGraw Hill. p. 228. ISBN 0-07-242468-0.
"Air University Space Primer, Chapter 8 - Orbital Mechanics" (PDF). USAF. Archived from the original (PDF) on 2013-02-14. Retrieved 2007-10-13.
Bate, R.R.; Mueller, D.D.; White, J.E. (1971). Fundamentals of Astrodynamics. Dover Publications, New York. ISBN 978-0-486-60061-1.
Vallado, D. A. (2001). Fundamentals of Astrodynamics and Applications (2nd ed.). Springer. ISBN 978-0-7923-6903-5.
Battin, R.H. (1999). An Introduction to the Mathematics and Methods of Astrodynamics. American Institute of Aeronautics & Ast, Washington, D.C. ISBN 978-1-56347-342-5.
Chobotov, V.A., ed. (2002). Orbital Mechanics (3rd ed.). American Institute of Aeronautics & Ast, Washington, D.C. ISBN 978-1-56347-537-5.
Herrick, S. (1971). Astrodynamics: Orbit Determination, Space Navigation, Celestial Mechanics, Volume 1. Van Nostrand Reinhold, London. ISBN 978-0-442-03370-5.
Herrick, S. (1972). Astrodynamics: Orbit Correction, Perturbation Theory, Integration, Volume 2. Van Nostrand Reinhold, London. ISBN 978-0-442-03371-2.
Kaplan, M.H. (1976). Modern Spacecraft Dynamics and Controls. Wiley, New York. ISBN 978-0-471-45703-9.
Tom Logsdon (1997). Orbital Mechanics. Wiley-Interscience, New York. ISBN 978-0-471-14636-0.
John E. Prussing & Bruce A. Conway (1993). Orbital Mechanics. Oxford University Press, New York. ISBN 978-0-19-507834-3.
M.J. Sidi (2000). Spacecraft Dynamics and Control. Cambridge University Press, New York. ISBN 978-0-521-78780-2.
W.E. Wiesel (1996). Spaceflight Dynamics (2nd ed.). McGraw-Hill, New York. ISBN 978-0-07-070110-6.
J.P. Vinti (1998). Orbital and Celestial Mechanics. American Institute of Aeronautics & Ast, Reston, Virginia. ISBN 978-1-56347-256-5.
P. Gurfil (2006). Modern Astrodynamics. Butterworth-Heinemann. ISBN 978-0-12-373562-1.
== External links ==
ORBITAL MECHANICS (Rocket and Space Technology)
Java Astrodynamics Toolkit
Astrodynamics-based Space Traffic and Event Knowledge Graph
Computational mechanics is the discipline concerned with the use of computational methods to study phenomena governed by the principles of mechanics. Before the emergence of computational science (also called scientific computing) as a "third way" besides theoretical and experimental sciences, computational mechanics was widely considered to be a sub-discipline of applied mechanics. It is now considered to be a sub-discipline within computational science.
== Overview ==
Computational mechanics (CM) is interdisciplinary. Its three pillars are mechanics, mathematics, and computer science.
=== Mechanics ===
Computational fluid dynamics, computational thermodynamics, computational electromagnetics, computational solid mechanics are some of the many specializations within CM.
=== Mathematics ===
The areas of mathematics most related to computational mechanics are partial differential equations, linear algebra and numerical analysis. The most popular numerical methods used are the finite element, finite difference, and boundary element methods in order of dominance. In solid mechanics finite element methods are far more prevalent than finite difference methods, whereas in fluid mechanics, thermodynamics, and electromagnetism, finite difference methods are almost equally applicable. The boundary element technique is in general less popular, but has a niche in certain areas including acoustics engineering, for example.
=== Computer Science ===
With regard to computing, computer programming, algorithms, and parallel computing play a major role in CM. The most widely used programming language in the scientific community, including computational mechanics, is Fortran. Recently, C++ has increased in popularity. The scientific computing community has been slow in adopting C++ as the lingua franca. Because of its very natural way of expressing mathematical computations, and its built-in visualization capacities, the proprietary language/environment MATLAB is also widely used, especially for rapid application development and model verification.
== Process ==
Scientists within the field of computational mechanics follow a list of tasks to analyze their target mechanical process:
A mathematical model of the physical phenomenon is made. This usually involves expressing the natural or engineering system in terms of partial differential equations. This step uses physics to formalize a complex system.
The mathematical equations are converted into forms which are suitable for digital computation. This step is called discretization because it involves creating an approximate discrete model from the original continuous model. In particular, it typically translates a partial differential equation (or a system thereof) into a system of algebraic equations. The processes involved in this step are studied in the field of numerical analysis.
Computer programs are made to solve the discretized equations using direct methods (which are single step methods resulting in the solution) or iterative methods (which start with a trial solution and arrive at the actual solution by successive refinement). Depending on the nature of the problem, supercomputers or parallel computers may be used at this stage.
The mathematical model, numerical procedures, and the computer codes are verified using either experimental results or simplified models for which exact analytical solutions are available. Quite frequently, new numerical or computational techniques are verified by comparing their result with those of existing well-established numerical methods. In many cases, benchmark problems are also available. The numerical results also have to be visualized and often physical interpretations will be given to the results.
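The discretization and solution steps in the list above can be illustrated on the simplest possible model problem: the 1-D Poisson equation −u″(x) = f(x) on [0, 1] with u(0) = u(1) = 0. Central finite differences turn it into a tridiagonal algebraic system, which a direct (single-step) method solves; this is a minimal sketch chosen for illustration, not a method prescribed by the text.

```python
import math

def solve_poisson_1d(f, n):
    """Discretize -u''(x) = f(x), u(0)=u(1)=0, on n interior points with
    central differences, then solve the tridiagonal system directly
    (Thomas algorithm)."""
    h = 1.0 / (n + 1)
    x = [(i + 1) * h for i in range(n)]
    # System: (-u[i-1] + 2*u[i] - u[i+1]) / h^2 = f(x[i])
    a = [-1.0] * n                      # sub-diagonal
    b = [2.0] * n                       # main diagonal
    c = [-1.0] * n                      # super-diagonal
    d = [f(xi) * h * h for xi in x]     # right-hand side
    # Forward elimination
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    # Back substitution
    u = [0.0] * n
    u[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return x, u

# Verification step: with f = pi^2*sin(pi*x) the exact solution is sin(pi*x)
x, u = solve_poisson_1d(lambda t: math.pi**2 * math.sin(math.pi * t), n=99)
```

Comparing against the known analytical solution is exactly the verification task described in the last step of the process.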
== Applications ==
Some examples where computational mechanics have been put to practical use are vehicle crash simulation, petroleum reservoir modeling, biomechanics, glass manufacturing, and semiconductor modeling.
Complex systems that would be very difficult or impossible to treat using analytical methods have been successfully simulated using the tools provided by computational mechanics.
== See also ==
Scientific computing
Dynamical systems theory
Movable cellular automaton
== References ==
== External links ==
United States Association for Computational Mechanics
Santa Fe Institute Comp Mech Publications
Soil mechanics is a branch of soil physics and applied mechanics that describes the behavior of soils. It differs from fluid mechanics and solid mechanics in the sense that soils consist of a heterogeneous mixture of fluids (usually air and water) and particles (usually clay, silt, sand, and gravel) but soil may also contain organic solids and other matter. Along with rock mechanics, soil mechanics provides the theoretical basis for analysis in geotechnical engineering, a subdiscipline of civil engineering, and engineering geology, a subdiscipline of geology. Soil mechanics is used to analyze the deformations of and flow of fluids within natural and man-made structures that are supported on or made of soil, or structures that are buried in soils. Example applications are building and bridge foundations, retaining walls, dams, and buried pipeline systems. Principles of soil mechanics are also used in related disciplines such as geophysical engineering, coastal engineering, agricultural engineering, and hydrology.
This article describes the genesis and composition of soil, the distinction between pore water pressure and inter-granular effective stress, capillary action of fluids in the soil pore spaces, soil classification, seepage and permeability, time dependent change of volume due to squeezing water out of tiny pore spaces, also known as consolidation, shear strength and stiffness of soils. The shear strength of soils is primarily derived from friction between the particles and interlocking, which are very sensitive to the effective stress. The article concludes with some examples of applications of the principles of soil mechanics such as slope stability, lateral earth pressure on retaining walls, and bearing capacity of foundations.
== Genesis and composition of soils ==
=== Genesis ===
The primary mechanism of soil creation is the weathering of rock. All rock types (igneous rock, metamorphic rock and sedimentary rock) may be broken down into small particles to create soil. Weathering mechanisms are physical weathering, chemical weathering, and biological weathering. Human activities such as excavation, blasting, and waste disposal may also create soil. Over geologic time, deeply buried soils may be altered by pressure and temperature to become metamorphic or sedimentary rock, and if melted and solidified again, they would complete the geologic cycle by becoming igneous rock.
Physical weathering includes temperature effects, freeze and thaw of water in cracks, rain, wind, impact and other mechanisms. Chemical weathering includes dissolution of matter composing a rock and precipitation in the form of another mineral. Clay minerals, for example, can be formed by weathering of feldspar, which is the most common mineral present in igneous rock.
The most common mineral constituent of silt and sand is quartz, also called silica, which has the chemical name silicon dioxide. The reason that feldspar is most common in rocks but silica is more prevalent in soils is that feldspar is much more soluble than silica.
Silt, sand, and gravel are essentially small fragments of broken rock.
According to the Unified Soil Classification System, silt particle sizes are in the range of 0.002 mm to 0.075 mm and sand particles have sizes in the range of 0.075 mm to 4.75 mm.
Gravel particles are broken pieces of rock in the size range 4.75 mm to 100 mm. Particles larger than gravel are called cobbles and boulders.
=== Transport ===
Soil deposits are affected by the mechanism of transport and deposition to their location. Soils that are not transported are called residual soils—they exist at the same location as the rock from which they were generated. Decomposed granite is a common example of a residual soil. The common mechanisms of transport are the actions of gravity, ice, water, and wind. Wind blown soils include dune sands and loess. Water carries particles of different size depending on the speed of the water, thus soils transported by water are graded according to their size. Silt and clay may settle out in a lake, and gravel and sand collect at the bottom of a river bed. Wind blown soil deposits (aeolian soils) also tend to be sorted according to their grain size. Erosion at the base of glaciers is powerful enough to pick up large rocks and boulders as well as soil; soils dropped by melting ice can be a well graded mixture of widely varying particle sizes. Gravity on its own may also carry particles down from the top of a mountain to make a pile of soil and boulders at the base; soil deposits transported by gravity are called colluvium.
The mechanism of transport also has a major effect on the particle shape. For example, low velocity grinding in a river bed will produce rounded particles. Freshly fractured colluvium particles often have a very angular shape.
=== Soil composition ===
==== Soil mineralogy ====
Silts, sands and gravels are classified by their size, and hence they may consist of a variety of minerals. Owing to the stability of quartz compared to other rock minerals, quartz is the most common constituent of sand and silt. Mica, and feldspar are other common minerals present in sands and silts. The mineral constituents of gravel may be more similar to that of the parent rock.
The common clay minerals are montmorillonite or smectite, illite, and kaolinite or kaolin. These minerals tend to form in sheet or plate-like structures, with length typically ranging between 10⁻⁷ m and 4×10⁻⁶ m and thickness typically ranging between 10⁻⁹ m and 2×10⁻⁶ m, and they have a relatively large specific surface area. The specific surface area (SSA) is defined as the ratio of the surface area of particles to the mass of the particles. Clay minerals typically have specific surface areas in the range of 10 to 1,000 square meters per gram of solid. Due to the large surface area available for chemical, electrostatic, and van der Waals interaction, the mechanical behavior of clay minerals is very sensitive to the amount of pore fluid available and the type and amount of dissolved ions in the pore fluid.
The minerals of soils are predominantly formed by atoms of oxygen, silicon, hydrogen, and aluminum, organized in various crystalline forms. These elements along with calcium, sodium, potassium, magnesium, and carbon constitute over 99 per cent of the solid mass of soils.
==== Grain size distribution ====
Soils consist of a mixture of particles of different size, shape and mineralogy. Because the size of the particles obviously has a significant effect on the soil behavior, the grain size and grain size distribution are used to classify soils. The grain size distribution describes the relative proportions of particles of various sizes. The grain size is often visualized in a cumulative distribution graph which, for example, plots the percentage of particles finer than a given size as a function of size. The median grain size, {\displaystyle D_{50}}, is the size for which 50% of the particle mass consists of finer particles. Soil behavior, especially the hydraulic conductivity, tends to be dominated by the smaller particles; hence the term "effective size", denoted by {\displaystyle D_{10}}, is defined as the size for which 10% of the particle mass consists of finer particles.
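Characteristic sizes such as {\displaystyle D_{10}} and {\displaystyle D_{50}} are read off the cumulative grain-size curve by interpolation. A minimal sketch, assuming hypothetical sieve sizes and percent-passing values, and using log-linear interpolation since grain-size curves are conventionally plotted on a logarithmic size axis:

```python
import math

def d_percent(sizes_mm, percent_finer, p):
    """Return the particle size (mm) for which p% of the mass is finer.

    sizes_mm and percent_finer must be sorted in increasing order.
    Interpolates linearly on a log(size) axis.
    """
    for i in range(1, len(sizes_mm)):
        if percent_finer[i] >= p:
            f = (p - percent_finer[i - 1]) / (percent_finer[i] - percent_finer[i - 1])
            log_d = math.log10(sizes_mm[i - 1]) + f * (
                math.log10(sizes_mm[i]) - math.log10(sizes_mm[i - 1]))
            return 10 ** log_d
    raise ValueError("p is outside the measured range")

# Hypothetical grain-size data for illustration only.
sizes = [0.075, 0.15, 0.30, 0.60, 1.18, 2.36, 4.75]  # sieve openings, mm
finer = [4, 12, 30, 55, 75, 90, 100]                 # percent passing

d10 = d_percent(sizes, finer, 10)  # "effective size"
d50 = d_percent(sizes, finer, 50)  # median grain size
```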
Sands and gravels that possess a wide range of particle sizes with a smooth distribution of particle sizes are called well graded soils. If the soil particles in a sample are predominantly in a relatively narrow range of sizes, the sample is uniformly graded. If a soil sample has distinct gaps in the gradation curve, e.g., a mixture of gravel and fine sand, with no coarse sand, the sample may be gap graded. Uniformly graded and gap graded soils are both considered to be poorly graded. There are many methods for measuring particle-size distribution. The two traditional methods are sieve analysis and hydrometer analysis.
===== Sieve analysis =====
The size distribution of gravel and sand particles are typically measured using sieve analysis. The formal procedure is described in ASTM D6913-04(2009). A stack of sieves with accurately dimensioned holes between a mesh of wires is used to separate the particles into size bins. A known volume of dried soil, with clods broken down to individual particles, is put into the top of a stack of sieves arranged from coarse to fine. The stack of sieves is shaken for a standard period of time so that the particles are sorted into size bins. This method works reasonably well for particles in the sand and gravel size range. Fine particles tend to stick to each other, and hence the sieving process is not an effective method. If there are a lot of fines (silt and clay) present in the soil it may be necessary to run water through the sieves to wash the coarse particles and clods through.
A variety of sieve sizes are available. The boundary between sand and silt is arbitrary. According to the Unified Soil Classification System, a #4 sieve (4 openings per inch) having 4.75 mm opening size separates sand from gravel and a #200 sieve with an 0.075 mm opening separates sand from silt and clay. According to the British standard, 0.063 mm is the boundary between sand and silt, and 2 mm is the boundary between sand and gravel.
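The bookkeeping behind a sieve analysis is simple: the mass retained on each sieve is accumulated from the top of the stack down, and the percent passing each sieve follows from the total sample mass. A sketch with hypothetical sieve sizes and retained masses:

```python
# Convert the mass retained on each sieve of a stack into a
# cumulative percent-passing curve. All masses are hypothetical.

sieves_mm = [4.75, 2.00, 0.425, 0.075]    # openings, coarse to fine
retained_g = [50.0, 120.0, 180.0, 100.0]  # mass caught on each sieve
pan_g = 50.0                              # fines collected in the pan

total = sum(retained_g) + pan_g
percent_passing = []
cumulative = 0.0
for mass in retained_g:
    cumulative += mass
    percent_passing.append(100.0 * (total - cumulative) / total)

# The percent passing the 0.075 mm (#200) sieve indicates how much
# silt and clay the sample contains.
fines_content = percent_passing[-1]
```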
===== Hydrometer analysis =====
The classification of fine-grained soils, i.e., soils that are finer than sand, is determined primarily by their Atterberg limits, not by their grain size. If it is important to determine the grain size distribution of fine-grained soils, the hydrometer test may be performed. In the hydrometer tests, the soil particles are mixed with water and shaken to produce a dilute suspension in a glass cylinder, and then the cylinder is left to sit. A hydrometer is used to measure the density of the suspension as a function of time. Clay particles may take several hours to settle past the depth of measurement of the hydrometer. Sand particles may take less than a second. Stokes' law provides the theoretical basis to calculate the relationship between sedimentation velocity and particle size. ASTM provides the detailed procedures for performing the Hydrometer test.
Clay particles can be sufficiently small that they never settle because they are kept in suspension by Brownian motion, in which case they may be classified as colloids.
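Stokes' law underlies the time scales quoted above: for a small sphere settling in a viscous fluid the terminal velocity grows with the square of the diameter. A sketch with typical (but assumed) values for quartz particles in water; real hydrometer analyses follow the ASTM procedure and apply several corrections not shown here:

```python
# Stokes' law: v = (rho_s - rho_w) * g * d**2 / (18 * mu)
import math

g = 9.81        # gravitational acceleration, m/s^2
rho_s = 2650.0  # particle density, kg/m^3 (typical quartz)
rho_w = 1000.0  # water density, kg/m^3
mu = 1.0e-3     # dynamic viscosity of water, Pa*s (approx. 20 C)

def settling_velocity(d_m):
    """Terminal settling velocity (m/s) of a sphere of diameter d_m (m)."""
    return (rho_s - rho_w) * g * d_m ** 2 / (18.0 * mu)

def stokes_diameter(v_ms):
    """Equivalent spherical diameter (m) for settling velocity v_ms (m/s)."""
    return math.sqrt(18.0 * mu * v_ms / ((rho_s - rho_w) * g))

v_clay = settling_velocity(2e-6)   # a 2-micron clay particle: ~microns/s
v_sand = settling_velocity(75e-6)  # a 75-micron silt/sand boundary particle
```

Because velocity scales with d², the 75-micron particle settles over a thousand times faster than the 2-micron particle, which is why sand reaches the bottom in seconds while clay takes hours.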
==== Mass-volume relations ====
There are a variety of parameters used to describe the relative proportions of air, water and solid in a soil. This section defines these parameters and some of their interrelationships. The basic notation is as follows:
{\displaystyle V_{a}}, {\displaystyle V_{w}}, and {\displaystyle V_{s}} represent the volumes of air, water and solids in a soil mixture;
{\displaystyle W_{a}}, {\displaystyle W_{w}}, and {\displaystyle W_{s}} represent the weights of air, water and solids in a soil mixture;
{\displaystyle M_{a}}, {\displaystyle M_{w}}, and {\displaystyle M_{s}} represent the masses of air, water and solids in a soil mixture;
{\displaystyle \rho _{a}}, {\displaystyle \rho _{w}}, and {\displaystyle \rho _{s}} represent the densities of the constituents (air, water and solids) in a soil mixture.
Note that the weights, W, can be obtained by multiplying the mass, M, by the acceleration due to gravity, g; e.g.,
{\displaystyle W_{s}=M_{s}g}
Specific gravity is the ratio of the density of one material to the density of pure water ({\displaystyle \rho _{w}=1g/cm^{3}}).
Specific gravity of solids, {\displaystyle G_{s}={\frac {\rho _{s}}{\rho _{w}}}}
Note that specific weight, conventionally denoted by the symbol {\displaystyle \gamma }, may be obtained by multiplying the density ({\displaystyle \rho }) of a material by the acceleration due to gravity, {\displaystyle g}.
Density, bulk density, or wet density, {\displaystyle \rho }, are different names for the density of the mixture, i.e., the total mass of air, water and solids divided by the total volume of air, water and solids (the mass of air is assumed to be zero for practical purposes):
{\displaystyle \rho ={\frac {M_{s}+M_{w}}{V_{s}+V_{w}+V_{a}}}={\frac {M_{t}}{V_{t}}}}
Dry density, {\displaystyle \rho _{d}}, is the mass of solids divided by the total volume of air, water and solids:
{\displaystyle \rho _{d}={\frac {M_{s}}{V_{s}+V_{w}+V_{a}}}={\frac {M_{s}}{V_{t}}}}
Buoyant density, {\displaystyle \rho '}, defined as the density of the mixture minus the density of water, is useful if the soil is submerged under water:
{\displaystyle \rho '=\rho \ -\rho _{w}}
where {\displaystyle \rho _{w}} is the density of water.
Water content, {\displaystyle w}, is the ratio of the mass of water to the mass of solids. It is easily measured by weighing a sample of the soil, drying it out in an oven and re-weighing. Standard procedures are described by ASTM.
{\displaystyle w={\frac {M_{w}}{M_{s}}}={\frac {W_{w}}{W_{s}}}}
Void ratio, {\displaystyle e}, is the ratio of the volume of voids to the volume of solids:
{\displaystyle e={\frac {V_{v}}{V_{s}}}={\frac {V_{v}}{V_{t}-V_{v}}}={\frac {n}{1-n}}}
Porosity, {\displaystyle n}, is the ratio of the volume of voids to the total volume, and is related to the void ratio:
{\displaystyle n={\frac {V_{v}}{V_{t}}}={\frac {V_{v}}{V_{s}+V_{v}}}={\frac {e}{1+e}}}
Degree of saturation, {\displaystyle S}, is the ratio of the volume of water to the volume of voids:
{\displaystyle S={\frac {V_{w}}{V_{v}}}}
From the above definitions, some useful relationships can be derived by use of basic algebra:
{\displaystyle \rho ={\frac {(G_{s}+Se)\rho _{w}}{1+e}}}
{\displaystyle \rho ={\frac {(1+w)G_{s}\rho _{w}}{1+e}}}
{\displaystyle w={\frac {Se}{G_{s}}}}
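The phase relationships above can be checked numerically: pick hypothetical phase volumes, compute the definitions directly, and confirm that the derived identities hold. All input values below are assumed for illustration:

```python
# Numerical check of the mass-volume relations. Phase volumes and
# the specific gravity of solids are hypothetical values.

rho_w = 1000.0  # density of water, kg/m^3
Gs = 2.70       # specific gravity of solids (typical for quartz soils)
Vs = 0.60       # volume of solids, m^3
Vw = 0.25       # volume of water, m^3
Va = 0.15       # volume of air, m^3

Ms = Gs * rho_w * Vs  # mass of solids
Mw = rho_w * Vw       # mass of water (mass of air neglected)

Vv = Vw + Va                      # volume of voids
e = Vv / Vs                       # void ratio
n = Vv / (Vs + Vv)                # porosity
S = Vw / Vv                       # degree of saturation
w = Mw / Ms                       # water content
rho = (Ms + Mw) / (Vs + Vw + Va)  # bulk (wet) density
rho_d = Ms / (Vs + Vw + Va)       # dry density

# Derived identity: rho = (Gs + S*e) * rho_w / (1 + e)
rho_check = (Gs + S * e) * rho_w / (1 + e)
```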
== Soil classification ==
Geotechnical engineers classify the soil particle types by performing tests on disturbed (dried, passed through sieves, and remolded) samples of the soil. This provides information about the characteristics of the soil grains themselves. Classification of the types of grains present in a soil does not account for important effects of the structure or fabric of the soil, terms that describe compactness of the particles and patterns in the arrangement of particles in a load carrying framework as well as the pore size and pore fluid distributions. Engineering geologists also classify soils based on their genesis and depositional history.
=== Classification of soil grains ===
In the US and other countries, the Unified Soil Classification System (USCS) is often used for soil classification. Other classification systems include the British Standard BS 5930 and the AASHTO soil classification system.
==== Classification of sands and gravels ====
In the USCS, gravels (given the symbol G) and sands (given the symbol S) are classified according to their grain size distribution. For the USCS, gravels may be given the classification symbol GW (well-graded gravel), GP (poorly graded gravel), GM (gravel with a large amount of silt), or GC (gravel with a large amount of clay). Likewise sands may be classified as being SW, SP, SM or SC. Sands and gravels with a small but non-negligible amount of fines (5–12%) may be given a dual classification such as SW-SC.
==== Atterberg limits ====
Clays and silts, often called 'fine-grained soils', are classified according to their Atterberg limits; the most commonly used Atterberg limits are the liquid limit (denoted by LL or {\displaystyle w_{l}}), the plastic limit (denoted by PL or {\displaystyle w_{p}}), and the shrinkage limit (denoted by SL).
The liquid limit is the water content at which the soil behavior transitions from a plastic solid to a liquid. The plastic limit is the water content at which the soil behavior transitions from that of a plastic solid to a brittle solid. The shrinkage limit corresponds to a water content below which the soil will not shrink as it dries. The consistency of fine-grained soil varies in proportion to the water content in the soil.
As the transitions from one state to another are gradual, the tests have adopted arbitrary definitions to determine the boundaries of the states. The liquid limit is determined by measuring the water content for which a groove closes after 25 blows in a standard test. Alternatively, a fall cone test apparatus may be used to measure the liquid limit. The undrained shear strength of remolded soil at the liquid limit is approximately 2 kPa. The plastic limit is the water content below which it is not possible to roll by hand the soil into 3 mm diameter cylinders. The soil cracks or breaks up as it is rolled down to this diameter. Remolded soil at the plastic limit is quite stiff, having an undrained shear strength of the order of about 200 kPa.
The plasticity index of a particular soil specimen is defined as the difference between the liquid limit and the plastic limit of the specimen; it is an indicator of how much water the soil particles in the specimen can absorb, and it correlates with many engineering properties such as permeability, compressibility, and shear strength. Generally, clays with high plasticity have lower permeability and are also more difficult to compact.
==== Classification of silts and clays ====
According to the Unified Soil Classification System (USCS), silts and clays are classified by plotting the values of their plasticity index and liquid limit on a plasticity chart. The A-Line on the chart separates clays (given the USCS symbol C) from silts (given the symbol M). LL=50% separates high plasticity soils (given the modifier symbol H) from low plasticity soils (given the modifier symbol L). A soil that plots above the A-line and has LL>50% would, for example, be classified as CH. Other possible classifications of silts and clays are ML, CL and MH. If the Atterberg limits plot in the "hatched" region on the graph near the origin, the soils are given the dual classification 'CL-ML'.
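The plasticity-chart logic can be sketched in a few lines. The A-line is conventionally taken as PI = 0.73(LL − 20); this simplified version ignores the hatched CL-ML zone near the origin and organic-soil designations:

```python
# Simplified USCS classification of fine-grained soils from the
# plasticity chart. Ignores the CL-ML dual-classification zone.

def classify_fines(LL, PI):
    """Return a USCS symbol (CH, CL, MH or ML) for liquid limit LL (%)
    and plasticity index PI (%)."""
    a_line = 0.73 * (LL - 20.0)
    soil = "C" if PI > a_line else "M"       # clay plots above the A-line
    plasticity = "H" if LL >= 50.0 else "L"  # high vs low plasticity
    return soil + plasticity

# A soil with LL = 60% and PI = 40% plots above the A-line
# (0.73 * 40 = 29.2 < 40) with LL > 50%, so it classifies as CH.
```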
=== Indices related to soil strength ===
==== Liquidity index ====
The effects of the water content on the strength of saturated remolded soils can be quantified by the use of the liquidity index, LI:
{\displaystyle LI={\frac {w-PL}{LL-PL}}}
When the LI is 1, remolded soil is at the liquid limit and it has an undrained shear strength of about 2 kPa. When the soil is at the plastic limit, the LI is 0 and the undrained shear strength is about 200 kPa.
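The liquidity index is a direct normalization of the water content between the two limits, as a short sketch shows (the limit values are hypothetical and expressed as water-content ratios):

```python
# Liquidity index: LI = (w - PL) / (LL - PL).
# LI = 1 at the liquid limit (remolded strength ~2 kPa),
# LI = 0 at the plastic limit (remolded strength ~200 kPa).

def liquidity_index(w, PL, LL):
    return (w - PL) / (LL - PL)

LL, PL = 0.50, 0.20  # hypothetical limits for a clay

LI_at_LL = liquidity_index(0.50, PL, LL)  # soil behaves as a liquid
LI_at_PL = liquidity_index(0.20, PL, LL)  # stiff plastic solid
LI_mid = liquidity_index(0.35, PL, LL)    # intermediate consistency
```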
==== Relative density ====
The density of sands (cohesionless soils) is often characterized by the relative density, {\displaystyle D_{r}}:
{\displaystyle D_{r}={\frac {e_{max}-e}{e_{max}-e_{min}}}100\%}
where {\displaystyle e_{max}} is the "maximum void ratio" corresponding to a very loose state, {\displaystyle e_{min}} is the "minimum void ratio" corresponding to a very dense state, and {\displaystyle e} is the in situ void ratio. Methods used to calculate relative density are defined in ASTM D4254-00(2006).
Thus if {\displaystyle D_{r}=100\%} the sand or gravel is very dense, and if {\displaystyle D_{r}=0\%} the soil is extremely loose and unstable.
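A short sketch of the relative-density formula, with hypothetical index void ratios for a clean sand:

```python
# Relative density: Dr = (e_max - e) / (e_max - e_min) * 100%.
# e_max and e_min are hypothetical index values for a clean sand.

def relative_density(e, e_max, e_min):
    return (e_max - e) / (e_max - e_min) * 100.0

e_max, e_min = 0.90, 0.45

Dr_dense = relative_density(0.45, e_max, e_min)   # very dense state
Dr_loose = relative_density(0.90, e_max, e_min)   # very loose state
Dr_insitu = relative_density(0.60, e_max, e_min)  # an in situ void ratio
```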
== Seepage: steady state flow of water ==
== Effective stress and capillarity: hydrostatic conditions ==
To understand the mechanics of soils it is necessary to understand how normal stresses and shear stresses are shared by the different phases. Neither gas nor liquid provide significant resistance to shear stress. The shear resistance of soil is provided by friction and interlocking of the particles. The friction depends on the intergranular contact stresses between solid particles. The normal stresses, on the other hand, are shared by the fluid and the particles. Although the pore air is relatively compressible, and hence takes little normal stress in most geotechnical problems, liquid water is relatively incompressible and if the voids are saturated with water, the pore water must be squeezed out in order to pack the particles closer together.
The principle of effective stress, introduced by Karl Terzaghi, states that the effective stress σ' (i.e., the average intergranular stress between solid particles) may be calculated by a simple subtraction of the pore pressure from the total stress:
{\displaystyle \sigma '=\sigma -u\,}
where σ is the total stress and u is the pore pressure. It is not practical to measure σ' directly, so in practice the vertical effective stress is calculated from the pore pressure and vertical total stress. The distinction between the terms pressure and stress is also important. By definition, pressure at a point is equal in all directions but stresses at a point can be different in different directions. In soil mechanics, compressive stresses and pressures are considered to be positive and tensile stresses are considered to be negative, which is different from the solid mechanics sign convention for stress.
=== Total stress ===
For level ground conditions, the total vertical stress at a point, {\displaystyle \sigma _{v}}, on average, is the weight of everything above that point per unit area. The vertical stress beneath a uniform surface layer with density {\displaystyle \rho } and thickness {\displaystyle H} is, for example:
{\displaystyle \sigma _{v}=\rho gH=\gamma H}
where {\displaystyle g} is the acceleration due to gravity, and {\displaystyle \gamma } is the unit weight of the overlying layer. If there are multiple layers of soil or water above the point of interest, the vertical stress may be calculated by summing the product of the unit weight and thickness of all of the overlying layers. Total stress increases with increasing depth in proportion to the density of the overlying soil.
It is not possible to calculate the horizontal total stress in this way. Lateral earth pressures are addressed elsewhere.
=== Pore water pressure ===
==== Hydrostatic conditions ====
If the soil pores are filled with water that is not flowing but is static, the pore water pressures will be hydrostatic. The water table is located at the depth where the water pressure is equal to the atmospheric pressure. For hydrostatic conditions, the water pressure increases linearly with depth below the water table:
{\displaystyle u=\rho _{w}gz_{w}}
where {\displaystyle \rho _{w}} is the density of water, and {\displaystyle z_{w}} is the depth below the water table.
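Combining the layered total-stress sum with the hydrostatic pore pressure gives the vertical effective stress profile. A sketch with hypothetical layer thicknesses and densities:

```python
# Vertical total stress, hydrostatic pore pressure and effective
# stress at depth. Layer properties are hypothetical.

g = 9.81        # m/s^2
rho_w = 1000.0  # density of water, kg/m^3

# (thickness in m, bulk density in kg/m^3), listed top to bottom
layers = [(2.0, 1800.0),   # moist sand above the water table
          (3.0, 2000.0)]   # saturated clay below the water table
water_table_depth = 2.0    # m below ground surface

depth = sum(t for t, _ in layers)                # point of interest, 5 m
sigma_v = sum(t * rho * g for t, rho in layers)  # total vertical stress, Pa
u = rho_w * g * (depth - water_table_depth)      # hydrostatic pore pressure
sigma_v_eff = sigma_v - u                        # Terzaghi effective stress
```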
==== Capillary action ====
Due to surface tension, water will rise up in a small capillary tube above a free surface of water. Likewise, water will rise up above the water table into the small pore spaces around the soil particles. In fact the soil may be completely saturated for some distance above the water table. Above the height of capillary saturation, the soil may be wet but the water content will decrease with elevation. If the water in the capillary zone is not moving, the water pressure obeys the equation of hydrostatic equilibrium,
{\displaystyle u=\rho _{w}gz_{w}}, but note that {\displaystyle z_{w}} is negative above the water table. Hence, hydrostatic water pressures are negative above the water table. The thickness of the zone of capillary saturation depends on the pore size, but typically the heights vary between a centimeter or so for coarse sand and tens of meters for a silt or clay. The pore space of soil has been modeled as a uniform fractal, e.g., a set of uniformly distributed D-dimensional fractals of average linear size L; for clay soil it has been found that L = 0.15 mm and D = 2.7.
The surface tension of water explains why the water does not drain out of a wet sand castle or a moist ball of clay. Negative water pressures make the water stick to the particles and pull the particles toward each other; friction at the particle contacts makes a sand castle stable. But as soon as a wet sand castle is submerged below a free water surface, the negative pressures are lost and the castle collapses. Considering the effective stress equation, {\displaystyle \sigma '=\sigma -u,} if the water pressure is negative, the effective stress may be positive, even on a free surface (a surface where the total normal stress is zero). The negative pore pressure pulls the particles together and causes compressive particle-to-particle contact forces.
Negative pore pressures in clayey soil can be much more powerful than those in sand. Negative pore pressures explain why clay soils shrink when they dry and swell as they are wetted. The swelling and shrinkage can cause major distress, especially to light structures and roads.
Later sections of this article address the pore water pressures for seepage and consolidation problems.
== Consolidation: transient flow of water ==
Consolidation is a process by which soils decrease in volume. It occurs when stress is applied to a soil that causes the soil particles to pack together more tightly, therefore reducing volume. When this occurs in a soil that is saturated with water, water will be squeezed out of the soil. The time required to squeeze the water out of a thick deposit of clayey soil layer might be years. For a layer of sand, the water may be squeezed out in a matter of seconds. A building foundation or construction of a new embankment will cause the soil below to consolidate and this will cause settlement which in turn may cause distress to the building or embankment. Karl Terzaghi developed the theory of one-dimensional consolidation which enables prediction of the amount of settlement and the time required for the settlement to occur. Afterwards, Maurice Biot fully developed the three-dimensional soil consolidation theory, extending the one-dimensional model previously developed by Terzaghi to more general hypotheses and introducing the set of basic equations of Poroelasticity. Soils are tested with an oedometer test to determine their compression index and coefficient of consolidation.
When stress is removed from a consolidated soil, the soil will rebound, drawing water back into the pores and regaining some of the volume it had lost in the consolidation process. If the stress is reapplied, the soil will re-consolidate along a recompression curve, defined by the recompression index. Soil that has been consolidated to a large pressure and has been subsequently unloaded is considered to be overconsolidated. The maximum past vertical effective stress is termed the preconsolidation stress. A soil which is currently experiencing the maximum past vertical effective stress is said to be normally consolidated. The overconsolidation ratio (OCR) is the ratio of the maximum past vertical effective stress to the current vertical effective stress. The OCR is significant for two reasons: firstly, because the compressibility of normally consolidated soil is significantly larger than that of overconsolidated soil, and secondly, the shear behavior and dilatancy of clayey soil are related to the OCR through critical state soil mechanics; highly overconsolidated clayey soils are dilatant, while normally consolidated soils tend to be contractive.
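The overconsolidation ratio is a simple quotient of stresses, as a minimal sketch shows (the stress values are hypothetical):

```python
# Overconsolidation ratio: OCR = (maximum past vertical effective
# stress) / (current vertical effective stress). Values hypothetical.

def ocr(preconsolidation_stress_kpa, current_stress_kpa):
    return preconsolidation_stress_kpa / current_stress_kpa

ocr_nc = ocr(100.0, 100.0)  # OCR = 1: normally consolidated
ocr_oc = ocr(400.0, 100.0)  # OCR = 4: heavily overconsolidated,
                            # expected to behave dilatantly
```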
== Shear behavior: stiffness and strength ==
The shear strength and stiffness of soil determines whether or not soil will be stable or how much it will deform. Knowledge of the strength is necessary to determine if a slope will be stable, if a building or bridge might settle too far into the ground, and the limiting pressures on a retaining wall. It is important to distinguish between failure of a soil element and the failure of a geotechnical structure (e.g., a building foundation, slope or retaining wall); some soil elements may reach their peak strength prior to failure of the structure. Different criteria can be used to define the "shear strength" and the "yield point" for a soil element from a stress–strain curve. One may define the peak shear strength as the peak of a stress–strain curve, or the shear strength at critical state as the value after large strains when the shear resistance levels off. If the stress–strain curve does not stabilize before the end of shear strength test, the "strength" is sometimes considered to be the shear resistance at 15–20% strain. The shear strength of soil depends on many factors including the effective stress and the void ratio.
The shear stiffness is important, for example, for evaluation of the magnitude of deformations of foundations and slopes prior to failure and because it is related to the shear wave velocity. The slope of the initial, nearly linear, portion of a plot of shear stress as a function of shear strain is called the shear modulus.
=== Friction, interlocking and dilation ===
Soil is an assemblage of particles that have little to no cementation while rock (such as sandstone) may consist of an assembly of particles that are strongly cemented together by chemical bonds. The shear strength of soil is primarily due to interparticle friction and therefore, the shear resistance on a plane is approximately proportional to the effective normal stress on that plane. The angle of internal friction is thus closely related to the maximum stable slope angle, often called the angle of repose.
But in addition to friction, soil derives significant shear resistance from interlocking of grains. If the grains are densely packed, the grains tend to spread apart from each other as they are subject to shear strain. The expansion of the particle matrix due to shearing was called dilatancy by Osborne Reynolds. If one considers the energy required to shear an assembly of particles there is energy input by the shear force, T, moving a distance, x and there is also energy input by the normal force, N, as the sample expands a distance, y. Due to the extra energy required for the particles to dilate against the confining pressures, dilatant soils have a greater peak strength than contractive soils. Furthermore, as dilative soil grains dilate, they become looser (their void ratio increases), and their rate of dilation decreases until they reach a critical void ratio. Contractive soils become denser as they shear, and their rate of contraction decreases until they reach a critical void ratio.
The tendency for a soil to dilate or contract depends primarily on the confining pressure and the void ratio of the soil. The rate of dilation is high if the confining pressure is small and the void ratio is small. The rate of contraction is high if the confining pressure is large and the void ratio is large. As a first approximation, the regions of contraction and dilation are separated by the critical state line.
=== Failure criteria ===
After a soil reaches the critical state, it is no longer contracting or dilating and the shear stress on the failure plane, {\displaystyle \tau _{crit}}, is determined by the effective normal stress on the failure plane, {\displaystyle \sigma _{n}'}, and the critical state friction angle, {\displaystyle \phi _{crit}'}:
{\displaystyle \tau _{crit}=\sigma _{n}'\tan \phi _{crit}'\ }
The peak strength of the soil may be greater, however, due to the interlocking (dilatancy) contribution.
This may be stated:
{\displaystyle \tau _{peak}=\sigma _{n}'\tan \phi _{peak}'\ }
where {\displaystyle \phi _{peak}'>\phi _{crit}'}. However, use of a friction angle greater than the critical state value for design requires care. The peak strength will not be mobilized everywhere at the same time in a practical problem such as a foundation, slope or retaining wall. The critical state friction angle is not nearly as variable as the peak friction angle and hence it can be relied upon with confidence.
Not recognizing the significance of dilatancy, Coulomb proposed that the shear strength of soil may be expressed as a combination of adhesion and friction components:
{\displaystyle \tau _{f}=c'+\sigma _{f}'\tan \phi '\,}
It is now known that the {\displaystyle c'} and {\displaystyle \phi '} parameters in the last equation are not fundamental soil properties. In particular, {\displaystyle c'} and {\displaystyle \phi '} are different depending on the magnitude of effective stress. According to Schofield (2006), the longstanding use of {\displaystyle c'} in practice has led many engineers to wrongly believe that {\displaystyle c'} is a fundamental parameter. The assumption that {\displaystyle c'} and {\displaystyle \phi '} are constant can lead to overestimation of peak strengths.
=== Structure, fabric, and chemistry ===
In addition to the friction and interlocking (dilatancy) components of strength, the structure and fabric also play a significant role in the soil behavior. The structure and fabric include factors such as the spacing and arrangement of the solid particles or the amount and spatial distribution of pore water; in some cases cementitious material accumulates at particle-particle contacts. Mechanical behavior of soil is affected by the density of the particles and their structure or arrangement as well as the amount and spatial distribution of fluids present (e.g., water and air voids). Other factors include the electrical charge of the particles, the chemistry of the pore water, and chemical bonds (i.e., cementation: particles connected through a solid substance such as recrystallized calcium carbonate).
=== Drained and undrained shear ===
The presence of nearly incompressible fluids such as water in the pore spaces affects the ability for the pores to dilate or contract.
If the pores are saturated with water, water must be sucked into the dilating pore spaces to fill the expanding pores (this phenomenon is visible at the beach when apparently dry spots form around feet that press into the wet sand).
Similarly, for contractive soil, water must be squeezed out of the pore spaces to allow contraction to take place.
Dilation of the voids causes negative water pressures that draw fluid into the pores, and contraction of the voids causes positive pore pressures to push the water out of the pores. If the rate of shearing is very large compared to the rate at which water can be sucked into or squeezed out of the dilating or contracting pore spaces, then the shearing is called undrained shear; if the shearing is slow enough that the water pressures are negligible, the shearing is called drained shear. During undrained shear, the water pressure u changes depending on volume change tendencies. From the effective stress equation, the change in u directly affects the effective stress by the equation:
{\displaystyle \sigma '=\sigma -u\,}
and the strength is very sensitive to the effective stress. It follows then that the undrained shear strength of a soil may be smaller or larger than the drained shear strength depending upon whether the soil is contractive or dilative.
=== Shear tests ===
Strength parameters can be measured in the laboratory using direct shear test, triaxial shear test, simple shear test, fall cone test and (hand) shear vane test; there are numerous other devices and variations on these devices used in practice today. Tests conducted to characterize the strength and stiffness of the soils in the ground include the Cone penetration test and the Standard penetration test.
=== Other factors ===
The stress–strain relationship of soils, and therefore the shearing strength, is affected by:
soil composition (basic soil material): mineralogy, grain size and grain size distribution, shape of particles, pore fluid type and content, ions on grain and in pore fluid.
state (initial): Defined by the initial void ratio, effective normal stress and shear stress (stress history). State can be described by terms such as: loose, dense, overconsolidated, normally consolidated, stiff, soft, contractive, dilative, etc.
structure: Refers to the arrangement of particles within the soil mass; the manner in which the particles are packed or distributed. Features such as layers, joints, fissures, slickensides, voids, pockets, cementation, etc., are part of the structure. Structure of soils is described by terms such as: undisturbed, disturbed, remolded, compacted, cemented; flocculent, honey-combed, single-grained; flocculated, deflocculated; stratified, layered, laminated; isotropic and anisotropic.
Loading conditions: Effective stress path - drained, undrained, and type of loading - magnitude, rate (static, dynamic), and time history (monotonic, cyclic).
== Applications ==
=== Lateral earth pressure ===
Lateral earth stress theory is used to estimate the amount of stress soil can exert perpendicular to gravity. This is the stress exerted on retaining walls. A lateral earth stress coefficient, K, is defined as the ratio of lateral (horizontal) effective stress to vertical effective stress for cohesionless soils (K=σ'h/σ'v). There are three coefficients: at-rest, active, and passive. At-rest stress is the lateral stress in the ground before any disturbance takes place. The active stress state is reached when a wall moves away from the soil under the influence of lateral stress, and results from shear failure due to reduction of lateral stress. The passive stress state is reached when a wall is pushed into the soil far enough to cause shear failure within the mass due to increase of lateral stress. There are many theories for estimating lateral earth stress; some are empirically based, and some are analytically derived.
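Among the analytically derived theories, one classical result (Rankine's theory, for a smooth vertical wall and level cohesionless backfill) expresses the active and passive coefficients in terms of the friction angle alone; this sketch assumes those idealized conditions:

```python
# Rankine lateral earth pressure coefficients for a cohesionless soil:
#     Ka = (1 - sin(phi)) / (1 + sin(phi)),   Kp = 1 / Ka
# Valid for a smooth vertical wall with level backfill.
import math

def rankine_coefficients(phi_deg):
    """Return (Ka, Kp) for friction angle phi_deg (degrees)."""
    s = math.sin(math.radians(phi_deg))
    Ka = (1.0 - s) / (1.0 + s)  # active coefficient (wall moves away)
    return Ka, 1.0 / Ka         # passive coefficient (wall pushed in)

Ka, Kp = rankine_coefficients(30.0)  # Ka = 1/3 and Kp = 3 for phi = 30 deg
```

The at-rest coefficient lies between these two bounds, since at-rest conditions involve no shear failure in either direction.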
=== Bearing capacity ===
The bearing capacity of soil is the average contact stress between a foundation and the soil which will cause shear failure in the soil. Allowable bearing stress is the bearing capacity divided by a factor of safety. Sometimes, on soft soil sites, large settlements may occur under loaded foundations without actual shear failure occurring; in such cases, the allowable bearing stress is determined with regard to the maximum allowable settlement. It is important during construction and design stage of a project to evaluate the subgrade strength. The California Bearing Ratio (CBR) test is commonly used to determine the suitability of a soil as a subgrade for design and construction. The field Plate Load Test is commonly used to predict the deformations and failure characteristics of the soil/subgrade and modulus of subgrade reaction (ks). The Modulus of subgrade reaction (ks) is used in foundation design, soil-structure interaction studies and design of highway pavements.
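The two ratios defined above — allowable bearing stress as capacity divided by a factor of safety, and the modulus of subgrade reaction ks as plate pressure over measured settlement — can be sketched directly. This is illustrative only; the function names and input values are hypothetical, and a typical factor of safety of about 3 is assumed.

```python
def allowable_bearing_stress(q_ultimate_kpa, factor_of_safety=3.0):
    """Allowable bearing stress = ultimate bearing capacity / factor of safety."""
    return q_ultimate_kpa / factor_of_safety

def subgrade_modulus(pressure_kpa, settlement_m):
    """Modulus of subgrade reaction ks = applied plate pressure / settlement
    (units kPa/m, i.e. kN/m^3), as obtained from a plate load test."""
    return pressure_kpa / settlement_m

q_allow = allowable_bearing_stress(450.0)   # 450 kPa ultimate -> 150 kPa allowable
ks = subgrade_modulus(200.0, 0.005)         # 200 kPa at 5 mm settlement
```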
=== Slope stability ===
The field of slope stability encompasses the analysis of static and dynamic stability of slopes of earth and rock-fill dams, slopes of other types of embankments, excavated slopes, and natural slopes in soil and soft rock.
Earthen slopes can develop a cut-spherical weakness zone. The probability of this happening can be calculated in advance using a simple 2-D circular analysis package. A primary difficulty with analysis is locating the most probable slip plane for any given situation; many landslides have been analyzed only after the fact. Slope geometry and rock strength are two factors for consideration.
== Recent developments ==
A recent finding in soil mechanics is that soil deformation can be described as the behavior of a dynamical system. This approach to soil mechanics is referred to as Dynamical Systems based Soil Mechanics (DSSM). DSSM holds simply that soil deformation is a Poisson process in which particles move to their final position at random shear strains.
The basis of DSSM is that soils (including sands) can be sheared until they reach a steady-state condition at which, under conditions of constant strain-rate, there is no change in shear stress, effective confining stress, and void ratio. The steady state was formally defined by Steve J. Poulos, an associate professor at the Soil Mechanics Department of Harvard University, who built on a hypothesis that Arthur Casagrande was formulating towards the end of his career. The steady-state condition is not the same as the "critical state" condition. It differs from the critical state in that it specifies a statistically constant structure at the steady state. The steady-state values are also very slightly dependent on the strain-rate.
Many systems in nature reach steady states, and dynamical systems theory describes such systems. Soil shear can also be described as a dynamical system. The physical basis of the soil shear dynamical system is a Poisson process in which particles move to the steady-state at random shear strains. Joseph generalized this—particles move to their final position (not just steady-state) at random shear-strains. Because of its origins in the steady state concept, DSSM is sometimes informally called "Harvard soil mechanics."
DSSM provides very close fits to stress–strain curves, including for sands. Because it tracks conditions on the failure plane, it also provides close fits for the post-failure region of sensitive clays and silts, something that other theories are not able to do. Additionally, DSSM explains key relationships in soil mechanics that to date have simply been taken for granted: for example, why normalized undrained peak shear strengths vary with the log of the overconsolidation ratio, why stress–strain curves normalize with the initial effective confining stress, why in one-dimensional consolidation the void ratio must vary with the log of the effective vertical stress, why the end-of-primary curve is unique for static load increments, and why the ratio of the creep value Cα to the compression index Cc must be approximately constant for a wide range of soils.
== See also ==
Critical state soil mechanics
Earthquake engineering
Engineering geology
Geotechnical centrifuge modeling
Geotechnical engineering
Geotechnical engineering (Offshore)
Geotechnics
Hydrogeology, aquifer characteristics closely related to soil characteristics
International Society for Soil Mechanics and Geotechnical Engineering
Rock mechanics
Slope stability analysis
== References ==
== External links ==
Media related to Soil mechanics at Wikimedia Commons | Wikipedia/Soil_mechanics |
The Discourses and Mathematical Demonstrations Relating to Two New Sciences (Italian: Discorsi e dimostrazioni matematiche intorno a due nuove scienze pronounced [diˈskorsi e ddimostratˈtsjoːni mateˈmaːtike inˈtorno a dˈduːe ˈnwɔːve ʃˈʃɛntse]) published in 1638 was Galileo Galilei's final book and a scientific testament covering much of his work in physics over the preceding thirty years. It was written partly in Italian and partly in Latin.
After his Dialogue Concerning the Two Chief World Systems, the Roman Inquisition had banned the publication of any of Galileo's works, including any he might write in the future. After the failure of his initial attempts to publish Two New Sciences in France, Germany, and Poland, it was published by Lodewijk Elzevir who was working in Leiden, South Holland, where the writ of the Inquisition was of less consequence (see House of Elzevir). Fra Fulgenzio Micanzio, the official theologian of the Republic of Venice, had initially offered to help Galileo publish the new work there, but he pointed out that publishing the Two New Sciences in Venice might cause Galileo unnecessary trouble; thus, the book was eventually published in Holland. Galileo did not seem to suffer any harm from the Inquisition for publishing this book since in January 1639, the book reached Rome's bookstores, and all available copies (about fifty) were quickly sold.
Discourses was written in a style similar to Dialogues, in which three men (Simplicio, Sagredo, and Salviati) discuss and debate the various questions Galileo is seeking to answer. There is a notable change in the men, however; Simplicio, in particular, is no longer quite as simple-minded, stubborn and Aristotelian as his name implies. His arguments are representative of Galileo's own early beliefs, as Sagredo represents his middle period, and Salviati proposes Galileo's newest models.
== Introduction ==
The book is divided into four days, each addressing different areas of physics. Galileo dedicates Two New Sciences to Lord Count of Noailles.
In the First Day, Galileo addressed topics that were discussed in Aristotle's Physics and the Aristotelian school's Mechanics. It also provides an introduction to the discussion of both of the new sciences. The likeness between the topics discussed, the specific questions hypothesized, and the style and sources throughout give the First Day its backbone. The First Day introduces the speakers in the dialogue: Salviati, Sagredo, and Simplicio, the same as in the Dialogue. These three speakers all represent Galileo at different stages of his life: Simplicio the youngest, and Salviati Galileo's closest counterpart. The Second Day addresses the question of the strength of materials.
The Third and Fourth days address the science of motion. The Third day discusses uniform and naturally accelerated motion, the issue of terminal velocity having been addressed in the First day. The Fourth day discusses projectile motion.
In Two New Sciences, uniform motion is defined as motion that, over any equal periods of time, covers equal distances. With the use of the quantifier "any", uniformity is introduced and expressed more explicitly than in previous definitions.
Galileo had started an additional day on the force of percussion, but was not able to complete it to his own satisfaction. This section was referenced frequently in the first four days of discussion. It finally appeared only in the 1718 edition of Galileo's works, and it is often quoted as the "Sixth Day", following the numbering of the 1898 edition. During this additional day, Simplicio was replaced by Aproino, a former scholar and assistant of Galileo in Padua.
== Summary ==
Page numbers at the start of each paragraph are from the 1898 version, presently adopted as standard, and are found in the Crew and Drake translations.
=== Day one: Resistance of bodies to separation ===
[50] Preliminary discussions.
Sagredo (taken to be the younger Galileo) cannot understand why with machines one cannot argue from the small to the large: "I do not see that the properties of circles, triangles and...solid figures should change with their size". Salviati (speaking for Galileo) says the common opinion is wrong. Scale matters: a horse falling from a height of 3 or 4 cubits will break its bones whereas a cat falling from twice the height won't, nor will a grasshopper falling from a tower.
[56] The first example is a hemp rope which is constructed from small fibres which bind together in the same way as a rope round a windlass to produce something much stronger. Then the vacuum that prevents two highly polished plates from separating even though they slide easily gives rise to an experiment to test whether water can be expanded or whether a vacuum is caused. In fact, Sagredo had observed that a suction pump could not lift more than 18 cubits of water and Salviati observes that the weight of this is the amount of resistance to a void. The discussion turns to the strength of a copper wire and whether there are minute void spaces inside the metal or whether there is some other explanation for its strength.
[68] This leads into a discussion of infinites and the continuum and thence to the observation that the number of squares equals the number of roots. He comes eventually to the view that "if any number can be said to be infinite, it must be unity" and demonstrates a construction in which an infinite circle is approached and another to divide a line.
[85] The difference between a fine dust and a liquid leads to a discussion of light and how the concentrated power of the sun can melt metals. He deduces that light has motion and describes an (unsuccessful) attempt to measure its speed.
[106] Aristotle believed that bodies fell at a speed proportional to weight but Salviati doubts that Aristotle ever tested this. He also did not believe that motion in a void was possible, but since air is much less dense than water Salviati asserts that in a medium devoid of resistance (a vacuum) all bodies—a lock of wool or a bit of lead—would fall at the same speed. Large and small bodies fall at the same speed through air or water providing they are of the same density. Since ebony weighs a thousand times as much as air (which he had measured), it will fall only a very little more slowly than lead which weighs ten times as much. But shape also matters—even a piece of gold leaf (the densest of all substances [asserts Salviati]) floats through the air and a bladder filled with air falls much more slowly than lead.
[128] Measuring the speed of a fall is difficult because of the small time intervals involved and his first way round this used pendulums of the same length but with lead or cork weights. The period of oscillation was the same, even when the cork was swung more widely to compensate for the fact that it soon stopped.
[139] This leads to a discussion of the vibration of strings and he suggests that not only the length of the string is important for pitch but also the tension and the weight of the string.
=== Day two: Cause of cohesion ===
[151] Salviati proves that a balance can be used not only with equal arms but with unequal arms with weights inversely proportional to the distances from the fulcrum. Following this he shows that the moment of a weight suspended by a beam supported at one end is proportional to the square of the length. The resistance to fracture of beams of various sizes and thicknesses is demonstrated, supported at one or both ends.
[169] He shows that animal bones have to be proportionately larger for larger animals and the length of a cylinder that will break under its own weight. He proves that the best place to break a stick placed upon the knee is the middle and shows how far along a beam that a larger weight can be placed without breaking it.
[178] He proves that the optimum shape for a beam supported at one end and bearing a load at the other is parabolic. He also shows that hollow cylinders are stronger than solid ones of the same weight.
=== Day three: Naturally accelerated motion ===
[191] He first defines uniform (steady) motion and shows the relationship between speed, time and distance. He then defines uniformly accelerated motion where the speed increases by the same amount in increments of time. Falling bodies start very slowly and he sets out to show that their velocity increases in simple proportionality to time, not to distance which he shows is impossible.
[208] He shows that the distance travelled in naturally accelerated motion is proportional to the square of the time. He describes an experiment in which a steel ball was rolled down a groove in a piece of wooden moulding 12 cubits long (about 5.5m) with one end raised by one or two cubits. This was repeated, measuring times by accurately weighing the amount of water that came out of a thin pipe in a jet from the bottom of a large jug of water. By this means he was able to verify the uniformly accelerated motion. He then shows that whatever the inclination of the plane, the square of the time taken to fall a given vertical height is proportional to the inclined distance.
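The time-squared relationship described above can be checked with a short sketch. This is a modern illustration, not Galileo's procedure: it assumes a ball sliding from rest with effective acceleration g·sin(θ) on the incline (the angle and g are illustrative values), and confirms that distances covered from rest grow as the squares of the elapsed times, independent of the inclination.

```python
import math

def distance(a, t):
    """Distance travelled from rest under uniform acceleration a after time t."""
    return 0.5 * a * t ** 2

# On an inclined plane the effective acceleration is g*sin(theta);
# the motion is slowed, but the time-squared law is unchanged.
g = 9.81
theta = math.radians(5)          # a gentle incline, in the spirit of the ramp
a = g * math.sin(theta)

times = [1, 2, 3, 4]
ratios = [distance(a, t) / distance(a, 1) for t in times]
# distances grow as 1 : 4 : 9 : 16 — the squares of the times
```

Because the acceleration cancels out of the ratios, the same 1 : 4 : 9 : 16 pattern appears whatever the inclination, which is exactly what allowed Galileo to study free fall in slow motion.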
[221] He next considers descent along the chords of a circle, showing that the time is the same as that falling from the vertex, and various other combinations of planes. He gives an erroneous solution to the brachistochrone problem, claiming to prove that the arc of the circle is the fastest descent. 16 problems with solutions are given.
=== Day four: The motion of projectiles ===
[268] The motion of projectiles consists of a combination of uniform horizontal motion and a naturally accelerated vertical motion which produces a parabolic curve. Two motions at right angles can be calculated using the sum of the squares. He shows in detail how to construct the parabolas in various situations and gives tables for altitude and range depending on the projected angle.
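The composition of the two motions can be sketched as follows. This is a modern, drag-free illustration (function name and launch values are hypothetical): the horizontal and vertical components are treated independently, the combined speed is obtained by the "sum of the squares" rule, and a small table of range and altitude versus launch angle echoes the tables mentioned above.

```python
import math

def trajectory(v0, angle_deg, g=9.81):
    """Ideal (drag-free) projectile launched from level ground:
    returns range, maximum altitude, and launch speed."""
    a = math.radians(angle_deg)
    vx, vy = v0 * math.cos(a), v0 * math.sin(a)  # independent components
    speed = math.hypot(vx, vy)                   # combined by sum of squares
    altitude = vy ** 2 / (2 * g)                 # peak of the parabola
    rng = 2 * vx * vy / g                        # horizontal range
    return rng, altitude, speed

# Range and altitude versus projected angle; 45 degrees maximizes range,
# and complementary angles (30 and 60) give equal ranges.
for ang in (15, 30, 45, 60, 75):
    r, h, _ = trajectory(20.0, ang)
    print(f"{ang:2d} deg: range {r:6.2f} m, altitude {h:5.2f} m")
```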
[274] Air resistance shows itself in two ways: by affecting less dense bodies more and by offering greater resistance to faster bodies. A lead ball will fall slightly faster than an oak ball, but the difference with a stone ball is negligible. However the speed does not go on increasing indefinitely but reaches a maximum. Though at small speeds the effect of air resistance is small, it is greater when considering, say, a ball fired from a cannon.
[292] The effect of a projectile hitting a target is reduced if the target is free to move. The velocity of a moving body can overcome that of a larger body if its speed is proportionately greater than the resistance.
[310] A cord or chain stretched out is never level but also approximates to a parabola. (But see also catenary.)
=== Additional day: The force of percussion ===
[323] What is the weight of water falling from a bucket hanging on a balance arm onto another bucket suspended to the same arm?
[325] Piling of wooden poles for foundations; hammers and the force of percussion.
[336] Speed of fall along inclined planes; again on the principle of inertia.
== Methodology ==
Many contemporary scientists, such as Gassendi, disputed Galileo's methodology for conceptualizing his law of falling bodies. Two of the main arguments are that his epistemology followed the example of Platonist thought, or that it was hypothetico-deductivist. It has since been argued to be ex suppositione, that is, knowing the how and why of effects from past events in order to determine the requirements for the production of similar effects in the future. Galilean methodology mirrored Aristotelian and Archimedean epistemology. Following a letter from Cardinal Bellarmine in 1615, Galileo distinguished his and Copernicus's arguments as natural suppositions, as opposed to the "fictive" kind that are "introduced only for the sake of astronomical computations," such as Ptolemy's hypotheses of eccentrics and equants.
Galileo's earlier writings, known as the Juvenilia, or youthful writings, are considered his first attempts at creating lecture notes for his course "hypothesis of the celestial motions" while teaching at the University of Padua. These notes mirrored those of his contemporaries at the Collegio and contained an "Aristotelian context with decided Thomistic (St. Thomas Aquinas) overtones." These earlier papers are believed to have encouraged him to apply demonstrative proof in order to give validity to his discoveries on motion.
Discovery of folio 116v gives evidence of experiments that had previously not been reported and therefore demonstrated Galileo's actual calculations for the Law of Falling Bodies.
His methods of experimentation have been supported by the recordings and recreations of scientists such as James MacLachlan, Stillman Drake, R. H. Taylor and others, which indicate that he did not merely imagine his ideas, as the historian Alexandre Koyré argued, but sought to prove them mathematically.
Galileo believed that knowledge could be acquired through reason, and reinforced through observation and experimentation. Thus, it can be argued that Galileo was a rationalist, and also that he was an empiricist.
== The two new sciences ==
The two sciences mentioned in the title are the strength of materials and the motion of objects (the forebears of modern material engineering and kinematics). In the title of the book "mechanics" and "motion" are separate, since at Galileo's time "mechanics" meant only statics and strength of materials.
=== The science of materials ===
The discussion begins with a demonstration of the reasons that a large structure, proportioned in exactly the same way as a smaller one, must necessarily be weaker, a result known as the square–cube law. Later in the discussion this principle is applied to the thickness required of the bones of a large animal, possibly the first quantitative result in biology, anticipating J. B. S. Haldane's work On Being the Right Size, and other essays, edited by John Maynard Smith.
=== The motion of objects ===
Galileo expresses clearly for the first time the constant acceleration of a falling body which he was able to measure accurately by slowing it down using an inclined plane.
In Two New Sciences, Galileo (Salviati speaks for him) used a wood molding, "12 cubits long, half a cubit wide and three finger-breadths thick" as a ramp with a straight, smooth, polished groove to study rolling balls ("a hard, smooth and very round bronze ball"). He lined the groove with "parchment, also smooth and polished as possible". He inclined the ramp at various angles, effectively slowing down the acceleration enough so that he could measure the elapsed time. He would let the ball roll a known distance down the ramp, and use a water clock to measure the time taken to move the known distance. This clock was
a large vessel of water placed in an elevated position; to the bottom of this vessel was soldered a pipe of small diameter giving a thin jet of water, which we collected in a small glass during the time of each descent, whether for the whole length of the channel or for a part of its length; the water thus collected was weighed, after each descent, on a very accurate balance; the differences and ratios of these weights gave us the differences and ratios of the times, and this with such accuracy that although the operation was repeated many, many times, there was no appreciable discrepancy in the results.
==== The law of falling bodies ====
While Aristotle had observed that heavier objects fall more quickly than lighter ones, in Two New Sciences Galileo postulated that this was due not to inherently stronger forces acting on the heavier objects, but to the countervailing forces of air resistance and friction. To compensate, he conducted experiments using a shallowly inclined ramp, smoothed so as to eliminate as much friction as possible, on which he rolled down balls of different weights. In this manner, he was able to provide empirical evidence that matter accelerates vertically downward at a constant rate, regardless of mass, due to the effects of gravity.
The unreported experiment found in folio 116V tested the constant rate of acceleration in falling bodies due to gravity. This experiment consisted of dropping a ball from specified heights onto a deflector in order to transfer its motion from vertical to horizontal. The data from the inclined plane experiments were used to calculate the expected horizontal motion. However, discrepancies were found in the results of the experiment: the observed horizontal distances disagreed with the calculated distances expected for a constant rate of acceleration. Galileo attributed the discrepancies to air resistance in the unreported experiment, and friction in the inclined plane experiment. These discrepancies forced Galileo to assert that the postulate held only under "ideal conditions", i.e., in the absence of friction and/or air resistance.
==== Bodies in motion ====
Aristotelian physics argued that the Earth must not move, as humans are unable to perceive the effects of this motion. A popular justification of this is the experiment of an archer shooting an arrow straight up into the air. If the Earth were moving, Aristotle argued, the arrow should fall in a different location than the launch point. Galileo refuted this argument in Dialogue Concerning the Two Chief World Systems. He provided the example of sailors aboard a boat at sea. The boat is obviously in motion, but the sailors are unable to perceive this motion. If a sailor were to drop a weighted object from the mast, this object would fall at the base of the mast rather than behind it (due to the ship's forward motion). This is the result of the simultaneous horizontal and vertical motions of the ship, sailors, and ball.
==== Relativity of motions ====
One of Galileo's experiments regarding falling bodies was that describing the relativity of motions, explaining that, under the right circumstances, "one motion may be superimposed upon another without effect upon either...". In Two New Sciences, Galileo made his case for this argument and it would become the basis of Newton's first law, the law of inertia.
He poses the question of what happens to a ball dropped from the mast of a sailing ship or an arrow fired into the air on the deck. According to Aristotle's physics, the ball dropped should land at the stern of the ship as it falls straight down from the point of origin. Likewise the arrow when fired straight up should not land in the same spot if the ship is in motion. Galileo offers that there are two independent motions at play. One is the accelerating vertical motion caused by gravity while the other is the uniform horizontal motion caused by the moving ship which continues to influence the trajectory of the ball through the principle of inertia. The combination of these two motions results in a parabolic curve. The observer cannot identify this parabolic curve because the ball and observer share the horizontal movement imparted to them by the ship, meaning only the perpendicular, vertical motion is perceivable. Surprisingly, nobody had tested this theory with the simple experiments needed to gain a conclusive result until Pierre Gassendi published the results of said experiments in his letters entitled De Motu Impresso a Motore Translato (1642).
== Infinity ==
The book also contains a discussion of infinity. Galileo considers the example of numbers and their squares. He starts by noting that:
It cannot be denied that there are as many [squares] as there are numbers because every number is a [square] root of some square:
1 ↔ 1, 2 ↔ 4, 3 ↔ 9, 4 ↔ 16, and so on.
But he notes what appears to be a contradiction:
Yet at the outset we said there are many more numbers than squares, since the larger portion of them are not squares. Not only so, but the proportionate number of squares diminishes as we pass to larger numbers.
(In modern language, there is a bijection between the set of positive integers N and the set of squares S, and S is a proper subset of N of density zero.) He resolves the contradiction by denying the possibility of comparing infinite numbers (and of comparing infinite and finite numbers):
We can only infer that the totality of all numbers is infinite, that the number of squares is infinite, and that the number of their roots is infinite; neither is the number of squares less than the totality of all numbers, nor the latter greater than the former; and finally the attributes "equal", "greater", and "less", are not applicable to infinite, but only to finite, quantities.
This conclusion, that ascribing sizes to infinite sets should be ruled impossible, owing to the contradictory results obtained from these two ostensibly natural ways of attempting to do so, is a resolution to the problem that is consistent with, but less powerful than, the methods used in modern mathematics. The resolution to the problem may be generalized by considering Galileo's first definition of what it means for sets to have equal sizes, that is, the ability to put them in one-to-one correspondence. This turns out to yield a way of comparing the sizes of infinite sets that is free from contradictory results.
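Both halves of the paradox — the one-to-one pairing and the vanishing proportion of squares — can be illustrated numerically. A small modern sketch (not part of the original text; the counts rely on the fact that the squares not exceeding N are exactly 1², 2², …, ⌊√N⌋²):

```python
import math

# One-to-one correspondence: pair each positive integer n with its square,
# exactly as in Galileo's 1 <-> 1, 2 <-> 4, 3 <-> 9, 4 <-> 16, ...
pairs = [(n, n * n) for n in range(1, 11)]

# Yet the squares thin out: the number of squares not exceeding N is
# floor(sqrt(N)), so their proportion among the first N integers shrinks.
for N in (100, 10_000, 1_000_000):
    print(N, math.isqrt(N) / N)
# 100 -> 0.1, 10000 -> 0.01, 1000000 -> 0.001: density zero in the limit
```

This is precisely the modern statement in parentheses above: the bijection makes the two sets "equal" in the sense of correspondence, while the density of S in N is zero.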
These issues of infinity arise from problems of rolling circles. If two concentric circles of different radii roll along lines, then if the larger does not slip it appears clear that the smaller must slip. But in what way? Galileo attempts to clarify the matter by considering hexagons and then extending to rolling 100 000-gons, or n-gons, where he shows that a finite number of finite slips occur on the inner shape. Eventually, he concludes "the line traversed by the larger circle consists then of an infinite number of points which completely fill it; while that which is traced by the smaller circle consists of an infinite number of points which leave empty spaces and only partly fill the line," which would not be considered satisfactory now.
== Reactions by commentators ==
So great a contribution to physics was Two New Sciences that scholars have long maintained that the book anticipated Isaac Newton's laws of motion.
Galileo ... is the father of modern physics—indeed of modern science
Part of Two New Sciences was pure mathematics, as has been pointed out by the mathematician Alfréd Rényi, who said that it was the most significant book on mathematics in over 2000 years: Greek mathematics did not deal with motion, and so the Greeks never formulated mathematical laws of motion, even though Archimedes developed differentiation and integration. Two New Sciences opened the way to treating physics mathematically by treating motion mathematically for the first time. The Greek mathematician Zeno had designed his paradoxes to prove that motion could not be treated mathematically, and that any attempt to do so would lead to paradoxes. (He regarded this as an inevitable limitation of mathematics.) Aristotle reinforced this belief, saying that mathematics could only deal with abstract objects that were immutable. Galileo used the very methods of the Greeks to show that motion could indeed be treated mathematically. His idea was to separate out the paradoxes of the infinite from Zeno's paradoxes. He did this in several steps. First, he showed that the infinite sequence S of the squares 1, 4, 9, 16, ... contained as many elements as the sequence N of all positive integers (infinity); this is now referred to as Galileo's paradox. Then, using Greek-style geometry, he showed that a short line interval contained as many points as a longer interval. At some point he formulates the general principle that a smaller infinite set can have just as many points as a larger infinite set containing it. It was then clear that Zeno's paradoxes on motion resulted entirely from this paradoxical behavior of infinite quantities. Rényi said that, having removed this 2000-year-old stumbling block, Galileo went on to introduce his mathematical laws of motion, anticipating Newton.
=== Gassendi's thoughts ===
Pierre Gassendi defended Galileo's opinions in his book, De Motu Impresso a Motore Translato. In Howard Jones' article, Gassendi's Defence of Galileo: The Politics of Discretion, Jones says Gassendi displayed an understanding of Galileo's arguments and a clear grasp of their implications for the physical objections to the earth's motion.
=== Koyré's thoughts ===
The law of falling bodies was published by Galileo in 1638. But in the 20th century some authorities challenged the reality of Galileo's experiments. In particular, the French historian of science Alexandre Koyré based his doubt on the fact that the experiments reported in Two New Sciences to determine the law of acceleration of falling bodies required accurate measurements of time, which appeared to be impossible with the technology of 1600. According to Koyré, the law was created deductively, and the experiments were merely illustrative thought experiments. In fact, Galileo's water clock (described above) provided sufficiently accurate measurements of time to confirm his conjectures.
Later research, however, has validated the experiments. The experiments on falling bodies (actually rolling balls) were replicated using the methods described by Galileo, and the precision of the results was consistent with Galileo's report. Later research into Galileo's unpublished working papers from 1604 clearly showed the reality of the experiments and even indicated the particular results that led to the time-squared law.
== See also ==
De Motu Antiquiora (Galileo's earliest investigations of the motion of falling bodies)
== Notes ==
== References ==
Drake, Stillman, translator (1974). Two New Sciences, University of Wisconsin Press. ISBN 0-299-06404-2. A new translation including sections on centers of gravity and the force of percussion.
Drake, Stillman (1978). Galileo At Work. Chicago: University of Chicago Press. ISBN 978-0-226-16226-3.
Henry Crew and Alfonso de Salvio, translators, [1914] (1954). Dialogues Concerning Two New Sciences, Dover Publications Inc., New York, NY. ISBN 978-0-486-60099-4. The classic source in English, originally published by McMillan (1914).
Jones, Howard, "Gassendi's Defense of Galileo: The Politics of Discretion", Medieval Renaissance Texts and Studies 58, 1988.
Titles of the first editions taken from Leonard C. Bruno 1989, The Landmarks of Science: from the Collections of the Library of Congress. ISBN 0-8160-2137-6 Q125.B87
Galileo Galilei, Discorsi e dimostrazioni matematiche intorno a due nuove scienze attinenti la meccanica e i movimenti locali (pag.664, of Claudio Pierini) publication Cierre, Simeoni Arti Grafiche, Verona, 2011, ISBN 9788895351049.
Wallace, William A., "Galileo and Reasoning Ex Suppositione: The Methodology of the Two New Sciences". PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, Vol. 1974 (1974), pp. 79–104
Salvia, Stefano (2014). "'Galileo's Machine': Late Notes on Free Fall, Projectile Motion, and the Force of Percussion (ca. 1638–1639)". Physics in Perspective. 16 (4): 440–460. Bibcode:2014PhP....16..440S. doi:10.1007/s00016-014-0149-1. S2CID 122967350.
De Angelis, Alessandro (2021). Discorsi e Dimostrazioni Matematiche di Galileo Galilei per il Lettore Moderno (in Italian). Torino: Codice. ISBN 978-8875789305.
De Angelis, Alessandro (2021). Galilei's Two New Sciences for Modern Readers. Heidelberg: Springer Nature. ISBN 978-3030719524. With prefaces by Ugo Amaldi and Telmo Pievani.
== External links ==
(in Italian) Italian text with figures
English translation by Crew and de Salvio, with original figures | Wikipedia/Two_New_Sciences |
Mechanics (Greek: Μηχανικά; Latin: Mechanica), also called Mechanical Problems or Questions of Mechanics, is a text traditionally attributed to Aristotle, but generally regarded as spurious (cf. Pseudo-Aristotle). Thomas Winter has suggested that the author was Archytas, while Michael Coxhead says that it is only possible to conclude that the author was one of the Peripatetics.
During the Renaissance, an edition of this work was published by Francesco Maurolico. A Latin translation was made by Vettor Fausto, dedicated to Giovanni Badoer in 1517.
== See also ==
Aristotle's wheel paradox
== Notes ==
== External links ==
Greek Wikisource has original text related to this article: Μηχανικά
Pseudo-Aristotle, Mechanica - Greek text and English translation
Opuscula public domain audiobook at LibriVox | Wikipedia/Mechanics_(Aristotle) |
Keyline design is a landscaping technique of maximizing the beneficial use of the water resources of a tract of land. The "keyline" is a specific topographic feature related to the natural flow of water on the tract. Keyline design is a system of principles and techniques of developing rural and urban landscapes to optimize use of their water resources.
Australian farmer and engineer P. A. Yeomans invented and developed Keyline design, which he described in his books The Keyline Plan, The Challenge of Landscape, Water For Every Farm, and The City Forest.
== Application ==
P. A. Yeomans published the first book on Keyline design in 1954. Yeomans described a system of amplified contour ripping to control rainfall runoff and enable fast flood irrigation of undulating land without the need for terracing it.
Keyline designs include irrigation dams equipped with through-the-wall lockpipe systems, gravity feed irrigation, stock water, and yard water. Graded earthen channels may be interlinked to broaden the catchment areas of high dams, conserve the height of water, and transfer rainfall runoff into the most efficient high dam sites. Roads follow both ridge lines and water channels to provide easier movement across the land.
== Keyline Scale of Permanence ==
The foundation of Yeomans' Keyline design system is the Keyline Scale of Permanence (KSOP), which was the outcome of 15 years of adaptive experimentation on his properties Yobarnie and Nevallan. The Scale identifies the environmental elements of typical farms and orders them according to their degree of permanence as follows:
Climate
Landshape (topography)
Water supply
Roads and means of access
Trees
Structures (edifices)
Subdivisional fences
Soil
Keyline design considers these elements in planning the placement of water storage features, roads, trees, edifices, and fences. On undulating land, keyline design involves identifying ridges, valleys, and natural water courses and designing with them in mind in order to optimize water storage sites. Constructing interconnecting channels may be part of such optimization.
The identified natural water lines delineate the possible locations for the various less permanent elements, e.g., roads, fences, trees, and edifices, which if so located would help optimize the natural potential of the land in question.
== Keypoint ==
In a smooth, grassy valley, a location designated the keypoint is identified at the point where the lower, more level portion of the primary valley floor abruptly begins to steepen. The keyline of this primary valley is determined by pegging a contour line through the keypoint that conforms to the natural shape of the valley, such that all points on the keyline are at the same elevation as the keypoint. Contour plowing both above and below the keyline and parallel to it is necessarily "off-contour", but the resulting pattern tends to drift rainwater runoff away from the center of the valley and, incidentally, to prevent erosion of its soil.
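The contour-through-the-keypoint idea can be sketched computationally. The following toy Python snippet (an illustration only, not part of Yeomans' published method; the gridded elevation model and tolerance are hypothetical) collects the grid cells lying at the keypoint's elevation:

```python
# Toy sketch: approximate the keyline as the set of grid cells in a
# digital elevation model (DEM) whose elevation matches the keypoint's.
def keyline_cells(dem, keypoint, tol=0.25):
    """Return (row, col) cells within `tol` metres of the keypoint's elevation."""
    kr, kc = keypoint
    kp_elev = dem[kr][kc]
    return [(r, c)
            for r, row in enumerate(dem)
            for c, elev in enumerate(row)
            if abs(elev - kp_elev) <= tol]

# A toy primary valley: the centre column is the valley floor,
# and elevation rises toward the ridges on either side.
dem = [[10.0, 8.0, 10.0],
       [11.0, 9.0, 11.0],
       [12.0, 10.0, 12.0]]
cells = keyline_cells(dem, keypoint=(2, 1))  # keypoint elevation = 10.0 m
```

In practice the keyline is pegged in the field; a real implementation would interpolate a continuous contour rather than select whole cells.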
Cultivation conforming to Keyline design for ridges is carried out parallel to any suitable contour, but only on the high side of the contour's guide line. This process develops a pattern of off-contour cultivation in which all the rip marks made in the soil slope down toward the center of the ridge. This pattern of cultivation allows more time for water to infiltrate. Cultivation following the Keyline pattern also enables controlled flood irrigation of undulating land, which increases the rate of development of deep, fertile soil.
In many nations, including Australia, it is important to optimize the infiltration of rainfall, and Keyline cultivation accomplishes this while delaying the concentration of runoff that could damage the land. Yeomans' technique differs from traditional contour plowing in several important respects. Random contour plowing also becomes off-contour but usually with the opposite effect on runoff, namely causing it to quickly run off ridges and concentrate in valleys. The limitations of the traditional system of soil conservation, with its "safe disposal" approach to farm water, was an important motivation to develop Keyline design.
== Applications ==
David Holmgren, one of the founders of permaculture, used Yeomans' Keyline design extensively in the formulation of his principles of permaculture and the design of sustainable human settlements and organic farms.
Darren J. Doherty has extensive global experience in Keyline design, development, management, and education. He uses Keyline as the basis for his Regrarians framework, which he considers a revision and synthesis of Keyline design, permaculture, holistic management, and several other innovative, human ecological frameworks into a coherent process-based system of design and management of regenerative economies.
A topographical example of Keyline design is available at (37.159154°S 144.252248°E / -37.159154; 144.252248).
Keyline design also includes principles of rapidly enhancing soil fertility. They are explored in Priority One by P. A. Yeomans' son, Allan. Yeomans and his sons were also instrumental in the design and production of special plows and other equipment for Keyline cultivation.
== Further Development: Seepage Line System ==
An adaptation of classical Keyline design has been introduced by the German agronomist Philipp Gerhardt (b. 1983). Based on field experience, Gerhardt observed that traditional keylines often fail to manage increasingly frequent heavy rainfall and can interfere with modern farm operations. In response, he developed the Seepage Line System, a flexible, iterative approach that replaces fixed keylines with custom-shaped contour ditches tailored to site-specific topography.
These individually designed lines enhance water infiltration, reduce erosion, and allow for overflow management during extreme weather. The system also integrates well with agroforestry and parallel tillage, helping maintain operational efficiency.
German federal agencies have recognized this adaptation as a valuable tool for climate-resilient land use. It is promoted for its ability to improve soil moisture retention, limit erosion, and buffer the effects of droughts and heavy rains.
The system has been applied by farms such as Schreiber GbR in northern Germany and is being explored in other regions. Research and field trials in Lower Saxony and Hesse confirm its effectiveness in reducing surface runoff and supporting water-retentive agroforestry systems.
Documented benefits include improved integration of tree crops, enhanced water retention, and more stable yields under climate stress.
== See also ==
== References ==
=== Notations ===
Yeomans, P.A. (1954). The Keyline Plan (Free online). OCLC 21106239.
Yeomans, P.A. (1958). The Challenge of Landscape : the development and practice of keyline. Sydney NSW: Keyline. OCLC 10466838. Archived from the original (Free online) on 2015-05-03. Retrieved 2009-01-22.
Yeomans, P.A. (1973). Water for Every Farm: A practical irrigation plan for every Australian property. Sydney NSW: K.G. Murray. ISBN 0-646-12954-6. ISBN 0-909325-29-4.
Yeomans, P.A. (1971). The City Forest. Keyline. ISBN 0-9599578-0-4. OCLC 515050. Archived from the original (Free online) on 2015-05-19. Retrieved 2009-01-22.
Yeomans, P.A.; Yeomans, K.B. (1993). Water for Every Farm — Yeomans Keyline Plan. Keyline Designs. ISBN 0646129546; 2002 edition ISBN 0646418750.
Yeomans, P.A.; Yeomans, K.B. (2008). Water for Every Farm — Yeomans Keyline Plan. www.keyline.com.au. ISBN 978-1438225784.
Yeomans, A. (2005). Priority One: Together we Can Beat Global Warming. Keyline. ISBN 0-646-43805-0. Archived from the original (Online) on 2013-07-29.
MacDonald-Holmes, J. "Geographical and Topographical Basis of Keyline". Archived from the original on August 15, 2010.
Spencer, L (2006). "Keyline and Fertile Futures". Archived from the original on December 9, 2009.
=== Footnotes ===
== External links == | Wikipedia/Keyline_design |
Altium Designer (AD) is an electronic design automation software package for printed circuit board (PCB) design. It is developed by the American software company Altium Limited. Altium Designer was formerly marketed under the Protel brand.
== History ==
Altium Designer was originally launched in 2005 by Altium, a company founded as Protel Systems Pty Ltd. Its roots go back to 1985, when the company launched the DOS-based PCB design tool Protel PCB (which later evolved into Autotrax and Easytrax). Originally it was sold only in Australia; from 1986, Protel PCB was marketed internationally by HST Technology. The product became available in the United States, Canada, and Mexico beginning in 1986, marketed by San Diego–based ACCEL Technologies, Inc., under the name Tango PCB. In 1987, Protel launched the circuit diagram editor Protel Schematic for DOS.
In 1991, Protel released Advanced Schematic and Advanced PCB 1.0 for Windows (1991–1993), followed by Advanced Schematic/PCB 2.x (1993–1995) and 3.x (1995–1998). In 1998, Protel 98 consolidated all components, including Advanced Schematic and Advanced PCB, into one environment. Protel 99 in 1999 introduced the first integrated 3D visualization of the PCB assembly. It was followed by Protel 99 SE in 2000. Protel DXP was issued in 2003, Protel 2004 in 2004, Altium Designer 6.0 in 2005. Altium Designer version 6.8 from 2007 was the first to offer 3D visualization and clearance checking of PCBs directly within the PCB editor.
== Features ==
Altium Designer's suite encompasses four main functional areas: schematic capture, 3D PCB design, field-programmable gate array (FPGA) development, and release/data management. It integrates with several component distributors for access to manufacturers' data. It also has interactive 3D editing of the board and MCAD export to STEP. Altium 365, a cloud-based infrastructure platform, connects all key stakeholders and disciplines for PCB design. This includes mechanical designers, engineers, PCB designers, parts procurement, fabrication, and assembly. Altium 365 hosting allows designers to find, configure and use parts.
== File formats ==
Altium Designer supports import and export of various PCB and CAD data exchange file formats. The tool's native file formats are the binary formats *.SchDoc and *.PcbDoc.
It can also import and export AutoCAD *.dwg/*.dxf and ISO 10303-21 STEP file formats.
== See also ==
Comparison of EDA software
List of free electronics circuit simulators
== References ==
== External links ==
Official website, Altium
Altium Designer Overview - PCB Design Tool
What's New in Altium Designer | Wikipedia/Altium_Designer |
Procedural modeling is an umbrella term for a number of techniques in computer graphics to create 3D models and textures from sets of rules that may be easily changed over time. L-systems, fractals, and generative modeling are procedural modeling techniques since they apply algorithms for producing scenes. The set of rules may be embedded in the algorithm, configurable by parameters, or kept separate from the evaluation engine. The output is called procedural content, which can be used in computer games or films, uploaded to the internet, or edited manually by the user. Procedural models often exhibit database amplification, meaning that large scenes can be generated from a much smaller number of rules. If the employed algorithm produces the same output every time, the output need not be stored. Often, it suffices to start the algorithm with the same random seed to achieve this.
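The rule-based generation and database amplification mentioned above can be illustrated with a minimal L-system expander (a deterministic sketch, not the API of any particular package): two short rewrite rules generate strings whose length grows exponentially, so only the axiom and the rules need to be stored.

```python
# Minimal deterministic L-system: repeatedly rewrite each symbol by its rule;
# symbols without a rule are copied unchanged.
def l_system(axiom, rules, iterations):
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's original "algae" system: A -> AB, B -> A.
print(l_system("A", {"A": "AB", "B": "A"}, 4))  # prints "ABAABABA"
```

Because the expansion is deterministic, the output never needs to be stored: re-running the rules (or, for stochastic variants, re-running with the same random seed) regenerates it exactly.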
Although all modeling techniques on a computer require algorithms to manage and store data at some point, procedural modeling focuses on creating a model from a rule set rather than editing it manually through user input, which makes the model easier to modify later. The parameters that define a model may depend on parameters or geometry from another model, making the modeling process very flexible. Procedural modeling is often applied when it would be too cumbersome to create a 3D model using generic 3D modelers, or when more specialized tools are required. This is often the case for plants, architecture, or landscapes.
== Procedural modeling suites ==
This is a list of Wikipedia articles about specific procedural modeling software products.
== See also ==
Parametric models in statistics
Parametric design in Computer-Aided Design
Procedural generation in video games
== References ==
== External links ==
"Texturing and Modeling: A Procedural Approach", Ebert, D., Musgrave, K., Peachey, P., Perlin, K., and Worley, S
Procedural Inc.
CityEngine
"Procedural Modeling of Cities", Yoav I H Parish, Pascal Müller
"Procedural Modeling of Buildings", Pascal Müller, Peter Wonka, Simon Haegler, Andreas Ulmer and Luc Van Gool
"King Kong – The Building of 1933 New York City", Chris White, Weta Digital. Siggraph 2006.
Tree Editors Compared:
List at Vterrain.org
List at TreeGenerator
"LAI4D Reference manual", Usage of the "program" entity type for algorithmic modelling with JavaScript | Wikipedia/Parametric_modeling |
Building design, also called architectural design, refers to the broadly based architectural, engineering, and technical applications involved in the design of buildings. All building projects require the services of a building designer, typically a licensed architect. Smaller, less complicated projects often do not require a licensed professional, and the design of such projects is often undertaken by building designers, draftspersons, interior designers (for interior fit-outs or renovations), or contractors. Larger, more complex building projects require the services of many professionals trained in specialist disciplines, usually coordinated by an architect.
== Occupations ==
=== Architect ===
An architect is a person trained in the planning, design and supervision of the construction of buildings. Professionally, an architect's decisions affect public safety, and thus an architect must undergo specialized training consisting of advanced education and a practicum (or internship) for practical experience to earn a license to practice architecture. In most of the world's jurisdictions, the professional and commercial use of the term "architect" is legally protected.
=== Building engineer ===
Building engineering typically includes the services of electrical, mechanical and structural engineers.
=== Draftsperson ===
A draftsperson or documenter has attained a certificate or diploma in architectural drafting (or equivalent training), and provides services relating to preparing construction documents rather than building design. Some draftspersons are employed by architectural design firms and building contractors, while others are self-employed.
=== Building designer ===
In many places, building codes and legislation of professions allow persons to design single family residential buildings and, in some cases, light commercial buildings without an architectural license. As such, "Building designer" is a common designation in the United States, Canada, Australia and elsewhere for someone who offers building design services but is not a licensed architect or engineer.
Anyone may use the title of "building designer" in the broadest sense. In many places, a building designer may achieve certification demonstrating a higher level of training. In the U.S., the National Council of Building Designer Certification (NCBDC), an offshoot of the American Institute of Building Design, administers a program leading to the title of Certified Professional Building Designer (CPBD). Usually, building designers are trained as architectural technologists or draftspersons; they may also be architecture school graduates that have not completed licensing requirements.
Many building designers are known as "residential" or "home designers", since they focus mainly on residential design and remodeling. In the U.S. state of Nevada, "Residential Designer" is a regulated term for those who are registered as such under Nevada State Board of Architecture, Interior Design and Residential Design, and one may not legally represent oneself in a professional capacity without being currently registered.
In Australia where use of the term architect and some derivatives is highly restricted but the architectural design of buildings has very few restrictions in place, the term building designer is used extensively by people or design practices who are not registered by the relevant State Board of Architects. In Queensland the term building design is used in legislation which licenses practitioners as part of a broader building industry licensing system. In Victoria there is a registration process for building designers and in other States there is currently no regulation of the profession. A Building Designers Association operates in each state to represent the interests of building designers.
=== Building surveyor ===
Building surveyors are technically minded general practitioners in the United Kingdom, Australia and elsewhere, trained much like architectural technologists. In the UK, the knowledge and expertise of the building surveyor is applied to various tasks in the property and construction markets, including building design for smaller residential and light commercial projects. This aspect of the practice is similar to other European occupations, most notably the geometra in Italy, but also the géomètre in France, Belgium and Switzerland.
Building surveyors are also able to prepare bills of quantities for new works as well as renovation, maintenance, or rehabilitation works.
The profession of Building Surveyor does not exist in the US. The title Surveyor refers almost exclusively to Land surveyors. Architects, Building Designers, Residential Designers, Construction Managers, and Home Inspectors perform some or all of the work of the U.K. Building Surveyor.
== See also ==
Architectural designer
Architectural design values - intentions which influence design decision of architects
Facility management
Landscape architect
Urban design
== References == | Wikipedia/Building_design |
Design by committee is a pejorative term for a project that has many designers involved but no unifying plan or vision.
== Usage of the term ==
The term refers to suboptimal traits that such a process may produce, such as needless complexity, internal inconsistency, logical flaws, banality, and the lack of a unifying vision, as a result of having to compromise between the requirements and viewpoints of the participants, particularly in the presence of poor leadership or poor technical knowledge. This consensus-based design process contrasts with autocratic design, or design by dictator, in which the project leader decides on the design. The difference is that in an autocratic style, members of the organization are not included and the final outcome is the responsibility of the leader. The phrase "a camel is a horse designed by committee" is often used to describe design by committee.
The term is especially common in technical parlance and stresses the need for technical quality over political feasibility. The proverb "too many cooks spoil the broth" expresses the same idea. The term is also common in other fields of design such as graphic design, architecture or industrial design. In automotive design, this process is often blamed for unpopular or poorly designed cars.
== Examples ==
The term is commonly used in information and communications technology, especially when referring to the design of languages and technical standards, as demonstrated by USENET archives.
An alleged example is the DR Class 119 diesel locomotive, built from 1976 to 1985 in communist Romania for East Germany due to Soviet-era Comecon restrictions. Only the USSR was allowed to continue building powerful diesel engines, but Soviet designs were too heavy for East Germany, where some lines were limited to locomotives with low axle loads. Not allowed to continue building its own designs, East Germany had the Romanian 23 August Works assemble a successor to the DR Class 118. Built from a mixture of parts from several planned-economy countries, even including West German designs to fill gaps, the Class 119 proved unreliable. Only 200 were built in ten years, as East Germany cancelled the contract prematurely. No other country purchased the Class 119, and after 1990 the unified German railway had to operate them until they could be replaced.
An example of a technical decision said to be a typical result of design by committee is the Asynchronous Transfer Mode (ATM) cell size of 53 bytes. The choice of 53 bytes was political rather than technical. When the CCITT was standardizing ATM, parties from the United States wanted a 64-byte payload. Parties from Europe wanted 32-byte payloads. Most of the European parties eventually came around to the arguments made by the Americans, but France and a few others held out for a shorter cell length of 32 bytes. A 53-byte size (48 bytes plus 5 byte header) was the compromise chosen.
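The arithmetic behind the compromise is easy to check. In the sketch below, the header and payload sizes are the standard ATM figures; the 1500-byte frame is just an illustrative payload, and AAL framing overhead is ignored:

```python
# The 53-byte ATM cell: a 5-byte header carrying a 48-byte payload.
HEADER, PAYLOAD = 5, 48
CELL = HEADER + PAYLOAD              # 53 bytes, the standardized compromise
overhead = HEADER / CELL             # header overhead, roughly 9.4% of each cell
# Cells needed to carry a 1500-byte frame (ceiling division; AAL framing ignored).
cells_needed = -(-1500 // PAYLOAD)   # 32 cells
```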
An example described as naming by committee was a school near Liverpool formed by the merger of several other schools: it was officially named the "Knowsley Park Centre for Learning, Serving Prescot, Whiston and the Wider Community" in 2009, listing as a compromise all the schools and communities merged into it. The name lasted seven years before its headmistress, who called the name "so embarrassing", cut it to simply "The Prescot School".
The F-35 Joint Strike Fighter has been described as designed by committee, due to running behind schedule, going over budget, and underperforming expectations. It was originally conceived to serve the widely varying needs of multiple branches of the military all in one simple platform. This multi-interest approach has been pinpointed as largely responsible for its bloat, along with stacking too many new and unproven features into its design. Uneven progress across areas and unexpected challenges meant that major technical fixes and redesigns could halt the program's movement, requiring planes to be remedied even as they were being delivered.
The C++ programming language has been described as such by computer scientist Ken Thompson. C++ is also literally designed by a committee.
Apple Inc. reportedly uses remote controls designed by other companies with as many as 78 buttons as an example of design by committee when training employees.
The Washington Post described the Pontiac Aztek as a vehicle designed by committee, which was largely designed based on feedback from extensive focus group testing, and was released to negative reviews and poor sales.
== See also ==
Condorcet paradox
Groupthink
The blind men and the elephant
Wisdom of the crowd
Tiger team
Overengineering
== References ==
== External links ==
Rod Johnson explains what is wrong with design by committee in the development of Java EE
The dictionary definition of too many cooks spoil the broth at Wiktionary | Wikipedia/Design_by_committee |
Design Closure is a part of the digital electronic design automation workflow by which an integrated circuit (i.e. VLSI) design is modified from its initial description to meet a growing list of design constraints and objectives.
Every step in the IC design (such as static timing analysis, placement, routing, and so on) is already complex and often forms its own field of study. This article, however, looks at the overall design closure process, which takes a chip from its initial design state to the final form in which all of its design constraints are met.
== Introduction ==
Every chip starts off as someone’s idea of a good thing: "If we can make a part that performs function X, we will all be rich!" Once the concept is established, someone from marketing says "To make this chip profitably, it must cost $C and run at frequency F." Someone from manufacturing says "To meet this chip’s targets, it must have a yield of Y%." Someone from packaging says “It must fit in the P package and dissipate no more than W watts.” Eventually, the team generates an extensive list of all the constraints and objectives they must meet to manufacture a product that can be sold profitably. The management then forms a design team, which consists of chip architects, logic designers, functional verification engineers, physical designers, and timing engineers, and assigns them to create a chip to the specifications.
=== Constraints vs Objectives ===
The distinction between constraints and objectives is straightforward: a constraint is a design target that must be met for the design to be successful. For example, a chip may be required to run at a specific frequency so it can interface with other components in a system. In contrast, an objective is a design target where more (or less) is better. For example, yield is generally an objective, which is maximized to lower manufacturing cost. For the purposes of design closure, the distinction between constraints and objectives is not important; this article uses the words interchangeably.
== Evolution of the Design Closure Flow ==
Designing a chip used to be a much simpler task. In the early days of VLSI, a chip consisted of a few thousand logic circuits that performed a simple function at speeds of a few MHz. Design closure was simple: if all of the necessary circuits and wires "fit", the chip would perform the desired function.
Modern design closure has grown orders of magnitude more complex. Modern logic chips can have tens to hundreds of millions of logic elements switching at speeds of several GHz. This improvement has been driven by Moore’s law of scaling of technology, and has introduced many new design considerations. As a result, a modern VLSI designer must consider the performance of a chip against a list of dozens of design constraints and objectives including performance, power, signal integrity, reliability, and yield. In response to this growing list of constraints, the design closure flow has evolved from a simple linear list of tasks to a very complex, highly iterative flow such as the following simplified ASIC design flow:
== Reference ASIC Design Flow ==
Concept phase: Functional objectives and architecture of a chip are developed.
Logic design: Architecture is implemented in a register transfer level (RTL) language, then simulated to verify that it performs the desired functions. This includes functional verification.
Floorplanning: The RTL of the chip is assigned to gross regions of the chip, input/output (I/O) pins are assigned and large objects (arrays, cores, etc.) are placed.
Logic synthesis: The RTL is mapped into a gate-level netlist in the target technology of the chip.
Design for Testability: Test structures such as scan chains are inserted.
Placement: The gates in the netlist are assigned to nonoverlapping locations on the chip.
Logic/placement refinement: Iterative logical and placement transformations to close performance and power constraints.
Clock insertion: Balanced buffered clock trees are introduced into the design.
Routing: The wires that connect the gates in the netlist are added.
Postwiring optimization: Remaining performance, noise, and yield violations are removed.
Design for manufacturability: The design is modified, where possible, to make it as easy as possible to produce.
Signoff checks: Since errors are expensive, time consuming and hard to spot, extensive error checking is the rule, making sure the mapping to logic was done correctly, and checking that the manufacturing rules were followed faithfully.
Tapeout and mask generation: The design data is turned into photomasks in mask data preparation.
== Evolution of design constraints ==
The purpose of the flow is to take a design from concept phase to working chip. The complexity of the flow is a direct result of the addition and evolution of the list of design closure constraints. To understand this evolution it is important to understand the life cycle of a design constraint. In general, design constraints influence the design flow via the following five-stage evolution:
Early warnings: Before chip issues begin occurring, academics and industry visionaries make dire predictions about the future impact of some new technology effect.
Hardware problems: Sporadic hardware failures start showing up in the field due to the new effect. Postmanufacturing redesign and hardware re-spins are required to get the chip to function.
Trial and error: Constraints on the effect are formulated and used to drive post-design checking. Violations of the constraint are fixed manually.
Find and repair: Large number of violations of the constraint drives the creation of automatic post-design analysis and repair flows.
Predict and prevent: Constraint checking moves earlier in the flow using predictive estimations of the effect. These drive optimizations to prevent violations of the constraint.
A good example of this evolution can be found in the signal integrity constraint. In the mid-1990s (180 nm node), industry visionaries were describing the impending dangers of coupling noise long before chips were failing. By the mid-late 1990s, noise problems were cropping up in advanced microprocessor designs. By 2000, automated noise analysis tools were available and were used to guide manual fix-up. The number of noise problems identified by the analysis tools quickly became too great to correct manually. In response, CAD companies developed the noise avoidance flows that are currently in use in the industry.
At any point in time, the constraints in the design flow are at different stages of their life cycle. At the time of this writing, for example, performance optimization is the most mature and is well into the fifth phase with the widespread use of timing-driven design flows. Power- and defect-oriented yield optimization is well into the fourth phase; power supply integrity, a type of noise constraint, is in the third phase; circuit-limited yield optimization is in the second phase, etc. A list of the first-phase impending constraint crises can always be found in the International Technology Roadmap for Semiconductors (ITRS) 15-year-outlook technology roadmaps.
As a constraint matures in the design flow, it tends to work its way from the end of the flow to the beginning. As it does this, it also tends to increase in complexity and in the degree that it contends with other constraints. Constraints tend to move up in the flow due to one of the basic paradoxes of design: accuracy vs. influence. Specifically, the earlier in a design flow a constraint is addressed, the more flexibility there is to address the constraint. Ironically, the earlier one is in a design flow, the more difficult it is to predict compliance. For example, an architectural decision to pipeline a logic function can have a far greater impact on total chip performance than any amount of post-routing fix-up. At the same time, accurately predicting the performance impact of such a change before the chip logic is synthesized, let alone placed or routed, is very difficult. This paradox has shaped the evolution of the design closure flow in several ways. First, it requires
that the design flow is no longer composed of a linear set of discrete steps. In the early stages of VLSI it was sufficient to break the design into discrete stages, i.e., first do logic synthesis, then do placement, then do routing. As the number and complexity of design closure constraints has increased, the linear design flow has broken down. In the past, if there were too many timing constraint violations left after routing, it was necessary to loop back, modify the tool settings slightly, and re-execute the previous placement steps. If the constraints were still not met, it was necessary to reach further back in the flow, modify the chip logic, and repeat the synthesis and placement steps. This type of looping is both time-consuming and unable to guarantee convergence; i.e., it is possible to loop back in the flow to correct one constraint violation only to find that the correction induced another unrelated violation.
== See also ==
Timing closure
Electronic design automation
Design flow (EDA)
Integrated circuit design
== References ==
Electronic Design Automation For Integrated Circuits Handbook, by Lavagno, Martin, and Scheffer, ISBN 0-8493-3096-3 A survey of the field of electronic design automation. In particular, this article is derived (with permission) from the introduction of Chapter 10, Volume II, Design Closure by John Cohn.
Building information modeling (BIM) is an approach involving the generation and management of digital representations of the physical and functional characteristics of buildings or other physical assets and facilities. BIM is supported by various tools, processes, technologies and contracts. Building information models (BIMs) are computer files (often but not always in proprietary formats and containing proprietary data) which can be extracted, exchanged or networked to support decision-making regarding a built asset. BIM software is used by individuals, businesses and government agencies who plan, design, construct, operate and maintain buildings and diverse physical infrastructures, such as water, refuse, electricity, gas, communication utilities, roads, railways, bridges, ports and tunnels.
The concept of BIM has been in development since the 1970s, but it only became an agreed term in the early 2000s. The development of standards and the adoption of BIM has progressed at different speeds in different countries. Developed by buildingSMART, Industry Foundation Classes (IFCs) – data structures for representing information – became an international standard, ISO 16739, in 2013, and BIM process standards developed in the United Kingdom from 2007 onwards formed the basis of an international standard, ISO 19650, launched in January 2019.
== History ==
The concept of BIM has existed since the 1970s. The first software tools developed for modeling buildings emerged in the late 1970s and early 1980s, and included workstation products such as Chuck Eastman's Building Description System and GLIDE, RUCAPS, Sonata, Reflex and Gable 4D Series. The early applications, and the hardware needed to run them, were expensive, which limited widespread adoption.
The pioneering role of applications such as RUCAPS, Sonata and Reflex has been recognized by Laiserin as well as the UK's Royal Academy of Engineering; former GMW employee Jonathan Ingram worked on all three products. What became known as BIM products differed from architectural drafting tools such as AutoCAD by allowing the addition of further information (time, cost, manufacturers' details, sustainability, and maintenance information, etc.) to the building model.
As Graphisoft had been developing such solutions for longer than its competitors, Laiserin regarded its ArchiCAD application as then "one of the most mature BIM solutions on the market." Following its launch in 1987, ArchiCAD became regarded by some as the first implementation of BIM, as it was the first CAD product on a personal computer able to create both 2D and 3D geometry, as well as the first commercial BIM product for personal computers. However, Graphisoft founder Gábor Bojár has acknowledged to Jonathan Ingram in an open letter, that Sonata "was more advanced in 1986 than ArchiCAD at that time", adding that it "surpassed already the matured definition of 'BIM' specified only about one and a half decade later".
The term 'building model' (in the sense of BIM as used today) was first used in papers in the mid-1980s: in a 1985 paper by Simon Ruffle eventually published in 1986, and later in a 1986 paper by Robert Aish – then at GMW Computers Ltd, developer of RUCAPS software – referring to the software's use at London's Heathrow Airport. The term 'Building Information Model' first appeared in a 1992 paper by G.A. van Nederveen and F. P. Tolman.
However, the terms 'Building Information Model' and 'Building Information Modeling' (including the acronym "BIM") did not become popularly used until some 10 years later. Facilitating the exchange and interoperability of information in digital format was promoted under differing terminology: by Graphisoft as "Virtual Building" or "Single Building Model", by Bentley Systems as "Integrated Project Models", and by Autodesk and Vectorworks as "Building Information Modeling". In 2002, Autodesk released a white paper entitled "Building Information Modeling," and other software vendors also started to assert their involvement in the field. By hosting contributions from Autodesk, Bentley Systems and Graphisoft, plus other industry observers, in 2003, Jerry Laiserin helped popularize and standardize the term as a common name for the digital representation of the building process.
=== Interoperability and BIM standards ===
As some BIM software developers have created proprietary data structures in their software, data and files created by one vendor's applications may not work in other vendor solutions. To achieve interoperability between applications, neutral, non-proprietary or open standards for sharing BIM data among different software applications have been developed.
Poor software interoperability has long been regarded as an obstacle to industry efficiency in general and to BIM adoption in particular. In August 2004 a US National Institute of Standards and Technology (NIST) report conservatively estimated that $15.8 billion was lost annually by the U.S. capital facilities industry due to inadequate interoperability arising from "the highly fragmented nature of the industry, the industry’s continued paper-based business practices, a lack of standardization, and inconsistent technology adoption among stakeholders".
An early BIM standard was the CIMSteel Integration Standard, CIS/2, a product model and data exchange file format for structural steel project information (CIMsteel: Computer Integrated Manufacturing of Constructional Steelwork). CIS/2 enables seamless and integrated information exchange during the design and construction of steel framed structures. It was developed by the University of Leeds and the UK's Steel Construction Institute in the late 1990s, with inputs from Georgia Tech, and was approved by the American Institute of Steel Construction as its data exchange format for structural steel in 2000.
BIM is often associated with Industry Foundation Classes (IFCs) and aecXML – data structures for representing information – developed by buildingSMART. IFC is recognised by the ISO and has been an international standard, ISO 16739, since 2013. OpenBIM is an initiative by buildingSMART that promotes open standards and interoperability. Based on the IFC standard, it allows vendor-neutral BIM data exchange. OpenBIM standards also include BIM Collaboration Format (BCF) for issue tracking and Information Delivery Specification (IDS) for defining model requirements.
Construction Operations Building information exchange (COBie) is also associated with BIM. COBie was devised by Bill East of the United States Army Corps of Engineers in 2007, and helps capture and record equipment lists, product data sheets, warranties, spare parts lists, and preventive maintenance schedules. This information is used to support operations, maintenance and asset management once a built asset is in service. In December 2011, it was approved by the US-based National Institute of Building Sciences as part of its National Building Information Model (NBIMS-US) standard. COBie has been incorporated into software, and may take several forms including spreadsheet, IFC, and ifcXML. In early 2013 BuildingSMART was working on a lightweight XML format, COBieLite, which became available for review in April 2013. In September 2014, a code of practice regarding COBie was issued as a British Standard: BS 1192-4.
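COBie's spreadsheet form is essentially a set of tabular asset records. As an illustration only (the column names below are simplified and not the full COBie.Component schema), a minimal equipment list in a COBie-like tabular shape can be produced with nothing but the standard library:

```python
# Illustrative sketch of a COBie-style equipment list as CSV; the fields
# shown are simplified placeholders, not the official COBie worksheet layout.
import csv
import io

components = [
    {"Name": "AHU-01", "TypeName": "Air Handling Unit",
     "Space": "Plant Room 1", "SerialNumber": "SN-4471",
     "WarrantyDurationYears": "5"},
    {"Name": "VLV-12", "TypeName": "Isolation Valve",
     "Space": "Riser 2", "SerialNumber": "SN-0098",
     "WarrantyDurationYears": "2"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(components[0]))
writer.writeheader()          # column headers form the schema row
writer.writerows(components)  # one row per maintainable asset
print(buf.getvalue())
```

In practice such data would be exchanged in the standardized COBie spreadsheet, IFC, or ifcXML forms mentioned above.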
In January 2019, ISO published the first two parts of ISO 19650, providing a framework for building information modelling, based on process standards developed in the United Kingdom. UK BS and PAS 1192 specifications form the basis of further parts of the ISO 19650 series, with parts on asset management (Part 3) and security management (Part 5) published in 2020.
The IEC/ISO 81346 series for reference designation has published 81346-12:2018, also known as RDS-CW (Reference Designation System for Construction Works). The use of RDS-CW offers the prospect of integrating BIM with complementary international standards based classification systems being developed for the Power Plant sector.
== Definition ==
ISO 19650-1:2018 defines BIM as:
Use of a shared digital representation of a built asset to facilitate design, construction and operation processes to form a reliable basis for decisions.
The US National Building Information Model Standard Project Committee has the following definition:
Building Information Modeling (BIM) is a digital representation of physical and functional characteristics of a facility. A BIM is a shared knowledge resource for information about a facility forming a reliable basis for decisions during its life-cycle; defined as existing from earliest conception to demolition.
Traditional building design was largely reliant upon two-dimensional technical drawings (plans, elevations, sections, etc.). Building information modeling extends the three primary spatial dimensions (width, height and depth), incorporating information about time (so-called 4D BIM), cost (5D BIM), asset management, sustainability, etc. BIM therefore covers more than just geometry. It also covers spatial relationships, geospatial information, quantities and properties of building components (for example, manufacturers' details), and enables a wide range of collaborative processes relating to the built asset from initial planning through to construction and then throughout its operational life.
BIM authoring tools present a design as combinations of "objects" – vague and undefined, generic or product-specific, solid shapes or void-space oriented (like the shape of a room), that carry their geometry, relations, and attributes. BIM applications allow extraction of different views from a building model for drawing production and other uses. These different views are automatically consistent, being based on a single definition of each object instance. BIM software also defines objects parametrically; that is, the objects are defined as parameters and relations to other objects so that if a related object is amended, dependent ones will automatically also change. Each model element can carry attributes for selecting and ordering them automatically, providing cost estimates as well as material tracking and ordering.
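The parametric behaviour described above can be sketched in a few lines. This is a minimal illustration, not any vendor's API: a hypothetical `Wall` hosts a `Window`, and amending the wall automatically updates the window's derived position.

```python
# Minimal sketch of parametric object relations: dependent objects
# recompute their derived attributes when a related object changes.

class Wall:
    def __init__(self, length_m, height_m):
        self.length_m = length_m
        self.height_m = height_m
        self.dependents = []  # objects hosted by this wall

    def resize(self, length_m):
        self.length_m = length_m
        for d in self.dependents:
            d.update(self)  # propagate the change to hosted objects

class Window:
    def __init__(self, host, offset_ratio):
        self.offset_ratio = offset_ratio  # position as a fraction of wall length
        host.dependents.append(self)
        self.update(host)

    def update(self, host):
        # the window's absolute position is derived from the host wall
        self.position_m = host.length_m * self.offset_ratio

wall = Wall(length_m=10.0, height_m=3.0)
win = Window(wall, offset_ratio=0.5)  # window centred on the wall
print(win.position_m)                 # 5.0
wall.resize(length_m=8.0)             # resizing the wall moves the window
print(win.position_m)                 # 4.0
```

Real BIM applications generalize this idea to thousands of object types, so that every extracted view (plan, section, schedule) stays consistent with the single underlying model.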
For the professionals involved in a project, BIM enables a virtual information model to be shared by the design team (architects, landscape architects, surveyors, civil, structural and building services engineers, etc.), the main contractor and subcontractors, and the owner/operator. Each professional adds discipline-specific data to the shared model – commonly, a 'federated' model which combines several different disciplines' models into one. Combining models enables visualisation of all models in a single environment, better coordination and development of designs, enhanced clash avoidance and detection, and improved time and cost decision-making.
=== BIM wash ===
"BIM wash" or "BIM washing" is a term sometimes used to describe inflated, and/or deceptive, claims of using or delivering BIM services or products.
== Usage throughout the asset life cycle ==
Use of BIM goes beyond the planning and design phase of a project, extending throughout the life cycle of the asset. The supporting processes of building lifecycle management include cost management, construction management, project management, facility operation and application in green building.
=== Common Data Environment ===
A 'Common Data Environment' (CDE) is defined in ISO 19650 as an:
Agreed source of information for any given project or asset, for collecting, managing and disseminating each information container through a managed process.
A CDE workflow describes the processes to be used while a CDE solution can provide the underlying technologies. A CDE is used to share data across a project or asset lifecycle, supporting collaboration across a whole project team. The concept of a CDE overlaps with enterprise content management, ECM, but with a greater focus on BIM issues.
=== Management of building information models ===
Building information models span the whole concept-to-occupation time-span. To ensure efficient management of information processes throughout this span, a BIM manager might be appointed. The BIM manager is retained by a design build team on the client's behalf from the pre-design phase onwards to develop and to track the object-oriented BIM against predicted and measured performance objectives, supporting multi-disciplinary building information models that drive analysis, schedules, take-off and logistics. Companies are also now considering developing BIMs in various levels of detail, since depending on the application of BIM, more or less detail is needed, and there is varying modeling effort associated with generating building information models at different levels of detail.
=== BIM in construction management ===
Participants in the building process are constantly challenged to deliver successful projects despite tight budgets, limited staffing, accelerated schedules, and limited or conflicting information. The major disciplines, such as architectural, structural and MEP design, must be well coordinated, as two components cannot occupy the same place at the same time. BIM can additionally aid in collision detection, identifying the exact location of discrepancies.
The BIM concept envisages virtual construction of a facility prior to its actual physical construction, in order to reduce uncertainty, improve safety, work out problems, and simulate and analyze potential impacts. Sub-contractors from every trade can input critical information into the model before beginning construction, with opportunities to pre-fabricate or pre-assemble some systems off-site. Waste can be minimised on-site and products delivered on a just-in-time basis rather than being stock-piled on-site.
Quantities and shared properties of materials can be extracted easily. Scopes of work can be isolated and defined. Systems, assemblies and sequences can be shown in a relative scale with the entire facility or group of facilities. BIM also prevents errors by enabling conflict or 'clash detection', whereby the computer model visually highlights to the team where parts of the building (e.g. structural frame and building services pipes or ducts) may wrongly intersect.
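At its core, a first-pass clash check asks whether the bounding volumes of two elements overlap. The sketch below (a simplification under stated assumptions: real BIM tools refine this with exact geometry, element categories and clearance tolerances) tests axis-aligned bounding boxes for intersection:

```python
# Hedged sketch of the core of 'clash detection': do the axis-aligned
# bounding boxes (AABBs) of two building elements overlap?

def boxes_clash(a, b, tolerance=0.0):
    """a, b: ((xmin, ymin, zmin), (xmax, ymax, zmax)) in metres.
    A positive tolerance also flags elements closer than the clearance."""
    (ax0, ay0, az0), (ax1, ay1, az1) = a
    (bx0, by0, bz0), (bx1, by1, bz1) = b
    # boxes overlap iff their intervals overlap on all three axes
    return (ax0 - tolerance <= bx1 and bx0 - tolerance <= ax1 and
            ay0 - tolerance <= by1 and by0 - tolerance <= ay1 and
            az0 - tolerance <= bz1 and bz0 - tolerance <= az1)

beam = ((0.0, 0.0, 3.0), (6.0, 0.3, 3.5))  # structural beam
duct = ((2.0, 0.1, 3.2), (2.5, 0.2, 4.0))  # duct passing through the beam
pipe = ((0.0, 5.0, 0.0), (6.0, 5.1, 0.5))  # pipe well away from the beam

print(boxes_clash(beam, duct))  # True  - the duct intersects the beam
print(boxes_clash(beam, pipe))  # False - no conflict
```

Production tools run such coarse checks over every element pair (typically accelerated with spatial indexing) and then verify candidate clashes against the elements' precise geometry.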
=== BIM in facility operation and asset management ===
BIM can bridge the information loss associated with handing a project from design team, to construction team and to building owner/operator, by allowing each group to add to and reference back to all information they acquire during their period of contribution to the BIM model. Enabling an effective handover of information from design and construction (including via IFC or COBie) can thus yield benefits to the facility owner or operator. BIM-related processes relating to longer-term asset management are also covered in ISO-19650 Part 3.
For example, a building owner may find evidence of a water leak in a building. Rather than exploring the physical building, the owner may turn to the model and see that a water valve is located in the suspect location. The owner could also have in the model the specific valve size, manufacturer, part number, and any other information ever researched in the past, pending adequate computing power. Such problems were initially addressed by Leite and Akinci when developing a vulnerability representation of facility contents and threats for supporting the identification of vulnerabilities in building emergencies.
Dynamic information about the building, such as sensor measurements and control signals from the building systems, can also be incorporated within software to support analysis of building operation and maintenance. As such, BIM in facility operation can be related to internet of things approaches; rapid access to data may also be aided by use of mobile devices (smartphones, tablets) and machine-readable RFID tags or barcodes, while integration and interoperability with other business systems (CAFM, ERP, BMS, IWMS, etc.) can aid operational reuse of data.
There have been attempts at creating information models for older, pre-existing facilities. Approaches include referencing key metrics such as the Facility Condition Index (FCI), using 3D laser-scanning surveys and photogrammetry techniques (separately or in combination), or digitizing traditional building surveying methodologies by using mobile technology to capture accurate measurements and operation-related information about the asset that can be used as the basis for a model. Trying to retrospectively model a building constructed in, say, 1927 requires numerous assumptions about design standards, building codes, construction methods, materials, etc., and is therefore more complex than building a model during design.
One of the challenges to the proper maintenance and management of existing facilities is understanding how BIM can be utilized to support a holistic understanding and implementation of building management practices and "cost of ownership" principles that support the full product lifecycle of a building. An American National Standard entitled APPA 1000 – Total Cost of Ownership for Facilities Asset Management incorporates BIM to factor in a variety of critical requirements and costs over the life-cycle of the building, including but not limited to: replacement of energy, utility, and safety systems; continual maintenance of the building exterior and interior and replacement of materials; updates to design and functionality; and recapitalization costs.
=== BIM in green building ===
BIM in green building, or "green BIM", is a process that can help architecture, engineering and construction firms to improve sustainability in the built environment. It can allow architects and engineers to integrate and analyze environmental issues in their design over the life cycle of the asset.
The ERA-Net projects EPC4SES and FinSESCo worked on digital representation of buildings' energy demand. The nucleus is the XML data produced when issuing Energy Performance Certificates, supplemented with roof data so that the position and size of photovoltaic (PV) or photovoltaic-thermal (PV/T) installations can be retrieved.
== International developments ==
=== Asia ===
==== China ====
China began its exploration of informatisation in 2001. The Ministry of Construction announced BIM as the key application technology of informatisation in "Ten new technologies of construction industry" (by 2010). The Ministry of Science and Technology (MOST) clearly announced BIM technology as a national key research and application project in the "12th Five-Year" Science and Technology Development Planning. Therefore, the year 2011 was described as "The First Year of China's BIM".
==== Hong Kong ====
In 2006 the Hong Kong Housing Authority introduced BIM, and then set a target of full BIM implementation in 2014/2015. BuildingSMART Hong Kong was inaugurated in Hong Kong SAR in late April 2012. The Government of Hong Kong has mandated the use of BIM for all government projects over HK$30M since 1 January 2018.
==== India ====
India Building Information Modelling Association (IBIMA) is a national-level society that represents the entire Indian BIM community. In India BIM is also known as VDC: Virtual Design and Construction. Due to its population and economic growth, India has an expanding construction market. In spite of this, BIM usage was reported by only 22% of respondents in a 2014 survey. In 2019, government officials said BIM could help save up to 20% by shortening construction time, and urged wider adoption by infrastructure ministries.
==== Iran ====
The Iran Building Information Modeling Association (IBIMA) was founded in 2012 by professional engineers from five universities in Iran, including the Civil and Environmental Engineering Department at Amirkabir University of Technology. While it is not currently active, IBIMA aims to share knowledge resources to support construction engineering management decision-making.
==== Malaysia ====
BIM implementation is targeted towards BIM Stage 2 by the year 2020 led by the Construction Industry Development Board (CIDB Malaysia). Under the Construction Industry Transformation Plan (CITP 2016–2020), it is hoped more emphasis on technology adoption across the project life-cycle will induce higher productivity.
==== Singapore ====
The Building and Construction Authority (BCA) has announced that BIM would be introduced for architectural submission (by 2013), structural and M&E submissions (by 2014) and eventually for plan submissions of all projects with gross floor area of more than 5,000 square meters by 2015. The BCA Academy is training students in BIM.
==== Japan ====
The Ministry of Land, Infrastructure and Transport (MLIT) announced the start of a BIM pilot project in government building and repair works in 2010. The Japan Institute of Architects (JIA) released BIM guidelines in 2012, setting out the agenda and expected effects of BIM for architects. MLIT has announced that "BIM will be mandated for all of its public works from the fiscal year of 2023, except those having particular reasons". Works subject to the WTO Government Procurement Agreement shall comply with the published ISO standards related to BIM, such as the ISO 19650 series, as determined by Article 10 (Technical Specification) of the Agreement.
==== South Korea ====
Small BIM-related seminars and independent BIM effort existed in South Korea even in the 1990s. However, it was not until the late 2000s that the Korean industry paid attention to BIM. The first industry-level BIM conference was held in April 2008, after which, BIM has been spread very rapidly. Since 2010, the Korean government has been gradually increasing the scope of BIM-mandated projects. McGraw Hill published a detailed report in 2012 on the status of BIM adoption and implementation in South Korea.
==== United Arab Emirates ====
Dubai Municipality issued a circular (196) in 2014 mandating BIM use for buildings of a certain size, height or type. The one page circular initiated strong interest in BIM and the market responded in preparation for more guidelines and direction. In 2015 the Municipality issued another circular (207) titled 'Regarding the expansion of applying the (BIM) on buildings and facilities in the emirate of Dubai' which made BIM mandatory on more projects by reducing the minimum size and height requirement for projects requiring BIM. This second circular drove BIM adoption further with several projects and organizations adopting UK BIM standards as best practice. In 2016, the UAE's Quality and Conformity Commission set up a BIM steering group to investigate statewide adoption of BIM.
=== Europe ===
==== Austria ====
Austrian standards for digital modeling are summarized in the ÖNORM A 6241, published on 15 March 2015. The ÖNORM A 6241-1 (BIM Level 2), which replaced the ÖNORM A 6240-4, has been extended in the detailed and executive design stages, and corrected in the lack of definitions. The ÖNORM A 6241-2 (BIM Level 3) includes all the requirements for the BIM Level 3 (iBIM).
==== Czech Republic ====
The Czech BIM Council, established in May 2011, aims to implement BIM methodologies into the Czech building and designing processes, education, standards and legislation.
==== Estonia ====
In Estonia, a digital construction cluster (Digitaalehituse Klaster) was formed in 2015 to develop BIM solutions for the whole life-cycle of construction. The strategic objective of the cluster is to develop an innovative digital construction environment, as well as new VDC products, a grid, and an e-construction portal, to increase the international competitiveness and sales of Estonian businesses in the construction field. The cluster is equally co-funded by European Structural and Investment Funds through Enterprise Estonia and by the members of the cluster, with a total budget of 600,000 euros for the period 2016–2018.
==== France ====
The French arm of buildingSMART, called Mediaconstruct (existing since 1989), is supporting digital transformation in France. A building transition digital plan (French acronym PTNB) was created in 2013, with a mandate running from 2015 to 2017 under several ministries. A 2013 survey of European BIM practice showed France in last place but, with government support, by 2017 it had risen to third place, with more than 30% of real estate projects carried out using BIM. PTNB was superseded in 2018 by Plan BIM 2022, administered by an industry body, the Association for the Development of Digital in Construction (ADN Construction), founded in 2017, and supported by a digital platform, KROQI, developed and launched in 2017 by CSTB (France's Scientific and Technical Centre for Building).
==== Germany ====
In December 2015, the German minister for transport Alexander Dobrindt announced a timetable for the introduction of mandatory BIM for German road and rail projects from the end of 2020. Speaking in April 2016, he said digital design and construction must become standard for construction projects in Germany, with Germany two to three years behind The Netherlands and the UK in aspects of implementing BIM. BIM was piloted in many areas of German infrastructure delivery and in July 2022 Volker Wissing, Federal Minister for Digital and Transport, announced that, from 2025, BIM will be used as standard in the construction of federal trunk roads in addition to the rail sector.
==== Ireland ====
In November 2017, Ireland's Department for Public Expenditure and Reform launched a strategy to increase use of digital technology in delivery of key public works projects, requiring the use of BIM to be phased in over the next four years.
==== Italy ====
Through the new D.l. 50 of April 2016, Italy incorporated into its own legislation several European directives, including 2014/24/EU on Public Procurement. The decree states among the main goals of public procurement the "rationalization of designing activities and of all connected verification processes, through the progressive adoption of digital methods and electronic instruments such as Building and Infrastructure Information Modelling". An eight-part norm is also being written to support the transition: UNI 11337-1, UNI 11337-4 and UNI 11337-5 were published in January 2017, with five further chapters to follow within a year.
In early 2018 the Italian Ministry of Infrastructure and Transport issued a decree (DM 01/12/17) creating a governmental BIM Mandate compelling public client organisations to adopt a digital approach by 2025, with an incremental obligation which will start on 1 January 2019.
==== Lithuania ====
Lithuania is moving towards adoption of BIM infrastructure by founding a public body "Skaitmeninė statyba" (Digital Construction), which is managed by 13 associations. Also, there is a BIM work group established by Lietuvos Architektų Sąjunga (a Lithuanian architects body). The initiative intends Lithuania to adopt BIM, Industry Foundation Classes (IFC) and National Construction Classification as standard. An international conference "Skaitmeninė statyba Lietuvoje" (Digital Construction in Lithuania) has been held annually since 2012.
==== The Netherlands ====
On 1 November 2011, the Rijksgebouwendienst, the agency within the Dutch Ministry of Housing, Spatial Planning and the Environment that manages government buildings, introduced the Rgd BIM Standard, which it updated on 1 July 2012.
==== Norway ====
In Norway BIM has been used increasingly since 2008. Several large public clients require use of BIM in open formats (IFC) in most or all of their projects. The Government Building Authority bases its processes on BIM in open formats to increase process speed and quality, and all large and several small and medium-sized contractors use BIM. National BIM development is centred around the local organisation, buildingSMART Norway which represents 25% of the Norwegian construction industry.
==== Poland ====
BIMKlaster (BIM Cluster) is a non-governmental, non-profit organisation established in 2012 with the aim of promoting BIM development in Poland. In September 2016, the Ministry of Infrastructure and Construction began a series of expert meetings concerning the application of BIM methodologies in the construction industry.
==== Portugal ====
Created in 2015 to promote the adoption of BIM in Portugal and its normalisation, the Technical Committee for BIM Standardisation, CT197-BIM, has created the first strategic document for construction 4.0 in Portugal, aiming to align the country's industry around a common vision, integrated and more ambitious than a simple technology change.
==== Russia ====
The Russian government has approved a list of the regulations that provide the creation of a legal framework for the use of information modeling of buildings in construction and encourages the use of BIM in government projects.
==== Slovakia ====
The BIM Association of Slovakia, "BIMaS", was established in January 2013 as the first Slovak professional organisation focused on BIM. Although there are neither standards nor legislative requirements to deliver projects in BIM, many architects, structural engineers and contractors, plus a few investors are already applying BIM. A Slovak implementation strategy created by BIMaS and supported by the Chamber of Civil Engineers and Chamber of Architects has yet to be approved by Slovak authorities due to their low interest in such innovation.
==== Spain ====
A July 2015 meeting at Spain's Ministry of Infrastructure [Ministerio de Fomento] launched the country's national BIM strategy, making BIM a mandatory requirement on public sector projects with a possible starting date of 2018. Following a February 2015 BIM summit in Barcelona, professionals in Spain established a BIM commission (ITeC) to drive the adoption of BIM in Catalonia.
==== Switzerland ====
BIM awareness was raised from 2009 through the initiative of buildingSMART Switzerland, and then in 2013 among a broader community of engineers and architects through the open competition for Basel's Felix Platter Hospital, where a BIM coordinator was sought. BIM has also been a subject of events by the Swiss Society of Engineers and Architects, SIA.
==== United Kingdom ====
In May 2011 UK Government Chief Construction Adviser Paul Morrell called for BIM adoption on UK government construction projects. Morrell also told construction professionals to adopt BIM or be "Betamaxed out". In June 2011 the UK government published its BIM strategy, announcing its intention to require collaborative 3D BIM (with all project and asset information, documentation and data being electronic) on its projects by 2016. Initially, compliance would require building data to be delivered in a vendor-neutral 'COBie' format, thus overcoming the limited interoperability of BIM software suites available on the market. The UK Government BIM Task Group led the government's BIM programme and requirements, including a free-to-use set of UK standards and tools that defined 'level 2 BIM'. In April 2016, the UK Government published a new central web portal as a point of reference for the industry for 'level 2 BIM'. The work of the BIM Task Group then continued under the stewardship of the Cambridge-based Centre for Digital Built Britain (CDBB), announced in December 2017 and formally launched in early 2018.
Outside of government, industry adoption of BIM since 2016 has been led by the UK BIM Alliance, an independent, not-for-profit, collaboratively-based organisation formed to champion and enable the implementation of BIM, and to connect and represent organisations, groups and individuals working towards digital transformation of the UK's built environment industry. In November 2017, the UK BIM Alliance merged with the UK and Ireland chapter of BuildingSMART. In October 2019, CDBB, the UK BIM Alliance and the BSI Group launched the UK BIM Framework. Superseding the BIM levels approach, the framework describes an overarching approach to implementing BIM in the UK, giving free guidance on integrating the international ISO 19650 series of standards into UK processes and practice.
National Building Specification (NBS) has published research into BIM adoption in the UK since 2011, and in 2020 published its 10th annual BIM report. In 2011, 43% of respondents had not heard of BIM; in 2020 73% said they were using BIM.
=== North America ===
==== Canada ====
BIM is not mandatory in Canada. Several organizations support BIM adoption and implementation in Canada: the Canada BIM Council (CANBIM, founded in 2008), the Institute for BIM in Canada, and buildingSMART Canada (the Canadian chapter of buildingSMART International). Public Services and Procurement Canada (formerly Public Works and Government Services Canada) is committed to using non-proprietary or "OpenBIM" BIM standards and avoids specifying any specific proprietary BIM format. Designers are required to use the international standards of interoperability for BIM (IFC).
==== United States ====
The Associated General Contractors of America and US contracting firms have developed various working definitions of BIM that describe it generally as:
an object-oriented building development tool that utilizes 5-D modeling concepts, information technology and software interoperability to design, construct and operate a building project, as well as communicate its details.
Although the concept of BIM and relevant processes are being explored by contractors, architects and developers alike, the term itself has been questioned and debated with alternatives including Virtual Building Environment (VBE) also considered. Unlike some countries such as the UK, the US has not adopted a set of national BIM guidelines, allowing different systems to remain in competition. In 2021, the National Institute of Building Sciences (NIBS) looked at applying UK BIM experiences to developing shared US BIM standards and processes. The US National BIM Standard had largely been developed through volunteer efforts; NIBS aimed to create a national BIM programme to drive effective adoption at a national scale.
BIM is seen to be closely related to Integrated Project Delivery (IPD) where the primary motive is to bring the teams together early on in the project. A full implementation of BIM also requires the project teams to collaborate from the inception stage and formulate model sharing and ownership contract documents.
The American Institute of Architects has defined BIM as "a model-based technology linked with a database of project information",[3] and this reflects the general reliance on database technology as the foundation. In the future, structured text documents such as specifications may be able to be searched and linked to regional, national, and international standards.
=== Africa ===
==== Nigeria ====
BIM has the potential to play a vital role in the Nigerian AEC sector. In addition to bringing greater clarity and transparency, it may help promote standardization across the industry. For instance, Utiome suggests that, in conceptualizing a BIM-based knowledge transfer framework from industrialized economies to urban construction projects in developing nations, generic BIM objects can benefit from rich building information within specification parameters in product libraries, and be used for efficient, streamlined design and construction. Similarly, an assessment of the current 'state of the art' by Kori found that medium and large firms were leading the adoption of BIM in the industry, while smaller firms were less advanced with respect to process and policy adherence. There has been little adoption of BIM in the built environment due to construction industry resistance to changes or new ways of doing things. The industry still works with conventional 2D CAD systems for services and structural designs, although production could be in 3D systems. There is virtually no utilisation of 4D and 5D systems.
BIM Africa Initiative, primarily based in Nigeria, is a non-profit institute advocating the adoption of BIM across Africa. Since 2018, it has been engaging with professionals and the government towards the digital transformation of the built industry. Produced annually by its research and development committee, the African BIM Report gives an overview of BIM adoption across the African continent.
==== South Africa ====
The South African BIM Institute, established in May 2015, aims to enable technical experts to discuss digital construction solutions that can be adopted by professionals working within the construction sector. Its initial task was to promote the SA BIM Protocol.
There are no mandated or national best practice BIM standards or protocols in South Africa. Organisations implement company-specific BIM standards and protocols at best (there are isolated examples of cross-industry alliances).
=== Oceania ===
==== Australia ====
In February 2016, Infrastructure Australia recommended: "Governments should make the use of Building Information Modelling (BIM) mandatory for the design of large-scale complex infrastructure projects. In support of a mandatory rollout, the Australian Government should commission the Australasian Procurement and Construction Council, working with industry, to develop appropriate guidance around the adoption and use of BIM; and common standards and protocols to be applied when using BIM".
==== New Zealand ====
In 2015, many projects in the rebuilding of Christchurch were being assembled in detail on a computer using BIM well before workers set foot on the site. The New Zealand government started a BIM acceleration committee, as part of a productivity partnership with the goal of 20 per cent more efficiency in the construction industry by 2020. Today, BIM use is still not mandated in New Zealand, and several challenges to its implementation have been identified. However, members of the AEC industry and academia have developed a national BIM handbook providing definitions, case studies and templates.
== Purposes or dimensionality ==
Some purposes or uses of BIM may be described as 'dimensions'. However, there is little consensus on definitions beyond 5D. Some organisations dismiss the term; for example, the UK Institution of Structural Engineers does not recommend using nD modelling terms beyond 4D, adding "cost (5D) is not really a 'dimension'."
=== 3D ===
3D BIM, an acronym for three-dimensional building information modeling, refers to the graphical representation of an asset's geometric design, augmented by information describing attributes of individual components. 3D BIM work may be undertaken by professional disciplines such as architectural, structural, and MEP, and the use of 3D models enhances coordination and collaboration between disciplines. A 3D virtual model can also be created by creating a point cloud of the building or facility using laser scanning technology.
=== 4D ===
4D BIM, an acronym for 4-dimensional building information modeling, refers to the intelligent linking of individual 3D CAD components or assemblies with time- or scheduling-related information. The term 4D refers to the fourth dimension: time, i.e. 3D plus time.
4D modelling enables project participants (architects, designers, contractors, clients) to plan, sequence the physical activities, visualise the critical path of a series of events, mitigate the risks, report and monitor progress of activities through the lifetime of the project. 4D BIM enables a sequence of events to be depicted visually on a time line that has been populated by a 3D model, augmenting traditional Gantt charts and critical path (CPM) schedules often used in project management. Construction sequences can be reviewed as a series of problems using 4D BIM, enabling users to explore options, manage solutions and optimize results.
As an advanced construction management technique, it has been used by project delivery teams working on larger projects. 4D BIM has traditionally been used for higher end projects due to the associated costs, but technologies are now emerging that allow the process to be used by laymen or to drive processes such as manufacture.
=== 5D ===
5D BIM, an acronym for 5-dimensional building information modeling, refers to the intelligent linking of individual 3D components or assemblies with time schedule (4D BIM) constraints and then with cost-related information. 5D models enable participants to visualise construction progress and related costs over time. This BIM-centric project management technique has potential to improve management and delivery of projects of any size or complexity.
In June 2016, McKinsey & Company identified 5D BIM technology as one of five big ideas poised to disrupt construction. It defined 5D BIM as "a five-dimensional representation of the physical and functional characteristics of any project. It considers a project’s time schedule and cost in addition to the standard spatial design parameters in 3-D."
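The idea of linking components to schedule and cost data can be sketched as a simple data model. The element names, dates, and figures below are invented for illustration; real 5D BIM tools use far richer proprietary or IFC-based schemas:

```python
from datetime import date

# Each element couples geometry (3D) with schedule (4D) and cost (5D) data.
elements = [
    {"name": "foundation", "start": date(2024, 3, 1), "end": date(2024, 4, 1), "cost": 120_000},
    {"name": "frame",      "start": date(2024, 4, 2), "end": date(2024, 6, 1), "cost": 300_000},
    {"name": "roof",       "start": date(2024, 6, 2), "end": date(2024, 7, 1), "cost": 80_000},
]

def cost_to_date(elements, as_of):
    """Sum the cost of every element finished by a given date:
    the kind of progress-versus-cost query that 5D linking enables."""
    return sum(e["cost"] for e in elements if e["end"] <= as_of)

# By mid-June only the foundation and frame are complete.
committed = cost_to_date(elements, date(2024, 6, 15))
```

Because schedule and cost live on the same objects as the geometry, a model viewer can colour components by completion status and report committed cost at any point on the timeline.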
=== 6D ===
6D BIM, an acronym for 6-dimensional building information modeling, is sometimes used to refer to the intelligent linking of individual 3D components or assemblies with all aspects of project life-cycle management information. However, there is less consensus about the definition of 6D BIM; it is also sometimes used to cover use of BIM for sustainability purposes.
In the project life cycle context, a 6D model is usually delivered to the owner when a construction project is finished. The "As-Built" BIM model is populated with relevant building component information such as product data and details, maintenance/operation manuals, cut sheet specifications, photos, warranty data, web links to product online sources, manufacturer information and contacts, etc. This database is made accessible to the users/owners through a customized proprietary web-based environment. This is intended to aid facilities managers in the operation and maintenance of the facility.
The term is less commonly used in the UK and has been replaced with reference to the Asset Information Requirements (AIR) and an Asset Information Model (AIM) as specified in BS EN ISO 19650-3:2020.
== See also ==
Data model
Design computing
Digital twin (the physical manifestation instrumented and connected to the model)
BCF
GIS
Digital Building Logbook
Landscape design software
Lean construction
List of BIM software
Macro BIM
Open-source 3D file formats
OpenStreetMap
Pre-fire planning
System information modelling
Whole Building Design Guide
Facility management (or Building management)
Building automation (and Building management systems)
== Notes ==
== References ==
== Further reading ==
Kensek, Karen (2014). Building Information Modeling, Routledge. ISBN 978-0-415-71774-8
Kensek, Karen and Noble, Douglas (2014). Building Information Modeling: BIM in Current and Future Practice, Wiley. ISBN 978-1-118-76630-9
Eastman, Chuck; Teicholz, Paul; Sacks, Rafael; Liston, Kathleen (2011). BIM Handbook: A Guide to Building Information Modeling for Owners, Managers, Designers, Engineers, and Contractors (2 ed.). John Wiley. ISBN 978-0-470-54137-1.
Lévy, François (2011). BIM in Small-Scale Sustainable Design, Wiley. ISBN 978-0470590898
Weygant, Robert S. (2011) BIM Content Development: Standards, Strategies, and Best Practices, Wiley. ISBN 978-0-470-58357-9
Hardin, Brad (2009). Martin Viveros (ed.). BIM and Construction Management: Proven Tools, Methods and Workflows. Sybex. ISBN 978-0-470-40235-1.
Smith, Dana K. and Tardif, Michael (2009). Building Information Modeling: A Strategic Implementation Guide for Architects, Engineers, Constructors, and Real Estate Asset Managers, Wiley. ISBN 978-0-470-25003-7
Underwood, Jason, and Isikdag, Umit (2009). Handbook of Research on Building Information Modeling and Construction Informatics: Concepts and Technologies, Information Science Publishing. ISBN 978-1-60566-928-1
Krygiel, Eddy and Nies, Brad (2008). Green BIM: Successful Sustainable Design with Building Information Modeling, Sybex. ISBN 978-0-470-23960-5
Kymmell, Willem (2008). Building Information Modeling: Planning and Managing Construction Projects with 4D CAD and Simulations, McGraw-Hill Professional. ISBN 978-0-07-149453-3
Jernigan, Finith (2007). BIG BIM little bim. 4Site Press. ISBN 978-0-9795699-0-6.
Design flows are the explicit combination of electronic design automation tools to accomplish the design of an integrated circuit. Moore's law has driven the entire IC implementation RTL to GDSII design flow from one that used primarily stand-alone synthesis, placement, and routing algorithms to an integrated construction and analysis flow for design closure. The challenges of rising interconnect delay led to a new way of thinking about and integrating design closure tools.
The RTL to GDSII flow underwent significant changes from 1980 through 2005. The continued scaling of CMOS technologies significantly changed the objectives of the various design steps. The lack of good predictors for delay has led to significant changes in recent design flows. New scaling challenges such as leakage power, variability, and reliability will continue to require significant changes to the design closure process in the future. Many factors drove the design flow from a set of separate design steps to a fully integrated approach, and further changes are coming to address the latest challenges. In his keynote at the 40th Design Automation Conference, entitled The Tides of EDA, Alberto Sangiovanni-Vincentelli distinguished three periods of EDA:
The Age of Invention: During the invention era, routing, placement, static timing analysis and logic synthesis were invented.
The Age of Implementation: In the age of implementation, these steps were drastically improved by designing sophisticated data structures and advanced algorithms. This allowed the tools in each of these design steps to keep pace with the rapidly increasing design sizes. However, due to the lack of good predictive cost functions, it became impossible to execute a design flow by a set of discrete steps, no matter how efficiently each of the steps was implemented.
The Age of Integration: This led to the age of integration where most of the design steps are performed in an integrated environment, driven by a set of incremental cost analyzers.
There are differences between the steps and methods of the design flow for analog and digital integrated circuits. Nonetheless, a typical VLSI design flow consists of various steps like design conceptualization, chip optimization, logical/physical implementation, and design validation and verification.
== See also ==
Floorplan (microelectronics), creates the physical infrastructure into which the design is placed and routed
Placement (EDA), an essential step in Electronic Design Automation (EDA)
Routing (EDA), a crucial step in the design of integrated circuits
Power optimization (EDA), the use of EDA tools to optimize (reduce) the power consumption of a digital design, while preserving its functionality
Post-silicon validation, the final step in the EDA design flow
== References ==
Electronic Design Automation For Integrated Circuits Handbook, by Lavagno, Martin, and Scheffer, ISBN 0-8493-3096-3 – A survey of the field, from which this summary was derived, with permission.
A design pattern is the re-usable form of a solution to a design problem. The idea was introduced by the architect Christopher Alexander and has been adapted for various other disciplines, particularly software engineering.
== Details ==
An organized collection of design patterns that relate to a particular field is called a pattern language. This language gives a common terminology for discussing the situations designers are faced with.
The elements of this language are entities called patterns. Each pattern describes a problem that occurs over and over again in our environment, and then describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without ever doing it the same way twice.
Documenting a pattern requires explaining why a particular situation causes problems, and how the components of the pattern relate to each other to give the solution. Christopher Alexander describes common design problems as arising from "conflicting forces"—such as the conflict between wanting a room to be sunny and wanting it not to overheat on summer afternoons. A pattern would not tell the designer how many windows to put in the room; instead, it would propose a set of values to guide the designer toward a decision that is best for their particular application. Alexander, for example, suggests that enough windows should be included to direct light all around the room. He considers this a good solution because he believes it increases the enjoyment of the room by its occupants. Other authors might come to different conclusions, if they place higher value on heating costs, or material costs. These values, used by the pattern's author to determine which solution is "best", must also be documented within the pattern.
Pattern documentation should also explain when it is applicable. Since two houses may be very different from one another, a design pattern for houses must be broad enough to apply to both of them, but not so vague that it doesn't help the designer make decisions. The range of situations in which a pattern can be used is called its context. Some examples might be "all houses", "all two-story houses", or "all places where people spend time".
For instance, in Christopher Alexander's work, bus stops and waiting rooms in a surgery center are both within the context for the pattern "A PLACE TO WAIT".
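In software engineering the same structure appears: a pattern names a recurring problem, its context, and the core of a solution that can be realized differently each time. As a minimal, illustrative sketch (all names below are invented), the well-known Strategy pattern addresses the problem "an algorithm must vary independently of the code that uses it":

```python
# Strategy pattern sketch: the context (sorting a catalogue) stays fixed,
# while the varying part (the ordering rule) is passed in as an object.

def by_price(item):
    return item["price"]

def by_name(item):
    return item["name"]

def sorted_catalogue(items, strategy):
    """Same solution form every time, never applied the same way twice:
    the caller supplies whichever ordering strategy fits its situation."""
    return sorted(items, key=strategy)

items = [{"name": "b", "price": 2}, {"name": "a", "price": 5}]
cheapest_first = sorted_catalogue(items, by_price)
alphabetical = sorted_catalogue(items, by_name)
```

As with Alexander's building patterns, the pattern does not dictate a single answer; it documents the conflicting forces (fixed control flow versus varying behaviour) and the shape of a resolution.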
== Examples ==
Software design pattern, in software design
Architectural pattern, for software architecture
Interaction design pattern, used in interaction design / human–computer interaction
Pedagogical patterns, in teaching
Pattern gardening, in gardening
Business models also have design patterns. See Business model § Examples.
== See also ==
Style guide
Design paradigm
Anti-pattern
Dark pattern
== References ==
== Further reading ==
The research-based design process is a research process proposed by Teemu Leinonen, inspired by several design theories. It is strongly oriented towards the building of prototypes and it emphasizes creative solutions, exploration of various ideas and design concepts, continuous testing and redesign of the design solutions.
The method is firmly influenced by the Scandinavian participatory design approach. Therefore, most of the activities take place in a close dialogue with the community that is expected to use the tools or services designed.
== Phases ==
The process can be divided into four major phases, although they all happen concurrently and side by side. At different times of the research, researchers are asked to put more effort into different phases. The continuous iteration, however, asks researchers to keep all the phases alive all the time: contextual inquiry, participatory design, product design, prototype as hypothesis.
=== Contextual inquiry ===
Contextual inquiry refers to the exploration of the socio-cultural context of the design. The aim is to understand the environment, situation, and culture where the design takes place. The results of the contextual inquiry are better understanding of the context by recognizing in it possible challenges and design opportunities. In this phase, design researchers use rapid ethnographic methods, such as participatory observation, note-taking, sketching, informal conversations, and interviews. At the same time as the field work, the design researchers are doing a focused review of the literature, benchmarking existing solutions, and analyzing trends in the area in order to understand and recognise design challenges.
=== Participatory design ===
Throughout the contextual inquiry design researchers start to develop some preliminary design ideas, which would be developed during the next stage—participatory design—in workshops with different stakeholders. The participatory design sessions tend to take place with small groups of 4 to 6 participants. A common practice is to present the results as scenarios made by the design researchers containing challenges and design opportunities. In the workshop, the participants are invited to come up with design solutions to the challenges and to bring to the discussion new challenges and solutions.
Since one of the main features of the research-based design is its participatory nature, the user's involvement is an integral part of the process. In this regard, participatory design workshops are organized during the different stages in order to validate initial ideas and discuss the prototypes at different stage of development.
=== Product design ===
The results of the participatory design are analysed in a design studio by the design researchers who use the materials from the contextual inquiry and participatory design sessions to redefine the design problems and redesign the prototypes. By keeping a distance from the stakeholders, in the product design phase the design researchers will get a chance to analyse the results of the participatory design, categorize them, use specific design language related to implementation of the prototypes, and finally make design decisions.
=== Prototype as hypothesis ===
Ultimately, the prototypes are developed to be functional enough that they can be tested with real people in their everyday situations. The prototypes are still considered a hypothesis ("prototypes as hypothesis") because they are expected to form part of the solution to the challenges defined and redefined during the research. It remains for the stakeholders to decide whether they support the assertions made by the design researchers. The first prototypes brought into use by real people can therefore also be considered minimum viable products.
Research-based design is not to be confused with design-based research or educational design research. In research-based design, which builds on art and design tradition, the focus is on the artifacts, the end-results of the design. The way the artifacts are, the affordances and features they have or do not have, form an important part of the research argumentation. As such, research-based design as a methodological approach includes research, design, and design interventions that are all intertwined.
== References ==
Functional Design is a paradigm used to simplify the design of hardware and software devices such as computer software and, increasingly, 3D models. A functional design assures that each modular part of a device has only one responsibility and performs that responsibility with the minimum of side effects on other parts. Functionally designed modules tend to have low coupling.
== Advantages ==
The advantage for implementation is that if a software module has a single purpose, it will be simpler, and therefore easier and less expensive, to design and implement.
Systems with functionally designed parts are easier to modify because each part does only what it claims to do.
Since maintenance is more than 3/4 of a successful system's life, this feature is a crucial advantage. It also makes the system easier to understand and document, which simplifies training. The result is that the practical lifetime of a functional system is longer.
In a system of programs, a functional module will be easier to reuse because it is less likely to have side effects that appear in other parts of the system.
== Technique ==
The standard way to assure functional design is to review the description of a module. If the description includes conjunctions such as "and" or "or", then the design has more than one responsibility, and is therefore likely to have side effects. The responsibilities need to be divided into several modules in order to achieve a functional design.
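As a hedged illustration of this review technique (the module and function names below are invented for the sketch), a function described as "parse a record and save it" contains a conjunction, so it has two responsibilities and can be split:

```python
# Before: the description "parse a record AND save it" signals two
# responsibilities mixed in one module, with a hidden side effect.

def parse_and_save(line, store):
    key, value = line.split("=", 1)
    store[key.strip()] = value.strip()   # side effect buried in parsing

# After a functional redesign, each function has one responsibility;
# the side effect (mutating the store) is isolated in a single place.

def parse(line):
    """Single responsibility: turn a 'key=value' line into a pair."""
    key, value = line.split("=", 1)
    return key.strip(), value.strip()

def save(pair, store):
    """Single responsibility: record a parsed pair in the store."""
    key, value = pair
    store[key] = value

store = {}
save(parse("colour = blue"), store)
```

After the split, `parse` can be tested and reused with no side effects at all, and every mutation of the store goes through `save`.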
== Critiques and limits ==
Every computer system has parts that cannot be functionally pure because they exist to distribute CPU cycles or other resources to different modules. For example, most systems have an "initialization" section that starts up the modules. Other well-known examples are the interrupt vector table and the main loop.
Some functions inherently have mixed semantics. For example, a function "move the car from the garage" inherently has a side effect of changing the "car position". In some cases, the mixed semantics can extend over a large topological tree or graph of related concepts. In these unusual cases, functional design is not recommended by some authorities. Instead polymorphism, inheritance, or procedural methods may be preferred.
== Applied to 3D modeling and simulation ==
Recently several software companies have introduced functional design as a concept to describe a Parametric feature based modeler for 3D modeling and simulation. In this context, they mean a parametric model of an object where the parameters are tied to real-world design criteria, such as an axle that will adjust its diameter based on the strength of the material and the amount of force being applied to it in the simulation. It is hoped that this will create efficiencies in the design process for mechanical and perhaps even architectural/structural assemblies by integrating the results of finite element analysis directly to the behavior of individual objects.
== References ==
== External links ==
Functional Design Specification
7 Essential Guidelines For Functional Design
A student design competition is a specific form of a student competition relating to design. Design competitions can be technical or purely aesthetic. The objective of technical competitions is to introduce students to real-world engineering situations and to teach students project-management and fabrication techniques used in industry. Aesthetic competitions usually require art and design skills.
Both students and industry benefit from intercollegiate design competitions. Each competition allows students to apply the theories and information they have learned in the classroom to real situations. Industry gains better-prepared and more experienced engineers.
== History ==
Through the 1970s only one competition of significance existed: Mini Baja. Today, almost every field of engineering has several design competitions, which have extended from college down into high school (e.g., FIRST Robotics) and even younger grades (e.g., FIRST Lego League). The Society of Automotive Engineers organizes the largest design competitions, including Baja SAE, Sunrayce, and Formula SAE.
== Notable design competitions ==
Civil engineering
Great Northern Concrete Toboggan Race
Concrete canoe
Steel bridge
Mechanical engineering
Baja SAE
Basic Utility Vehicle
Formula SAE
Human Powered Vehicle Challenge (HPVC)
ASABE International 1/4 Scale Tractor Student Design Competition
Robotics Competitions
DARPA Grand Challenge
International Space Settlement Design Competition
NASA's Lunabotics Competition
RoboCup Soccer
Multi-Disciplinary Competitions
Stanford Center on Longevity Design Challenge
== See also ==
Student competition
University of Patras Formula Student Team - UoP Racing Team
A graphics card (also called a video card, display card, graphics accelerator, graphics adapter, VGA card/VGA, video adapter, display adapter, or colloquially GPU) is a computer expansion card that generates a feed of graphics output to a display device such as a monitor. Graphics cards are sometimes called discrete or dedicated graphics cards to emphasize their distinction from an integrated graphics processor on the motherboard or the central processing unit (CPU). A graphics processing unit (GPU) that performs the necessary computations is the main component in a graphics card, but the acronym "GPU" is also sometimes used, erroneously, to refer to the graphics card as a whole.
Most graphics cards are not limited to simple display output. The graphics processing unit can be used for additional processing, which reduces the load from the CPU. Additionally, computing platforms such as OpenCL and CUDA allow using graphics cards for general-purpose computing. Applications of general-purpose computing on graphics cards include AI training, cryptocurrency mining, and molecular simulation.
Usually, a graphics card comes in the form of a printed circuit board (expansion board) which is to be inserted into an expansion slot. Others may have dedicated enclosures, and they are connected to the computer via a docking station or a cable. These are known as external GPUs (eGPUs).
Graphics cards are often preferred over integrated graphics for increased performance. A more powerful graphics card will be able to render more frames per second.
== History ==
Graphics cards, also known as video cards or graphics processing units (GPUs), have historically evolved alongside computer display standards to accommodate advancing technologies and user demands. In the realm of IBM PC compatibles, the early standards included Monochrome Display Adapter (MDA), Color Graphics Adapter (CGA), Hercules Graphics Card, Enhanced Graphics Adapter (EGA), and Video Graphics Array (VGA). Each of these standards represented a step forward in the ability of computers to display more colors, higher resolutions, and richer graphical interfaces, laying the foundation for the development of modern graphical capabilities.
In the late 1980s, advancements in personal computing led companies like Radius to develop specialized graphics cards for the Apple Macintosh II. These cards were unique in that they incorporated discrete 2D QuickDraw capabilities, enhancing the graphical output of Macintosh computers by accelerating 2D graphics rendering. QuickDraw, a core part of the Macintosh graphical user interface, allowed for the rapid rendering of bitmapped graphics, fonts, and shapes, and the introduction of such hardware-based enhancements signaled an era of specialized graphics processing in consumer machines.
The evolution of graphics processing took a major leap forward in the mid-1990s with 3dfx Interactive's introduction of the Voodoo series, one of the earliest consumer-facing GPUs that supported 3D acceleration. The Voodoo's architecture marked a major shift in graphical computing by offloading the demanding task of 3D rendering from the CPU to the GPU, significantly improving gaming performance and graphical realism.
The development of fully integrated GPUs that could handle both 2D and 3D rendering came with the introduction of the NVIDIA RIVA 128. Released in 1997, the RIVA 128 was one of the first consumer-facing GPUs to integrate both 3D and 2D processing units on a single chip. This innovation simplified the hardware requirements for end-users, as they no longer needed separate cards for 2D and 3D rendering, thus paving the way for the widespread adoption of more powerful and versatile GPUs in personal computers.
In contemporary times, the majority of graphics cards are built using chips sourced from two dominant manufacturers: AMD and Nvidia. These modern graphics cards are multifunctional and support various tasks beyond rendering 3D images for gaming. They also provide 2D graphics processing, video decoding, TV output, and multi-monitor setups. Additionally, many graphics cards now have integrated sound capabilities, allowing them to transmit audio alongside video output to connected TVs or monitors with built-in speakers, further enhancing the multimedia experience.
Within the graphics industry, these products are often referred to as graphics add-in boards (AIBs). The term "AIB" emphasizes the modular nature of these components, as they are typically added to a computer's motherboard to enhance its graphical capabilities. The evolution from the early days of separate 2D and 3D cards to today's integrated and multifunctional GPUs reflects the ongoing technological advancements and the increasing demand for high-quality visual and multimedia experiences in computing.
== Discrete vs integrated graphics ==
As an alternative to the use of a graphics card, video hardware can be integrated into the motherboard, CPU, or a system-on-chip as integrated graphics. Motherboard-based implementations are sometimes called "on-board video". Some motherboards support using both integrated graphics and a graphics card simultaneously to feed separate displays. The main advantages of integrated graphics are: low cost, compactness, simplicity, and low energy consumption. Integrated graphics often have less performance than a graphics card because the graphics processing unit inside integrated graphics needs to share system resources with the CPU. On the other hand, a graphics card has a separate random access memory (RAM), cooling system, and dedicated power regulators. A graphics card can offload work and reduce memory-bus-contention from the CPU and system RAM, therefore, the overall performance for a computer could improve, in addition to increased performance in graphics processing. Such improvements to performance can be seen in video gaming, 3D animation, and video editing.
Both AMD and Intel have introduced CPUs and motherboard chipsets that support the integration of a GPU into the same die as the CPU. AMD advertises CPUs with integrated graphics under the trademark Accelerated Processing Unit (APU), while Intel brands similar technology under "Intel Graphics Technology".
== Power demand ==
As the processing power of graphics cards increased, so did their demand for electrical power. Current high-performance graphics cards tend to consume large amounts of power. For example, the thermal design power (TDP) for the GeForce Titan RTX is 280 watts. When tested with video games, the GeForce RTX 2080 Ti Founders Edition averaged 300 watts of power consumption. While CPU and power supply manufacturers have recently aimed for higher efficiency, the power demands of graphics cards have continued to rise, making them the largest power consumers of any individual part in a computer. Although power supplies have also increased their output, a bottleneck remains in the PCI Express slot, which is limited to supplying 75 watts.
Modern graphics cards with a power consumption of over 75 watts usually include a combination of six-pin (75 W) or eight-pin (150 W) sockets that connect directly to the power supply. Providing adequate cooling becomes a challenge in such computers. Computers with multiple graphics cards may require power supplies over 750 watts. Heat extraction becomes a major design consideration for computers with two or more high-end graphics cards.
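The budget implied by these figures can be tallied directly: 75 W from the PCIe slot, plus 75 W per six-pin and 150 W per eight-pin connector. A minimal sketch (function and variable names are illustrative, and the limits are treated as exact for simplicity; the authoritative values come from the PCIe specifications):

```python
# Power limits quoted in the text: the PCIe x16 slot supplies up to 75 W,
# a six-pin connector adds 75 W, and an eight-pin connector adds 150 W.
PCIE_SLOT_W = 75
CONNECTOR_W = {"6-pin": 75, "8-pin": 150}

def card_power_budget(connectors):
    """Total power (watts) deliverable to a card from the slot plus
    its supplementary power connectors."""
    return PCIE_SLOT_W + sum(CONNECTOR_W[c] for c in connectors)

# A card with one six-pin and one eight-pin connector can draw up to:
print(card_power_budget(["6-pin", "8-pin"]))  # 300
```

By this tally, a card with no supplementary connectors is limited to the slot's 75 W, which is why cards drawing more than that must include the extra sockets.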
As of the Nvidia GeForce RTX 30 series (Ampere architecture), a custom-flashed RTX 3090 named "Hall of Fame" has been recorded reaching a peak power draw as high as 630 watts. A standard RTX 3090 can peak at up to 450 watts. The RTX 3080 can reach up to 350 watts, while a 3070 can reach a similar, if not slightly lower, peak power draw. Ampere cards of the Founders Edition variant feature a "dual axial flow-through" cooler design, with fans above and below the card to dissipate as much heat as possible towards the rear of the computer case. A similar design was used by the Sapphire Radeon RX Vega 56 Pulse graphics card.
== Size ==
Graphics cards for desktop computers come in several size profiles, which allows them to be added to smaller computers. Cards below the usual height are termed "low profile". Graphics card profiles are based on height only; low-profile cards take up less than the full height of a PCIe slot, and some are as short as "half-height". Length and thickness can vary greatly, with high-end cards usually occupying two or three expansion slots and modern high-end cards such as the RTX 4090 exceeding 300 mm in length. A lower-profile card is preferred when fitting multiple cards, or when a card would run into clearance issues with other motherboard components such as the DIMM or PCIe slots. Such issues can be avoided with a larger computer case, such as a mid-tower or full tower; full towers are usually also able to fit larger motherboards in sizes like ATX and micro-ATX.
=== GPU sag ===
In the late 2010s and early 2020s, some high-end graphics card models became so heavy that they can sag downwards after installation if not properly supported, which is why many manufacturers provide additional support brackets. GPU sag can damage a GPU in the long term.
== Multicard scaling ==
Some graphics cards can be linked together to scale graphics processing across multiple cards, using either the PCIe bus on the motherboard or, more commonly, a data bridge. Usually, the cards must be of the same model to be linked, and most low-end cards cannot be linked in this way. AMD and Nvidia both have proprietary scaling methods: CrossFireX for AMD, and SLI (superseded by NVLink since the Turing generation) for Nvidia. Cards from different chipset manufacturers or architectures cannot be used together for multi-card scaling. If the linked cards have different amounts of memory, the lowest value is used and the higher values are disregarded. Currently, scaling on consumer-grade cards can be done using up to four cards, which requires a large motherboard with a proper configuration; Nvidia's GeForce GTX 590 can be used in a four-card configuration, and motherboards including the ASUS Maximus 3 Extreme and Gigabyte GA EX58 Extreme are certified to work with it. As stated above, users should use cards with the same performance for optimal results. A large power supply is necessary to run the cards in SLI or CrossFireX, so power demands must be known before a proper supply is installed; a four-card configuration needs a supply of over 1000 watts. With any relatively powerful graphics card, thermal management cannot be ignored: graphics cards require a well-vented chassis and good thermal solutions. Air or water cooling is usually required, though low-end GPUs can use passive cooling. Larger configurations use water or immersion cooling to achieve proper performance without thermal throttling.
SLI and CrossFire have become increasingly uncommon, as most games do not fully utilize multiple GPUs and most users cannot afford them. Multiple GPUs are still used on supercomputers (such as Summit) and on workstations to accelerate video and 3D rendering, visual effects, and simulations, and for training artificial intelligence.
== 3D graphics APIs ==
A graphics driver usually supports one or multiple cards by the same vendor and has to be written for a specific operating system. Additionally, the operating system or an extra software package may provide certain programming APIs for applications to perform 3D rendering.
=== Specific usage ===
Some GPUs are designed with specific usage in mind:
Gaming
GeForce GTX
GeForce RTX
Nvidia Titan
Radeon HD
Radeon RX
Intel Arc
Cloud gaming
Nvidia Grid
Radeon Sky
Workstation
Nvidia Quadro
AMD FirePro
Radeon Pro
Intel Arc Pro
Cloud Workstation
Nvidia Tesla
AMD FireStream
Artificial Intelligence Cloud
Nvidia Tesla
Radeon Instinct
Automated/Driverless car
Nvidia Drive PX
== Industry ==
As of 2016, the primary suppliers of the GPUs (graphics chips or chipsets) used in graphics cards are AMD and Nvidia. In the third quarter of 2013, AMD had a 35.5% market share while Nvidia had 64.5%, according to Jon Peddie Research. In economics, this industry structure is termed a duopoly. AMD and Nvidia also build and sell graphics cards, which are termed graphics add-in-boards (AIBs) in the industry. (See Comparison of Nvidia graphics processing units and Comparison of AMD graphics processing units.) In addition to marketing their own graphics cards, AMD and Nvidia sell their GPUs to authorized AIB suppliers, which AMD and Nvidia refer to as "partners". The fact that Nvidia and AMD compete directly with their customers and partners complicates relationships in the industry. That AMD and Intel are direct competitors in the CPU industry is also noteworthy, since AMD-based graphics cards may be used in computers with Intel CPUs. Intel's integrated graphics may weaken AMD, since the latter derives a significant portion of its revenue from its APUs. As of the second quarter of 2013, there were 52 AIB suppliers. These AIB suppliers may market graphics cards under their own brands, produce graphics cards for private-label brands, or produce graphics cards for computer manufacturers. Some AIB suppliers, such as MSI, build both AMD-based and Nvidia-based graphics cards. Others, such as EVGA, build only Nvidia-based graphics cards, while XFX now builds only AMD-based graphics cards. Several AIB suppliers are also motherboard suppliers. Most of the largest AIB suppliers are based in Taiwan, and they include ASUS, MSI, GIGABYTE, and Palit. Hong Kong–based AIB manufacturers include Sapphire and Zotac, which sell graphics cards exclusively for AMD and Nvidia GPUs respectively.
== Market ==
Graphics card shipments peaked at a total of 114 million in 1999. By contrast, they totaled 14.5 million units in the third quarter of 2013, a 17% fall from Q3 2012 levels. Shipments reached an annual total of 44 million in 2015. The sales of graphics cards have trended downward due to improvements in integrated graphics technologies; high-end, CPU-integrated graphics can provide competitive performance with low-end graphics cards. At the same time, graphics card sales have grown within the high-end segment, as manufacturers have shifted their focus to prioritize the gaming and enthusiast market.
Beyond the gaming and multimedia segments, graphics cards have been increasingly used for general-purpose computing, such as big data processing. The growth of cryptocurrency placed exceptionally high demand on high-end graphics cards, especially in large quantities, due to their advantages in cryptocurrency mining. In January 2018, mid- to high-end graphics cards experienced a major surge in price, with many retailers having stock shortages due to the significant demand from this market. Graphics card companies released mining-specific cards designed to run 24 hours a day, seven days a week, and without video output ports. The graphics card industry suffered a setback during the 2020–21 chip shortage.
== Parts ==
A modern graphics card consists of a printed circuit board on which the components are mounted. These include:
=== Graphics processing unit ===
A graphics processing unit (GPU), also occasionally called visual processing unit (VPU), is a specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the building of images in a frame buffer intended for output to a display. Because of the large degree of programmable computational complexity for such a task, a modern graphics card is also a computer unto itself.
=== Heat sink ===
A heat sink is mounted on most modern graphics cards. It spreads the heat produced by the graphics processing unit evenly across the heat sink and the unit itself, and commonly has a fan mounted on it to cool both. Not all cards have heat sinks: some cards are liquid-cooled and instead have a water block, and cards from the 1980s and early 1990s did not produce much heat and did not require them. Most modern graphics cards need proper thermal solutions, whether water cooling or heat sinks with additional connected heat pipes, usually made of copper for the best thermal transfer.
=== Video BIOS ===
The video BIOS or firmware contains a minimal program for the initial set up and control of the graphics card. It may contain information on the memory and memory timing, operating speeds and voltages of the graphics processor, and other details which can sometimes be changed.
Modern video BIOSes do not support the full functionality of graphics cards; they are only sufficient to identify and initialize the card to display one of a few frame buffer or text display modes. They do not support YUV-to-RGB translation, video scaling, pixel copying, compositing, or any of the multitude of other 2D and 3D features of the graphics card, which must be accessed through software drivers.
=== Video memory ===
The memory capacity of most modern graphics cards ranges from 2 to 24 GB, with some cards offering up to 32 GB as of the late 2010s, as graphics applications have grown more powerful and widespread. Since video memory needs to be accessed by both the GPU and the display circuitry, it often uses special high-speed or multi-port memory, such as VRAM, WRAM, or SGRAM. Around 2003, video memory was typically based on DDR technology. During and after that year, manufacturers moved towards DDR2, GDDR3, GDDR4, GDDR5, GDDR5X, and GDDR6. The effective memory clock rate in modern cards is generally between 2 and 15 GHz.
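The effective memory clock rate matters because, together with the width of the memory bus, it sets the card's peak memory bandwidth: effective transfer rate multiplied by bus width in bytes. A small sketch with hypothetical figures (the 14 GT/s, 256-bit example is illustrative, not a specific card):

```python
def peak_bandwidth_gb_s(effective_rate_gts, bus_width_bits):
    """Peak theoretical memory bandwidth in GB/s: effective transfer
    rate (gigatransfers per second) times the bus width in bytes."""
    return effective_rate_gts * bus_width_bits / 8

# A hypothetical GDDR6 configuration: 14 GT/s effective on a 256-bit bus.
print(peak_bandwidth_gb_s(14, 256))  # 448.0
```

Sustained bandwidth in practice is lower than this peak because of refresh cycles, command overhead, and access patterns.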
Video memory may be used for storing other data as well as the screen image, such as the Z-buffer, which manages the depth coordinates in 3D graphics, as well as textures, vertex buffers, and compiled shader programs.
=== RAMDAC ===
The RAMDAC, or random-access-memory digital-to-analog converter, converts digital signals to analog signals for use by a computer display that uses analog inputs such as cathode-ray tube (CRT) displays. The RAMDAC is a kind of RAM chip that regulates the functioning of the graphics card. Depending on the number of bits used and the RAMDAC-data-transfer rate, the converter will be able to support different computer-display refresh rates. With CRT displays, it is best to work over 75 Hz and never under 60 Hz, to minimize flicker. (This is not a problem with LCD displays, as they have little to no flicker.) Due to the growing popularity of digital computer displays and the integration of the RAMDAC onto the GPU die, it has mostly disappeared as a discrete component. All current LCD/plasma monitors and TVs and projectors with only digital connections work in the digital domain and do not require a RAMDAC for those connections. There are displays that feature analog inputs (VGA, component, SCART, etc.) only. These require a RAMDAC, but they reconvert the analog signal back to digital before they can display it, with the unavoidable loss of quality stemming from this digital-to-analog-to-digital conversion. With the VGA standard being phased out in favor of digital formats, RAMDACs have started to disappear from graphics cards.
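The refresh rates a RAMDAC can support follow from its maximum pixel clock: each frame carries the visible pixels plus blanking intervals. A rough sketch (the 25% blanking allowance is a generic approximation of my own, not an exact CVT/GTF timing calculation):

```python
def required_pixel_clock_mhz(h_pixels, v_pixels, refresh_hz,
                             blanking_overhead=1.25):
    """Approximate pixel clock (MHz) a RAMDAC must sustain for a CRT
    mode, padding the visible pixel count by ~25% for blanking."""
    return h_pixels * v_pixels * refresh_hz * blanking_overhead / 1e6

# Driving 1024x768 at a flicker-free 85 Hz needs a pixel clock of roughly:
print(round(required_pixel_clock_mhz(1024, 768, 85)))  # 84
```

A RAMDAC rated below the required pixel clock would force a lower refresh rate or resolution, which is why higher-rate converters were a selling point for CRT use.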
=== Output interfaces ===
The most common connection systems between the graphics card and the computer display are:
==== Video Graphics Array (VGA) (DE-15) ====
Also known as D-sub or the VGA connector, VGA is an analog standard adopted in the late 1980s and designed for CRT displays. Today, the VGA analog interface is used for high-definition video resolutions including 1080p and higher. Some problems of this standard are electrical noise, image distortion, and sampling error in evaluating pixels. While the VGA transmission bandwidth is high enough to support even higher-resolution playback, picture quality can degrade depending on cable quality and length. The extent of the quality difference depends on the individual's eyesight and the display; when using a DVI or HDMI connection, especially on larger-sized LCD/LED monitors or TVs, quality degradation, if present, is prominently visible. Blu-ray playback at 1080p is possible via the VGA analog interface if Image Constraint Token (ICT) is not enabled on the Blu-ray disc.
==== Digital Visual Interface (DVI) ====
Digital Visual Interface is a digital standard designed for displays such as flat-panel displays (LCDs, plasma screens, wide high-definition television displays) and video projectors; some rare high-end CRT monitors also used DVI. It avoids image distortion and electrical noise by mapping each pixel from the computer to a display pixel, using the display's native resolution. Most manufacturers include a DVI-I connector, which allows (via a simple adapter) standard RGB signal output to an older CRT or LCD monitor with VGA input.
==== Video-in video-out (VIVO) for S-Video, composite video and component video ====
These connectors are included to allow connection with televisions, DVD players, video recorders and video game consoles. They often come in two 10-pin mini-DIN connector variations, and the VIVO splitter cable generally comes with either 4 connectors (S-Video in and out plus composite video in and out), or 6 connectors (S-Video in and out, component YPBPR out and composite in and out).
==== High-Definition Multimedia Interface (HDMI) ====
HDMI is a compact audio/video interface for transferring uncompressed video data and compressed/uncompressed digital audio data from an HDMI-compliant device ("the source device") to a compatible digital audio device, computer monitor, video projector, or digital television. HDMI is a digital replacement for existing analog video standards. HDMI supports copy protection through HDCP.
==== DisplayPort ====
DisplayPort is a digital display interface developed by the Video Electronics Standards Association (VESA). The interface is primarily used to connect a video source to a display device such as a computer monitor, though it can also be used to transmit audio, USB, and other forms of data.
The VESA specification is royalty-free. VESA designed it to replace VGA, DVI, and LVDS. Backward compatibility with VGA and DVI through adapter dongles enables consumers to use DisplayPort video sources without replacing existing display devices. Although DisplayPort offers much of the same functionality as HDMI with greater throughput, it is expected to complement HDMI, not replace it.
==== USB-C ====
==== Other types of connection systems ====
=== Motherboard interfaces ===
Chronologically, connection systems between graphics card and motherboard were, mainly:
S-100 bus: Designed in 1974 as a part of the Altair 8800, it is the first industry-standard bus for the microcomputer industry.
ISA: Introduced in 1981 by IBM, it became dominant in the marketplace in the 1980s. It is an 8- or 16-bit bus clocked at 8 MHz.
NuBus: Used in Macintosh II, it is a 32-bit bus with an average bandwidth of 10 to 20 MB/s.
MCA: Introduced in 1987 by IBM, it is a 32-bit bus clocked at 10 MHz.
EISA: Released in 1988 to compete with IBM's MCA, it was compatible with the earlier ISA bus. It is a 32-bit bus clocked at 8.33 MHz.
VLB: An extension of ISA, it is a 32-bit bus clocked at 33 MHz. Also referred to as VESA.
PCI: Replaced the EISA, ISA, MCA and VESA buses from 1993 onwards. PCI allowed dynamic connectivity between devices, avoiding the manual adjustments required with jumpers. It is a 32-bit bus clocked at 33 MHz.
UPA: An interconnect bus architecture introduced by Sun Microsystems in 1995. It is a 64-bit bus clocked at 67 or 83 MHz.
USB: Although mostly used for miscellaneous devices, such as secondary storage devices or peripherals and toys, USB displays and display adapters exist. It was first used in 1996.
AGP: First used in 1997, it is a dedicated-to-graphics bus. It is a 32-bit bus clocked at 66 MHz.
PCI-X: An extension of the PCI bus, it was introduced in 1998. It improves upon PCI by extending the width of bus to 64 bits and the clock frequency to up to 133 MHz.
PCI Express: Abbreviated as PCIe, it is a point-to-point interface released in 2004. In 2006, it provided a data-transfer rate that is double of AGP. It should not be confused with PCI-X, an enhanced version of the original PCI specification. This is standard for most modern graphics cards.
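The throughput differences among these later buses can be estimated from their per-lane transfer rates and line-code overhead. A sketch of per-direction PCIe x16 throughput by generation (the per-lane rates and encodings are the published ones; the function name and rounding are mine):

```python
# Per-lane signaling rate (GT/s) and line-code efficiency for early PCIe
# generations: gens 1-2 use 8b/10b encoding, gens 3-4 use 128b/130b.
PCIE_GEN = {
    1: (2.5, 8 / 10),
    2: (5.0, 8 / 10),
    3: (8.0, 128 / 130),
    4: (16.0, 128 / 130),
}

def x16_bandwidth_gb_s(gen, lanes=16):
    """Per-direction throughput in GB/s for a PCIe link."""
    rate_gts, efficiency = PCIE_GEN[gen]
    return rate_gts * efficiency * lanes / 8

print(x16_bandwidth_gb_s(1))            # 4.0   (PCIe 1.0 x16)
print(round(x16_bandwidth_gb_s(3), 2))  # 15.75 (PCIe 3.0 x16)
```

By this estimate, a PCIe 1.0 x16 link's roughly 4 GB/s per direction is about double AGP 8x's roughly 2.1 GB/s, consistent with the comparison in the text.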
The following table is a comparison between features of some interfaces listed above.
== See also ==
List of computer hardware
List of graphics card manufacturers
List of computer display standards – a detailed list of standards like SVGA, WXGA, WUXGA, etc.
AMD (ATI), Nvidia – quasi duopoly of 3D chip GPU and graphics card designers
GeForce, Radeon, Intel Arc – examples of graphics card series
GPGPU (i.e.: CUDA, AMD FireStream)
Framebuffer – the computer memory used to store a screen image
Capture card – the inverse of a graphics card
== References ==
== Sources ==
Mueller, Scott (2005) Upgrading and Repairing PCs. 16th edition. Que Publishing. ISBN 0-7897-3173-8
== External links ==
How Graphics Cards Work at HowStuffWorks | Wikipedia/Graphics_card |
Sustainable furniture design and sustainable interior design is the design of a habitable interior using furniture, finishes, and equipment while addressing the environmental impact of products and building materials used. By considering the life-cycle impact of each step, from raw material through the manufacturing process and through the product's end of life, sustainable choices can be made. Design considerations can include using recycled materials in the manufacturing process, reutilizing found furniture and using products that can be disassembled and recycled or reclaimed after their useful life. Another method of approach is working with local materials and vendors as a source for raw materials or products. Sustainable furniture design strives to create a closed-loop cycle in which materials and products are perpetually recycled so as to avoid disposal in landfills.
== Principles ==
The principles of sustainable interior design will allow designers to reduce negative effects on the environment and build for a more sustainable future. These principles include the following:
Energy efficiency
Low environmental impact
Reduce waste
Use of healthy materials
Create healthy indoor environments
== Certifications ==
CLIMATE NEUTRAL Certified
Cradle to Cradle
FSC (Forest Stewardship Council)
FTC (Fair Trade Certified)
GREENGUARD
GRS (Global Recycling Standard)
Global Organic Latex Standard
Global Organic Textiles Standard
GreenScreen
ISO14001
ISO9001
Indoor Advantage
LEVEL by BIFMA
MADESAFE
OEKO-TEX
== References ==
== Further reading ==
McDonough, W. & Braungart M. (2002). Cradle to Cradle: Rethinking the way we make things. New York, NY: North Point Press
== External links ==
MBDC Cradle to Cradle Products
EPEA GmbH
Inclusive design is a design process in which a product, service, or environment is designed to be usable for as many people as possible, particularly groups who are traditionally excluded from being able to use an interface or navigate an environment. Its focus is on fulfilling as many user needs as possible, not just as many users as possible. Historically, inclusive design has been linked to designing for people with physical disabilities, and accessibility is one of the key outcomes of inclusive design. However, rather than focusing on designing for disabilities, inclusive design is a methodology that considers many aspects of human diversity that could affect a person's ability to use a product, service, or environment, such as ability, language, culture, gender, and age. The Inclusive Design Research Center reframes disability as a mismatch between the needs of a user and the design of a product or system, emphasizing that disability can be experienced by any user. With this framing, it becomes clear that inclusive design is not limited to interfaces or technologies, but may also be applied to the design of policies and infrastructure.
Three dimensions in inclusive design methodology identified by the Inclusive Design Research Centre include:
Recognize, respect, and design with human uniqueness and variability.
Use inclusive, open, and transparent processes, and co-design with people who represent a diversity of perspectives.
Realize that you are designing in a complex adaptive system, where changes in a design will influence the larger systems that utilize it.
Further iterations of inclusive design include product inclusion, a practice of bringing an inclusive lens throughout development and design. This term suggests looking at multiple dimensions of identity including race, age, gender and more.
== History ==
In the 1950s, Europe, Japan, and the United States began to move towards "barrier-free design", which sought to remove obstacles in built environments for people with physical disabilities. By the 1970s, the emergence of accessible design began to move past the idea of building solutions specifically for individuals with disabilities towards normalization and integration. In 1973, the United States passed the Rehabilitation Act, which prohibits discrimination on the basis of disability in programs conducted by federal agencies, a crucial step towards recognizing that accessible design was a condition for supporting people's civil rights. In May 1974, the magazine Industrial Design published an article, "The Handicapped Majority," which argued that handicaps were not a niche concern and 'normal' users suffered from poor design of products and environments as well.
Clarkson and Coleman describe the emergence of inclusive design in the United Kingdom as a synthesis of existing projects and movements. Coleman also published the first reference to the term in 1994 with The Case for Inclusive Design, a presentation at the 12th Triennial Congress of the International Ergonomics Association. Much of this early work was inspired by an aging population and people living longer into old age, as voiced by scholars like Peter Laslett. Public focus on accessibility further increased with the passage of the Americans with Disabilities Act of 1990, which expanded the responsibility of accessible design to include both public and private entities.
In the 1990s, the United States followed the United Kingdom in shifting focus from universal design to inclusive design. Around this time, Selwyn Goldsmith (in the UK) and Ronald 'Ron' Mace (in the US), two architects who had both survived polio and were wheelchair users, advocated for an expanded view of design for everyone. Along with Mace, nine other authors from five organizations in the United States developed the Principles of Universal Design in 1997. In 1998, the United States amended Section 508 of the Rehabilitation Act to include inclusivity requirements for the design of information and technology.
In 2016, the Design for All Showcase at the White House featured a panel on inclusive design. The show featured clothing and personal devices either on the market or in development, modeled by disabled people. Rather than treating accessible and inclusive design as a product of compliance to legal requirements, the showcase positioned disability as a source of innovation.
== Related concepts and terms ==
Inclusive design is often equated to accessible or universal design, as all three concepts are related to ensuring that products are usable by all people.
=== Accessibility ===
Accessibility is oriented towards the outcome of ensuring that a product supports individual users' needs. Accessible design is often based upon compliance with government- or industry-designated guidelines, such as Americans with Disabilities Act (ADA) Accessibility Standards or Web Content Accessibility Guidelines (WCAG). As a result, it is limited in scope and often focuses on specific accommodations to ensure that people with disabilities have access to products, services, or environments. In contrast, inclusive design considers the needs of a wider range of potential users, including those with capability impairments that may not be legally recognized as disabilities. Inclusive design seeks out cases of exclusion from a product or environment, regardless of the cause, and seeks to reduce that exclusion. For example, a design that aims to reduce safety risks for people suffering from age-related long-sightedness would be best characterized as an inclusive design. Inclusive design also looks beyond resolving issues of access to improving the overall user experience.
As a result, accessibility is one piece of inclusive design, but not the whole picture. In general, designs created through an inclusive design process should be accessible, as the needs of people with different abilities are considered during the design process. But accessible designs are not necessarily inclusive if they do not move beyond providing access to people of different abilities and consider the wider user experience for different types of people, particularly those who may not have recognized, common cognitive or physical disabilities.
=== Universal design ===
Universal design is design for everyone: the term was coined by Ronald Mace in 1980, and its aim is to produce designs that all people can use fully, without the need for adaptations. Universal design originated in work on the design of built environments, though its focus has expanded to encompass digital products and services as well.
Universal design principles include usefulness to people with diverse abilities; intuitive use regardless of user's skill level; perceptible communication of necessary information; tolerance for error; low physical effort; and appropriate size and space for all users. Many of these principles are compatible with accessible and inclusive design, but universal design typically provides a single solution for a large user base, without added accommodations. Therefore, while universal design supports the widest range of users, it does not aim to address individual accessibility needs. Inclusive design acknowledges that it is not always possible for one product to meet every user's needs, and thus explores different solutions for different groups of people.
=== Design justice ===
"Design justice" is a term coined to describe designing systems based on historical inequalities. As articulated by the Design Justice Network, it aims to “rethink design processes, center individuals who are typically marginalized by design, and employ collaborative, creative practices to tackle the most pressing challenges faced by communities.” By emphasizing the viewpoints of those historically excluded from design decisions, Design Justice seeks to redistribute power and promote inclusivity within systems, environments, and products.
== Approaches to inclusive design ==
In general, inclusive design involves engaging with users and seeking to understand their needs. Frequently, inclusive design approaches include steps such as: developing empathy for the needs and contexts of potential users; forming diverse teams; creating and testing multiple solutions; encouraging dialogue regarding a design rather than debate; and using structured processes that guide conversations toward productive outcomes.
=== Principles of Universal Design (1997) ===
Five United States organizations—including the Institute for Human Centered Design (IHCD) and Ronald Mace at North Carolina State University—developed the Principles of Universal Design in 1997. The IHCD has since shifted the language of the principles from 'universal' to 'inclusive.'
Equitable Use: Any group of users can use the design.
Flexibility in Use: A wide range of preferences and abilities is accommodated.
Simple, Intuitive Use: Regardless of the user's prior experience or knowledge, the use of the design is easy to understand.
Perceptible Information: Any necessary information is communicated to the user, regardless of environment or user abilities.
Tolerance for Error: Any adverse or hazardous consequences of actions is minimized.
Low Physical Effort: The design can be used efficiently and comfortably.
Size and Space for Approach & Use: Regardless of the user's body size, posture, or mobility, there is appropriate size and space to approach and use the design.
=== UK Commission for Architecture and Built Environment (2006) ===
The Commission for Architecture and the Built Environment (CABE) is an arm of the UK Design Council, which advises the government on architecture, urban design and public space. In 2006, they created the following set of inclusive design principles:
Inclusive: Everyone can use it safely, easily, and with dignity.
Responsive: Takes account of what people say they need and want.
Flexible: Different people can use it differently.
Convenient: Usable without too much effort.
Accommodating: For all people, regardless of age, gender, mobility, ethnicity, or circumstances.
Welcoming: No disabling barriers that might exclude some people.
Realistic: More than one solution to address differing needs.
Understandable: Everyone can locate and access it.
=== Inclusive Design Toolkit ===
The University of Cambridge's Inclusive Design Toolkit advocates incorporating inclusive design elements throughout the design process in iterative cycles of:
Exploring the needs
Creating solutions
Evaluating how well the needs are met
=== Corporate inclusive design approaches ===
Microsoft emphasizes the role of learning from people who represent different perspectives in their inclusive design approach. They advocate for the following steps:
Recognize exclusion: Open up products and services to more people.
Solve for one, extend to many: Designing for people with disabilities tends to result in designs that benefit other user groups as well.
Learn from diversity: Center people from the start of the design process, and develop insight from their perspectives.
At Adobe, the inclusive design process begins with identification of situations where people are excluded from using a product. They describe the following principles of inclusive design:
Identify ability-based exclusion: Proactively understand how and why people are excluded.
Identify situational challenges: These are specific scenarios where a user is unable to use a product effectively, such as when an environmental circumstance makes it difficult to use a design. For example, if a video does not include closed-captioning, it may be difficult to understand the audio in a noisy environment.
Avoid personal biases: Involve people from different communities throughout the design process.
Offer various ways to engage: When a user is given different options, they can choose a method that serves them best.
Provide equivalent experiences to all users: When designing different ways for people to engage with your product, ensure that the quality of these experiences is comparable for each user.
At Google, the inclusive design process is called product inclusion; it looks at 13 dimensions of identity and the intersections of those dimensions throughout the product development and design process.
=== Participatory design ===
Participatory design is rooted in the design of Scandinavian workplaces in the 1970s, and is based on the idea that those affected by a design should be consulted during the design process. Designers anticipate how users will actually use a product; rather than focusing merely on designing a useful product, the whole infrastructure is considered, the goal being to design a good environment for the product in use. This methodology treats the challenge of design as an ongoing process. Further, rather than viewing the design process in phases, such as analysis, design, construction, and implementation, the participatory design approach looks at projects in terms of a collection of users and their experiences.
== Examples of inclusive design ==
There are numerous examples of inclusive design that apply to interfaces and technology, consumer products, and infrastructure.
=== Interfaces and technology ===
Text legibility for older users: To ensure legibility of text for users of all ages, designers must use "reasonably large font sizes, have high contrast between characters in the foreground and background, and use a clean typeface." These design elements are beneficial to all users of an interface, but they are implemented to address the needs of senior users in particular. Other inclusive design solutions include adding buttons that allow users to adjust the font size of a website to their liking, or giving users the option to switch to a "dark mode" that is easier on the eyes for some users.
Assistive clothing: Individuals with physical disabilities may experience difficulties with dressing or undressing. In recent years, inclusive design practices have been implemented in the fashion industry by brands including Kohl's, Nike, Target, Tommy Hilfiger, and Zappos. Such adaptive clothing products can include magnets or velcro closures on the sides of pants, detachable zippered sleeves on tops, and natural fibers that ensure breathability, temperature control, and overall comfort.
=== Consumer products ===
The ROPP Closure: Because many elderly and disabled people do not have the strength needed to open packaging, a project was conducted that centered on modifying the packaging of plastic and glass containers. The roll-on pilfer-proof (ROPP) closure, a design used to seal spirit bottles, was used as a model to determine the strength consumers can exert and the physical strength required to open glass and plastic containers while also keeping the container properly sealed. If the strength limits of consumers and the design limits of the ROPP closure are reconciled, the majority of the public will be able to open a container, and the containers will remain fully sealed.
=== Infrastructure ===
Playgrounds/parks in urban areas: Children are often excluded from accessible public places in cities due to safety concerns, so spatial planners design playgrounds and parks within cities in order to give children the opportunity to freely examine their curiosities. These areas are significant for a child's growth because children can socialize with each other, explore their surroundings, and stay physically active without worrying about any common dangers that can occur in an urban environment.
Musholm: Musholm is a sports center located in Denmark which contains a 110 meter-long activity ramp with landings and recreational zones, where wheelchair users can use facilities such as a climbing wall and a cable lift. The initial objective for the designers of this building was making sure that the facilities were accessible to everyone.
The Friendship Park: Located in Uruguay, this park was built so that it can be easily accessible for every child, especially children with disabilities. The park has swings available for children in wheelchairs, wide walkways, curved corners instead of sharp edges, and flooring that is cushioned and slip resistant. These features were added in order "to not only make the space safe, but to also make the space easy to use."
Tactile pavement in urban areas: Visually impaired individuals may have a hard time navigating their surroundings. Implementing tactile pavement with distinct, identifiable textures allows them to understand what lies ahead. Indicators include blister paving, laid around crosswalks to signal a road crossing ahead; cycleway tactile paving, with lined tiles along the path, which alerts people to a cyclist route; and directional tactile paving, which indicates the direction of a sidewalk when no other cues are present on the street.
== See also ==
Feminist design
Universal design
Transgenerational design
Empathic design
Human-centered design
== References ==
== Further reading ==
The no 1 thing you're getting wrong about inclusive design
What is inclusive design?
Inclusive design principles
Inclusive design at IBM
In design, New Wave or Swiss Punk Typography refers to an approach to typography that defies strict grid-based arrangement conventions. Characteristics include inconsistent letterspacing, varying type weights within single words, and type set at non-right angles.
== Description ==
New Wave design was influenced by Punk and postmodern language theory, but there is debate as to whether New Wave is a break from, or a natural progression of, the Swiss Style. Sans-serif fonts still predominate, but New Wave differs from its predecessor by stretching the limits of legibility. The break from the grid structure meant that type could be set centered, ragged left, ragged right, or chaotically. This artistic freedom produced common forms such as the bold stairstep. The text hierarchy also strayed from the top-down approach of the International Style. Text became textured with the development of transparent film and the increase of collage in graphic design. Further breakdown of the minimalist aesthetic is seen in the increase in the number of type sizes and colours of fonts. Although punk and psychedelia embody the anti-corporate nature of their respective groups, the similarity between New Wave and the International Style has led some to label New Wave as "softer, commercialized punk culture."
== History ==
Wolfgang Weingart is credited with developing New Wave typography in the early 1970s at the Basel School of Design, Switzerland. New Wave, along with other postmodern typographical styles such as Punk and Psychedelia, arose as a reaction to the International Typographic Style, or Swiss Style, which was very popular with corporate culture. The International Typographic Style embodied the modernist aesthetic of minimalism, functionality, and logical universal standards. The postmodernist aesthetic rejected the "less is more" philosophy, asserting that typography can play a more expressive role and can include ornamentation to achieve this. The increase in expression aimed to improve communication; New Wave designers such as Weingart therefore felt intuition was just as valuable as analytical skill in composition. The outcome was increased kinetic energy in designs.
The adoption of New Wave typography in the United States came through multiple channels. Weingart gave a lecture tour on the topic in the early 1970s, which increased the number of American graphic designers who traveled to the Basel School for postgraduate training that they then brought back to the States. Prominent students from Weingart's classes include April Greiman, Dan Friedman, and Willi Kunz (b. 1943). They further developed the style; for example, Dan Friedman rejected the term legibility for the broader term readability. The increase in ornamentation was further developed by William Longhauser, as seen in the playful lettering used to display an architectural motif in an exhibition poster for Michael Graves. Another strong contributor to the New Wave movement was the Cranbrook Academy of Art and its co-chair of graphic design, Katherine McCoy. McCoy asserted that "reading and viewing overlap and interact synergistically in order to create a holistic effect that features both modes of interpretation."
The complexity of composition increased with the New Wave which transitioned well into computer developed graphic design. Complexity came to define the new digital aesthetic in graphic design. April Greiman was one of the first graphic designers to embrace computers and the New Wave aesthetic is still visible in her digital works.
== Important figures ==
=== Wolfgang Weingart ===
Weingart was a German graphic designer, known as the father of New Wave design. According to Weingart, he took "Swiss Typography" as his inspiration and considered himself a "typographic rebel".
Weingart began studying at the Merz Academy in Stuttgart, Germany. While there, he developed skills such as linocutting, woodblock printing, and typesetting. In 1963, Weingart moved to Basel, Switzerland and attended the Basel School of Design. In 1968, he was asked to teach typography in the institution's newly established department, the Weiterbildungsklasse für Grafik.
Weingart taught typography at the school. When the computer was introduced, Weingart was given the first personal Macintosh computer for his teaching. Like his colleagues, Weingart was uncertain about the new technologies, and his limited use of technology can be seen in his work today.
=== Dan Friedman ===
Dan Friedman, a student of Wolfgang Weingart, attended Carnegie Mellon University and studied abroad in Ulm, Germany, for his master's degree in graphic design. When the Ulm school became unstable, Friedman transferred to the Allgemeine Gewerbeschule in Basel, Switzerland, where he was instructed by Weingart. Friedman then began teaching graphic design full time at Yale in 1969.
He created projects for his students that reflected what he had been taught at Ulm and Basel. In 1972, Friedman accepted another teaching post as an Assistant Professor of the Board of Study in Design at the State University of New York. A year later, he left Yale.
=== April Greiman ===
April Greiman is a contemporary American graphic designer. Like Weingart, she is one of the first designers to have used technology in her work. She is recognized for introducing "New Wave" into the U.S.
Greiman is the Art Director for Made in Space, based in Los Angeles.
As a student, Greiman attended the Kansas City Art Institute and then, in the 1970s, went to Basel, Switzerland. She became a student at the Basel School of Design, where she was taught by Wolfgang Weingart and adopted the "New Wave" design style.
Currently, Greiman works at Woodbury University, School of Architecture, as an art professor.
== Origins of term ==
In music, the term "New Wave" dates from the late 1970s to early 1980s and was inspired by French New Wave cinema.
== References ==
Ecological design or ecodesign is an approach to designing products and services that gives special consideration to the environmental impacts of a product over its entire lifecycle. Sim Van der Ryn and Stuart Cowan define it as "any form of design that minimizes environmentally destructive impacts by integrating itself with living processes." Ecological design can also be defined as the process of integrating environmental considerations into design and development with the aim of reducing environmental impacts of products through their life cycle.
The idea helps connect scattered efforts to address environmental issues in architecture, agriculture, engineering, and ecological restoration, among others. The term was first used by Sim Van der Ryn and Stuart Cowan in 1996. Ecological design was originally conceptualized as the "adding in" of environmental factors to the design process, but attention later turned to the details of eco-design practice, such as a product system, an individual product, or an industry as a whole. With the inclusion of life cycle modeling techniques, ecological design became related to the new interdisciplinary subject of industrial ecology.
== Overview ==
As the whole product's life cycle should be regarded in an integrated perspective, representatives from advanced product design, production, marketing, purchasing, and project management should work together on the Ecodesign of a further developed or new product. Together, they have the best chance to predict the holistic effects of changes to the product and their environmental impact. Consideration of ecological design during product development is a proactive approach to eliminating environmental pollution due to product waste.
An eco-design product may have a cradle-to-cradle life cycle ensuring zero waste is created in the whole process. By mimicking life cycles in nature, eco-design can serve as a concept to achieve a truly circular economy.
Environmental aspects which ought to be analysed for every stage of the life cycle are:
Consumption of resources (energy, materials, water or land area)
Emissions to air, water, and the ground (our Earth) as being relevant for the environment and human health, including noise emissions
Waste (hazardous waste and other waste defined in environmental legislation) is only an intermediate step; the final emissions to the environment (e.g. methane and leachate from landfills) are inventoried. All consumables, materials and parts used in the life cycle phases are accounted for, as are all indirect environmental aspects linked to their production.
The environmental aspects of the phases of the life cycle are evaluated according to their environmental impact on the basis of a number of parameters, such as extent of environmental impact, potential for improvement, or potential of change.
According to this ranking the recommended changes are carried out and reviewed after a certain time.
As the impact of design and the design process has evolved, designers have become more aware of their responsibilities. The design of a product unrelated to its sociological, psychological, or ecological surroundings is no longer possible or acceptable in modern society.
With respect to these concepts, online platforms dealing in only Ecodesign products are emerging, with the additional sustainable purpose of eliminating all unnecessary distribution steps between the designer and the final customer.
Another area of ecological design is designing with urban ecology in mind: similar to conservation biology, designers take the natural world into account when designing landscapes, buildings, or anything that shapes interactions with wildlife. One such example in architecture is the green roof, a space where nature can interact with the man-made environment and where humans benefit from the design technology. Another is landscape architecture's creation of natural gardens and natural landscapes, which allow wildlife to thrive in urban centres.
Multifunctionality increases consumption in both production and use, so products can also be made ecologically intentional through a series of undesign approaches. The core forms of undesign include self-inhibition, exclusion, removal, replacement, restoration, and safeguarding. This sustainability strategy emphasizes the long-term interests of the planet and is related to environmental restoration and technological mindfulness. An example of undesign is digital detox, which refers to voluntarily limiting the use of digital media. Terms such as "non-use," "unplugging," and "digital disconnection" are included in the strategic features of some current productivity applications. Application designers effectively distance users from digital interfaces by limiting screen time and reducing smartphone distractions, thereby reducing energy consumption.
== Ecological design issues and the role of designers ==
=== The rise and conceptualization of ecological design ===
Since the Industrial Revolution, design fields have been criticized for employing unsustainable practices. The architect-designer Victor Papanek (1923–1998) suggested that industrial design has done great harm by creating new species of permanent garbage and by choosing materials and processes that pollute the air. Papanek states that the designer-planner shares responsibility for nearly all of our products and tools, and hence, nearly all of our environmental mistakes. To address these issues, R. Buckminster Fuller (1895–1983) demonstrated how design could play a central role in identifying and addressing major world problems. Fuller was concerned with the Earth's finite energy and natural resources, and with how to integrate machine tools into efficient systems of industrial production. He promoted the principle of "ephemeralization", a term he coined for doing "more with less" and increasing technological efficiency. This concept is key in ecological design that works towards sustainability. In 1986, the design theorist Clive Dilnot argued that design must once again become a means of ordering the world rather than merely of shaping products.
Despite rising ecological awareness in the 20th century, unsustainable design practices continued. The 1992 conference "The Agenda 21: The Earth Summit Strategy to Save Our Planet” put forward a proposition that the world is on a path of energy production and consumption that cannot be sustained. The report drew attention to individuals and groups around the world who have a set of principles to develop strategies for change among many aspects of society, including design. More broadly, the conference emphasized that designers must address human issues. These problems included six items: quality of life, efficient use of natural resources, protecting the global commons, managing human settlements, the use of chemicals and the management of human industrial waste, and fostering sustainable economic growth on a global scale.
Though Western society has only recently espoused ecological design principles, indigenous peoples have long coexisted with the environment. Scholars have discussed the importance of acknowledging and learning from Indigenous peoples and cultures to move towards a more sustainable society. Indigenous knowledge is valuable in ecological design as well as other ecological realms such as restoration ecology.
=== Sustainable development issues ===
These concepts of design tie into the concept of sustainable development. The three pillars addressed in sustainable development are: ecological integrity, social equity, and economic security. Gould and Lewis argue in their book Green Gentrification that urban redevelopment and projects have neglected the social equity pillar, resulting in development that focuses on profit and deepens social inequality. One result of this is green or environmental gentrification. This process is often the result of good intentions to clean up an area and provide green amenities, but without setting protections in place for existing residents to ensure they are not forced out by increased property values and influxes of new wealthier residents.
Unhoused persons are one particularly vulnerable affected population of environmental gentrification. Government environmental planning agendas related to green spaces may lead to the displacement and exclusion of unhoused individuals, under a guise of pro-environmental ethics. One example of this type of design is hostile architecture in urban parks. Park benches designed with metal arched bars to prevent a person from laying on the bench restricts who benefits from green space and ecological design.
== Life Cycle Analysis ==
Life Cycle Analysis (LCA) is a tool used to understand how a product impacts the environment at each stage of its life cycle, from raw inputs to the end of the product's life. Life Cycle Cost (LCC) is an economic metric that "identifies the minimum cost for each life cycle stage which would be presented in the aspects of material, procedures, usage, end-of-life and transportation." LCA and LCC can be used to identify aspects of a product that are particularly environmentally damaging and to reduce those impacts. For example, LCA might reveal that the fabrication stage of a product's life cycle is especially harmful for the environment and that switching to a different material could drive emissions down. However, switching material may increase environmental effects later in a product's lifetime; because LCA takes the whole life cycle into account, it can alert designers to such trade-offs across all of a product's impacts.
Some of the factors that LCA takes into account are the costs and emissions of:
Transportation
Materials
Production
Usage
End-of-life
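The stage-by-stage accounting listed above can be sketched in a few lines of code. The stage names, material options, and emission figures below are invented for illustration only; a real LCA would draw them from a measured inventory dataset:

```python
# Illustrative sketch of LCA-style aggregation: sum the impact attributed to
# each life-cycle stage, then compare two hypothetical material choices.
# All numbers are made up for demonstration, not real inventory data.

STAGES = ["materials", "production", "transportation", "usage", "end_of_life"]

# Hypothetical per-stage emissions in kg CO2-equivalent per unit of product.
emissions = {
    "virgin_plastic":   {"materials": 4.0, "production": 1.5,
                         "transportation": 0.6, "usage": 0.2, "end_of_life": 2.1},
    "recycled_plastic": {"materials": 1.2, "production": 1.8,
                         "transportation": 0.6, "usage": 0.2, "end_of_life": 2.1},
}

def life_cycle_total(profile):
    """Total impact over all stages -- the core cradle-to-grave aggregation."""
    return sum(profile[stage] for stage in STAGES)

totals = {name: life_cycle_total(profile) for name, profile in emissions.items()}
best = min(totals, key=totals.get)
print(best, totals[best])
```

Note how the recycled option costs slightly more at the production stage but far less at the materials stage; only the whole-life-cycle sum reveals which choice is better overall, which is the trade-off awareness LCA is meant to provide.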
End-of-life, or disposal, is an important aspect of LCA, as waste management is a global issue, with trash found everywhere around the world from the ocean to within organisms. A framework titled EcoSWaD (Ecological Sustainability of Waste Disposal Sites) was developed to assess the sustainability of waste sites. The model focuses on five major concerns: (1) location suitability, (2) operational sustainability, (3) environmental sustainability, (4) socioeconomic sustainability, and (5) site capacity sustainability. This framework was only developed in 2021; as such, most established waste disposal sites do not take these factors into consideration. Waste facilities such as dumps and incinerators are disproportionately placed in areas with low education and income levels, burdening these vulnerable populations with pollution and exposure to hazardous materials. For example, legislation in the United States, such as the Cerrell Report, has encouraged these types of classist and racist processes for siting incinerators. Internationally, there has been a global 'race to the bottom' in which polluting industries move to areas with fewer restrictions and regulations on emissions, usually in developing countries, disproportionately exposing vulnerable and impoverished populations to environmental threats. These factors make LCA and sustainable waste sites important on a global scale.
== Urban Ecological Design ==
Related to ecological urbanism, Urban Ecological Design integrates aesthetic, social, and ecological concerns into an urban design framework that seeks to increase ecological functioning, sustainably generate and consume resources, and create resilient built environments and the infrastructure to maintain them. Urban ecological design is inherently interdisciplinary: it integrates multiple academic and professional fields including environmental studies, sociology, justice studies, urban ecology, landscape ecology, urban planning, architecture, and landscape architecture. Urban ecological design aims to solve issues related to multiple large-scale trends including the growth of urban areas, climate change, and biodiversity loss. It has been described as a "process model", contrasted with a normative approach that outlines principles of design. Urban ecological design blends a multitude of frameworks and approaches to create solutions to these issues by improving urban resilience, sustainably using and managing resources, and integrating ecological processes into the urban landscape.
== Applications in design ==
EcoMaterials, such as the use of local raw materials, are less costly and reduce the environmental costs of shipping, fuel consumption, and CO₂ emissions generated from transportation. Certified green building materials, such as wood from sustainably managed forest plantations, with accreditations from companies such as the Forest Stewardship Council (FSC), or the Pan-European Forest Certification Council (PEFCC), can be used.
Several other types of components and materials can be used in sustainable objects and buildings. Recyclable and recycled materials are commonly used in construction, but it is important that they don't generate any waste during manufacture or after their life cycle ends. Reclaimed materials such as timber at a construction site or junkyard can be given a second life by reusing them as support beams in a new building or as furniture. Stones from an excavation can be used in a retaining wall. The reuse of these items means that less energy is consumed in making new products and a new natural aesthetic quality is achieved.
=== Architecture ===
Off-grid homes use only clean electric power. They are completely disconnected from the conventional electricity grid and receive their power supply by harnessing active or passive energy systems. Off-grid homes are also not served by other publicly or privately managed utilities, such as water and gas.
=== Art ===
Increased applications of ecological design have gone along with the rise of environmental art. Recycling has been used in art since the early part of the 20th century, when cubist artist Pablo Picasso (1881–1973) and Georges Braque (1882–1963) created collages from newsprints, packaging and other found materials. Contemporary artists have also embraced sustainability, both in materials and artistic content. One modern artist who embraces the reuse of materials is Bob Johnson, creator of River Cubes. Johnson promotes "artful trash management" by creating sculptures from garbage and scraps found in rivers. Garbage is collected, then compressed into a cube that represents the place and people it came from.
=== Clothing ===
There are some clothing companies that are using several ecological design methods to change the future of the textile industry into a more environmentally friendly one. Some approaches include recycling used clothing to minimize the use of raw resources, using biodegradable textile materials to reduce the lasting impact on the environment, and using plant dyes instead of poisonous chemicals to improve the appearance and impact of fabric.
=== Decorating ===
The same principle can be used inside the home, where found objects are now displayed with pride and collecting certain objects and materials to furnish a home is now admired rather than looked down upon. Take for example the electric wire reel reused as a center table.
There is a huge demand in Western countries to decorate homes in a "green" style. A lot of effort is placed into recycled product design and the creation of a natural look. This ideal is also a part of developing countries, although their use of recycled and natural products is often based in necessity and wanting to get maximum use out of materials. The focus on self-regulation and personal lifestyle changes (including decorating as well as clothing and other consumer choices) has shifted questions of social responsibility away from government and corporations and onto the individual.
Biophilic design is a concept used within the building industry to increase occupant connectivity to the natural environment through the use of direct nature, indirect nature, and space and place conditions.
== Active systems ==
These systems use the principle of harnessing the power generated from renewable and inexhaustible sources of energy, for example; solar, wind, thermal, biomass, geothermal, and hydropower energy.
Solar power is a widely known and used renewable energy source. Advances in technology have allowed solar power to be used in a wide variety of applications. Two types of solar panels are in common use. Thermal solar panels reduce or eliminate the consumption of gas and diesel, and reduce CO₂ emissions. Photovoltaic panels convert solar radiation into an electric current which can power any appliance. The latter is a more complex technology and is generally more expensive to manufacture than thermal panels.
Biomass is the energy source created from organic materials generated through a forced or spontaneous biological process.
Geothermal energy is obtained by harnessing heat from the ground. This type of energy can be used to heat and cool homes. It eliminates dependence on external energy and generates minimum waste. It is also hidden from view as it is placed underground, making it more aesthetically pleasing and easier to incorporate in a design.
Wind turbines are a useful application for areas without immediate conventional power sources, e.g., rural areas with schools and hospitals that need more power. Wind turbines can provide up to 30% of the energy consumed by a household but they are subject to regulations and technical specifications, such as the maximum distance at which the facility is located from the place of consumption and the power required and permitted for each property.
Water recycling systems, such as rainwater tanks, harvest water for multiple purposes. Reusing grey water generated by households is a useful way to avoid wasting drinking water.
Hydropower, also known as water power, is the use of falling or fast-running water to produce electricity or to power machines. Hydropower is an attractive alternative to fossil fuels as it does not directly produce carbon dioxide or other atmospheric pollutants and it provides a relatively consistent source of power.
== Passive systems ==
Buildings that integrate passive energy systems (bioclimatic buildings) are heated using non-mechanical methods, thereby optimizing natural resources.
Passive daylighting involves positioning and orienting a building to make use of sunlight throughout the year. Sunlight striking high-thermal-mass building materials such as concrete is stored as heat, which can later be released to warm a room.
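The heat stored in thermal mass can be sketched with the standard sensible-heat relation Q = m·c·ΔT. The slab mass, specific heat, and temperature swing below are illustrative assumptions, not figures from this article:

```python
def stored_heat_joules(mass_kg: float, specific_heat: float, delta_t_k: float) -> float:
    """Sensible heat (J) stored when a material warms by delta_t_k kelvin.

    specific_heat is in J/(kg*K); concrete is commonly quoted near 880.
    """
    return mass_kg * specific_heat * delta_t_k

# A 2-tonne concrete slab warming 5 K over a sunny day:
q = stored_heat_joules(2000.0, 880.0, 5.0)
print(q)          # 8800000.0 joules
print(q / 3.6e6)  # ≈ 2.4 kWh, released back as the room cools at night
```

This back-of-envelope figure shows why massive floors and walls, rather than lightweight finishes, are what make passive solar heating work.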
Green roofs are roofs that are partially or completely covered with plants or other vegetation. Green roofs are passive systems in that they create insulation that helps regulate the building's temperature. They also retain water, providing a water recycling system, and can provide soundproofing.
== History ==
1971 Ian McHarg, in his book Design with Nature, popularized a system of analyzing the layers of a site in order to compile a complete understanding of the qualitative attributes of a place. McHarg gave every qualitative aspect of the site a layer, such as the history, hydrology, topography, vegetation, etc. This system became the foundation of today's Geographic Information Systems (GIS), a ubiquitous tool used in the practice of ecological landscape design.
1978 Permaculture. Bill Mollison and David Holmgren coin the phrase for a system of designing regenerative human ecosystems (founded in the work of Fukuoka, Yeoman, Smith, and others).
1994 David Orr, in his book Earth in Mind: On Education, Environment, and the Human Prospect, compiled a series of essays on "ecological design intelligence" and its power to create healthy, durable, resilient, just, and prosperous communities.
1994 Canadian biologists John Todd and Nancy Jack Todd, in their book From Eco-Cities to Living Machines, describe the precepts of ecological design.
2000 Ecosa Institute begins offering an Ecological Design Certificate, teaching designers to design with nature.
2004 Fritjof Capra, in his book The Hidden Connections: A Science for Sustainable Living, wrote a primer on the science of living systems and considered the application of new thinking by life scientists to our understanding of social organization.
2004 K. Ausebel compiled personal stories of the world's most innovative ecological designers in Nature's Operating Instructions.
== Ecodesign research ==
Ecodesign research focuses primarily on barriers to implementation, ecodesign tools and methods, and the intersection of ecodesign with other research disciplines.
Several review articles provide an overview of the evolution and current state of ecodesign research.
== See also ==
== Notes and references ==
== Bibliography ==
Lacoste, R., Robiolle, M., & Vital, X. (2011): Ecodesign of Electronic Devices. Dunod, France.
McAloone, T. C. & Bey, N. (2009): Environmental Improvement through Product Development – A Guide. Danish EPA, Copenhagen, Denmark. ISBN 978-87-7052-950-1. 46 pages.
Lindahl, M. (2003): "Designer's utilization of DfE methods". Proceedings of the 1st International Workshop on Sustainable Consumption, Tokyo, Japan. The Society of Non-Traditional Technology (SNTT) and Research Center for Life Cycle Assessment (AIST).
Wimmer, W., Züst, R., & Lee, K.-M. (2004): Ecodesign Implementation – A Systematic Guidance on Integrating Environmental Considerations into Product Development. Springer, Dordrecht.
Charter, M. & Tischner, U. (2001): Sustainable Solutions: Developing Products and Services for the Future. Greenleaf, Sheffield.
ISO TC 207/WG3
ISO/TR 14062
The Journal of Design History: Environmentally Conscious Design and Inverse Manufacturing, 2005. EcoDesign 2005, 4th International Symposium.
Shedroff, N. (2010): "Design Is the Problem: The Future of Design Must Be Sustainable". The Design Journal, Vol. 13, No. 1, March 2010.
Walton, S.: Eco Deco.
Paredes Benitez, C. & Sanchez Vidiella, A.: Small Eco Houses – Living Green in Style.
== Further reading ==
From Bauhaus to Ecohouse: A History of Ecological Design. By Peder Anker, Published by Louisiana State University Press, 2010. ISBN 0-8071-3551-8.
Ecological Design. By Sim Van der Ryn, Stuart Cowan, Published by Island Press, 2007. ISBN 978-1-59726-141-8 (2nd ed.; 1st ed. 1996)
Ignorance and Surprise: Science, Society, and Ecological Design. By Matthias Gross, Published by MIT Press, 2010. ISBN 0-262-01348-7
== External links ==
Sustainable Design & Development Resource Guide
The European Commission's website on Ecodesign activities and related legislation including minimum requirements for energy using products
The European Commission's Directory of LCA and Ecodesign services, tools and databases
The European Commission's ELCD core database with Ecoprofiles (free of charge)
Environmental Effect Analysis (EEA) – Principles and structure
EIME, the ecodesign methodology of the electrical and electronic industry
4E, IEA Implementing Agreement on Efficient Electrical End-Use Equipment
Ergonomics, also known as human factors or human factors engineering (HFE), is the application of psychological and physiological principles to the engineering and design of products, processes, and systems. Primary goals of human factors engineering are to reduce human error, increase productivity and system availability, and enhance safety, health and comfort with a specific focus on the interaction between the human and equipment.
The field is a combination of numerous disciplines, such as psychology, sociology, engineering, biomechanics, industrial design, physiology, anthropometry, interaction design, visual design, user experience, and user interface design. Human factors research employs methods and approaches from these and other knowledge disciplines to study human behavior and generate data relevant to previously stated goals. In studying and sharing learning on the design of equipment, devices, and processes that fit the human body and its cognitive abilities, the two terms, "human factors" and "ergonomics", are essentially synonymous as to their referent and meaning in current literature.
The International Ergonomics Association defines ergonomics or human factors as follows:
Ergonomics (or human factors) is the scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles, data and methods to design to optimize human well-being and overall system performance.
Human factors engineering is relevant in the design of such things as safe furniture and easy-to-use interfaces to machines and equipment. Proper ergonomic design is necessary to prevent repetitive strain injuries and other musculoskeletal disorders, which can develop over time and can lead to long-term disability. Human factors and ergonomics are concerned with the "fit" between the user, equipment, and environment or "fitting a job to a person" or "fitting the task to the man". It accounts for the user's capabilities and limitations in seeking to ensure that tasks, functions, information, and the environment suit that user.
To assess the fit between a person and the technology being used, human factors specialists or ergonomists consider the job (activity) being performed and the demands on the user; the equipment used (its size, shape, and how appropriate it is for the task); and the information used (how it is presented, accessed, and modified). Ergonomics draws on many disciplines in its study of humans and their environments, including anthropometry, biomechanics, mechanical engineering, industrial engineering, industrial design, information design, kinesiology, physiology, cognitive psychology, industrial and organizational psychology, and space psychology.
== Etymology ==
The term ergonomics (from the Greek ἔργον, meaning "work", and νόμος, meaning "natural law") first entered the modern lexicon when Polish scientist Wojciech Jastrzębowski used the word in his 1857 article Rys ergonomji czyli nauki o pracy, opartej na prawdach poczerpniętych z Nauki Przyrody (The Outline of Ergonomics; i.e. Science of Work, Based on the Truths Taken from the Natural Science). The French scholar Jean-Gustave Courcelle-Seneuil, apparently without knowledge of Jastrzębowski's article, used the word with a slightly different meaning in 1858. The introduction of the term to the English lexicon is widely attributed to British psychologist Hywel Murrell, at the 1949 meeting at the UK's Admiralty, which led to the foundation of The Ergonomics Society. He used it to encompass the studies in which he had been engaged during and after World War II.
The expression human factors is a predominantly North American term which has been adopted to emphasize the application of the same methods to non-work-related situations. A "human factor" is a physical or cognitive property of an individual or social behavior specific to humans that may influence the functioning of technological systems. The terms "human factors" and "ergonomics" are essentially synonymous.
== Domains of specialization ==
According to the International Ergonomics Association, within the discipline of ergonomics there exist domains of specialization. These comprise three main fields of research: physical, cognitive, and organizational ergonomics.
There are many specializations within these broad categories. Specializations in the field of physical ergonomics may include visual ergonomics. Specializations within the field of cognitive ergonomics may include usability, human–computer interaction, and user experience engineering.
Some specializations may cut across these domains: environmental ergonomics is concerned with human interaction with the environment as characterized by climate, temperature, pressure, vibration, and light. The emerging field of human factors in highway safety uses human factors principles to understand the actions and capabilities of road users (car and truck drivers, pedestrians, cyclists, etc.) and to use this knowledge to design roads and streets that reduce traffic collisions. Driver error is listed as a contributing factor in 44% of fatal collisions in the United States, so a topic of particular interest is how road users gather and process information about the road and its environment, and how to assist them in making appropriate decisions.
New terms are being generated all the time. For instance, "user trial engineer" may refer to a human factors engineering professional who specializes in user trials. Although the names change, human factors professionals apply an understanding of human factors to the design of equipment, systems and working methods to improve comfort, health, safety, and productivity.
=== Physical ergonomics ===
Physical ergonomics is concerned with human anatomy and with anthropometric, physiological, and biomechanical characteristics as they relate to physical activity. Physical ergonomic principles have been widely used in the design of both consumer and industrial products, optimizing performance and preventing or treating work-related disorders by reducing the mechanisms behind mechanically induced acute and chronic musculoskeletal injuries. Risk factors such as localized mechanical pressure, force, and posture in a sedentary office environment can lead to injuries attributed to the occupational environment. Physical ergonomics is important to those diagnosed with physiological ailments or disorders such as arthritis (both chronic and temporary) or carpal tunnel syndrome. Pressure that is insignificant or imperceptible to those unaffected by these disorders may be very painful, or render a device unusable, for those who are. Many ergonomically designed products are also used or recommended to treat or prevent such disorders, and to treat pressure-related chronic pain.
One of the most prevalent types of work-related injury is musculoskeletal disorder. Work-related musculoskeletal disorders (WRMDs) result in persistent pain, loss of functional capacity, and work disability, but their initial diagnosis is difficult because it is based mainly on complaints of pain and other symptoms. Every year, 1.8 million U.S. workers experience WRMDs, and nearly 600,000 of the injuries are serious enough to cause workers to miss work. Certain jobs or work conditions cause a higher rate of worker complaints of undue strain, localized fatigue, discomfort, or pain that does not go away after overnight rest. These jobs often involve repetitive and forceful exertions; frequent, heavy, or overhead lifts; awkward work positions; or use of vibrating equipment. The Occupational Safety and Health Administration (OSHA) has found substantial evidence that ergonomics programs can cut workers' compensation costs, increase productivity, and decrease employee turnover. Mitigation can involve both short-term and long-term measures, including awareness training, positioning of the body, furniture and equipment, and ergonomic exercises. Sit-stand stations and computer accessories that provide soft surfaces for resting the palm, as well as split keyboards, are recommended. Additionally, resources within the HR department can be allocated to provide assessments to employees to ensure the above criteria are met. It is therefore important to gather data to identify the jobs or work conditions that are most problematic, using sources such as injury and illness logs, medical records, and job analyses.
Innovative workstations being tested include sit-stand desks, height-adjustable desks, treadmill desks, pedal devices, and cycle ergometers. In multiple studies these new workstations resulted in decreased waist circumference and improved psychological well-being. However, a significant number of additional studies have seen no marked improvement in health outcomes.
With the emergence of collaborative robots and smart systems in manufacturing environments, the artificial agents can be used to improve physical ergonomics of human co-workers. For example, during human–robot collaboration the robot can use biomechanical models of the human co-worker in order to adjust the working configuration and account for various ergonomic metrics, such as human posture, joint torques, arm manipulability and muscle fatigue. The ergonomic suitability of the shared workspace with respect to these metrics can also be displayed to the human with workspace maps through visual interfaces.
=== Cognitive ergonomics ===
Cognitive ergonomics is concerned with mental processes, such as perception, emotion, memory, reasoning, and motor response, as they affect interactions among humans and other elements of a system. Relevant topics include mental workload, decision-making, skilled performance, human reliability, work stress and training as these may relate to human–system and human–computer interaction design.
=== Organizational ergonomics and safety culture ===
Organizational ergonomics is concerned with the optimization of socio-technical systems, including their organizational structures, policies, and processes. Relevant topics include human communication successes or failures in adaptation to other system elements, crew resource management, work design, work systems, design of working times, teamwork, participatory ergonomics, community ergonomics, cooperative work, new work programs, virtual organizations, remote work, and quality management. Safety culture within an organization of engineers and technicians has been linked to engineering safety with cultural dimensions including power distance and ambiguity tolerance. Low power distance has been shown to be more conducive to a safety culture. Organizations with cultures of concealment or lack of empathy have been shown to have poor safety culture.
== History ==
=== Ancient societies ===
Some have stated that human ergonomics began with Australopithecus prometheus (also known as "Little Foot"), a primate who created handheld tools out of different types of stone, clearly distinguishing between tools based on their ability to perform designated tasks. The foundations of the science of ergonomics appear to have been laid within the context of the culture of Ancient Greece. A good deal of evidence indicates that Greek civilization in the 5th century BC used ergonomic principles in the design of their tools, jobs, and workplaces. One outstanding example of this can be found in the description Hippocrates gave of how a surgeon's workplace should be designed and how the tools he uses should be arranged. The archaeological record also shows that the early Egyptian dynasties made tools and household equipment that illustrated ergonomic principles.
=== Industrial societies ===
Bernardino Ramazzini was one of the first people to systematically study the illness that resulted from work, earning himself the nickname "father of occupational medicine". In the late 1600s and early 1700s Ramazzini visited many worksites where he documented the movements of laborers and spoke to them about their ailments. He then published De Morbis Artificum Diatriba (Latin for "Diseases of Workers") which detailed occupations, common illnesses, and remedies. In the 19th century, Frederick Winslow Taylor pioneered the "scientific management" method, which proposed a way to find the optimum method of carrying out a given task. Taylor found that he could, for example, triple the amount of coal that workers were shoveling by incrementally reducing the size and weight of coal shovels until the fastest shoveling rate was reached. Frank and Lillian Gilbreth expanded Taylor's methods in the early 1900s to develop the "time and motion study". They aimed to improve efficiency by eliminating unnecessary steps and actions. By applying this approach, the Gilbreths reduced the number of motions in bricklaying from 18 to 4.5, allowing bricklayers to increase their productivity from 120 to 350 bricks per hour.
However, this approach was rejected by Russian researchers, who focused on the well-being of the worker. At the First Conference on Scientific Organization of Labour (1921), Vladimir Bekhterev and Vladimir Nikolayevich Myasishchev criticised Taylorism. Bekhterev argued that "The ultimate ideal of the labour problem is not in it [Taylorism], but is in such organisation of the labour process that would yield a maximum of efficiency coupled with a minimum of health hazards, absence of fatigue and a guarantee of the sound health and all round personal development of the working people." Myasishchev rejected Frederick Taylor's proposal to turn man into a machine, regarding dull, monotonous work as a temporary necessity until a corresponding machine could be developed. He went on to propose a new discipline of "ergology" to study work as an integral part of the re-organisation of work. The concept was taken up by Myasishchev's mentor, Bekhterev, in his final report on the conference, merely changing the name to "ergonology".
=== Aviation ===
Prior to World War I, the focus of aviation psychology was on the aviator himself, but the war shifted the focus onto the aircraft, in particular the design of controls and displays and the effects of altitude and environmental factors on the pilot. The war saw the emergence of aeromedical research and the need for testing and measurement methods. Studies of driver behavior also started gaining momentum during this period, as Henry Ford began providing millions of Americans with automobiles. By the end of World War I, two aeronautical labs had been established, one at Brooks Air Force Base, Texas, and the other at Wright-Patterson Air Force Base outside Dayton, Ohio. Many tests were conducted to determine which characteristics differentiated successful pilots from unsuccessful ones. During the early 1930s, Edwin Link developed the first flight simulator, and the trend continued with ever more sophisticated simulators and test equipment. Another significant development was in the civilian sector, where the effects of illumination on worker productivity were examined. This led to the identification of the Hawthorne effect, which suggested that motivational factors could significantly influence human performance.
World War II marked the development of new and complex machines and weaponry, which made new demands on operators' cognition. It was no longer possible to adopt the Tayloristic principle of matching individuals to preexisting jobs; the design of equipment now had to take into account human limitations and take advantage of human capabilities. The decision-making, attention, situational awareness, and hand-eye coordination of the machine's operator became key to the success or failure of a task. Substantial research was conducted to determine the human capabilities and limitations that equipment design had to accommodate, much of it picking up where the interwar aeromedical research had left off. An example is the study by Fitts and Jones (1947), who investigated the most effective configuration of control knobs for aircraft cockpits.
Much of this research carried over to other equipment, with the aim of making controls and displays easier for operators to use. The entry of the terms "human factors" and "ergonomics" into the modern lexicon dates from this period. It was observed that fully functional aircraft flown by the best-trained pilots still crashed. In 1943 Alphonse Chapanis, a lieutenant in the U.S. Army, showed that this so-called "pilot error" could be greatly reduced when more logical and differentiable controls replaced confusing designs in airplane cockpits. After the war, the Army Air Force published 19 volumes summarizing what had been established from wartime research.
In the decades since World War II, human factors has continued to flourish and diversify. Work by Elias Porter and others within the RAND Corporation after WWII extended the conception of human factors. "As the thinking progressed, a new concept developed—that it was possible to view an organization such as an air-defense, man-machine system as a single organism and that it was possible to study the behavior of such an organism. It was the climate for a breakthrough." In the initial 20 years after World War II, most activities were done by the "founding fathers": Alphonse Chapanis, Paul Fitts, and Small.
=== Cold War ===
The beginning of the Cold War led to a major expansion of defense-supported research laboratories, and many labs established during WWII expanded. Most of the research following the war was military-sponsored, with large sums of money granted to universities to conduct it. The scope of the research also broadened from small equipment to entire workstations and systems. Concurrently, many opportunities opened up in civilian industry, and the focus shifted from research to participation, through advice to engineers in the design of equipment. After 1965, the discipline matured, and the field has since expanded with the development of the computer and computer applications.
The Space Age created new human factors issues such as weightlessness and extreme g-forces. Tolerance of the harsh environment of space and its effects on the mind and body were widely studied.
=== Information age ===
The dawn of the Information Age has resulted in the related field of human–computer interaction (HCI). Likewise, the growing demand for and competition among consumer goods and electronics has resulted in more companies and industries including human factors in their product design. Using advanced technologies in human kinetics, body-mapping, movement patterns and heat zones, companies are able to manufacture purpose-specific garments, including full body suits, jerseys, shorts, shoes, and even underwear.
== Organizations ==
Formed in 1946 in the UK, the oldest professional body for human factors specialists and ergonomists is The Chartered Institute of Ergonomics and Human Factors, formerly known as the Institute of Ergonomics and Human Factors and, before that, The Ergonomics Society.
The Human Factors and Ergonomics Society (HFES) was founded in 1957. The Society's mission is to promote the discovery and exchange of knowledge concerning the characteristics of human beings that are applicable to the design of systems and devices of all kinds.
The Association of Canadian Ergonomists - l'Association canadienne d'ergonomie (ACE) was founded in 1968. It was originally named the Human Factors Association of Canada (HFAC), with ACE (in French) added in 1984, and the consistent, bilingual title adopted in 1999. According to its 2017 mission statement, ACE unites and advances the knowledge and skills of ergonomics and human factors practitioners to optimise human and organisational well-being.
The International Ergonomics Association (IEA) is a federation of ergonomics and human factors societies from around the world. The mission of the IEA is to elaborate and advance ergonomics science and practice, and to improve the quality of life by expanding its scope of application and contribution to society. As of September 2008, the International Ergonomics Association has 46 federated societies and 2 affiliated societies.
The Human Factors Transforming Healthcare (HFTH) is an international network of HF practitioners who are embedded within hospitals and health systems. The goal of the network is to provide resources for human factors practitioners and healthcare organizations looking to apply HF principles successfully to improve patient care and provider performance. The network also serves as a collaborative platform for human factors practitioners, students, faculty, industry partners, and those curious about human factors in healthcare.
=== Related organizations ===
The Institute of Occupational Medicine (IOM) was founded by the coal industry in 1969. From the outset the IOM employed an ergonomics staff to apply ergonomics principles to the design of mining machinery and environments. To this day, the IOM continues ergonomics activities, especially in the fields of musculoskeletal disorders, heat stress, and the ergonomics of personal protective equipment (PPE). As in much of occupational ergonomics, the demands and requirements of an ageing UK workforce are a growing concern and interest to IOM ergonomists.
The International Society of Automotive Engineers (SAE) is a professional organization for mobility engineering professionals in the aerospace, automotive, and commercial vehicle industries. The Society is a standards development organization for the engineering of powered vehicles of all kinds, including cars, trucks, boats, aircraft, and others. The Society of Automotive Engineers has established a number of standards used in the automotive industry and elsewhere. It encourages the design of vehicles in accordance with established human factors principles. It is one of the most influential organizations with respect to ergonomics work in automotive design. This society regularly holds conferences which address topics spanning all aspects of human factors and ergonomics.
== Practitioners ==
Human factors practitioners come from a variety of backgrounds, though predominantly they are psychologists (from the various subfields of industrial and organizational psychology, engineering psychology, cognitive psychology, perceptual psychology, applied psychology, and experimental psychology) and physiologists. Designers (industrial, interaction, and graphic), anthropologists, technical communication scholars and computer scientists also contribute. Typically, an ergonomist will have an undergraduate degree in psychology, engineering, design or health sciences, and usually a master's degree or doctoral degree in a related discipline. Though some practitioners enter the field of human factors from other disciplines, both M.S. and PhD degrees in Human Factors Engineering are available from several universities worldwide.
=== Sedentary workplace ===
Contemporary offices did not exist until the 1830s; Wojciech Jastrzębowski's seminal work on ergonomics followed in 1857, and the first published study of posture appeared in 1955.
As the American workforce began to shift towards sedentary employment, the prevalence of work-related musculoskeletal disorders, cognitive issues, and related problems began to rise. In 1900, 41% of the US workforce was employed in agriculture, but by 2000 that had dropped to 1.9%. This coincides with growth in desk-based employment (25% of all employment in 2000) and with the surveillance of non-fatal workplace injuries by OSHA and the Bureau of Labor Statistics beginning in 1971. Sedentary behavior involves an energy expenditure of only 1.0–1.5 metabolic equivalents (METs) and occurs in a sitting or reclining position. Adults older than 50 years report spending more time sedentary, and for adults older than 65 years this is often 80% of their waking time. Multiple studies show a dose-response relationship between sedentary time and all-cause mortality, with an increase of 3% in mortality per additional sedentary hour each day. High quantities of sedentary time without breaks are correlated with a higher risk of chronic disease, obesity, cardiovascular disease, type 2 diabetes, and cancer.
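The quoted dose-response figure can be turned into a one-line sketch. Treating the 3% increase as a multiplicative per-hour factor is an illustrative modeling assumption on my part, not a claim from the underlying studies:

```python
def relative_mortality_risk(extra_sedentary_hours: float,
                            per_hour_increase: float = 0.03) -> float:
    """Relative all-cause mortality risk versus baseline, assuming the
    quoted per-hour increase compounds multiplicatively."""
    return (1.0 + per_hour_increase) ** extra_sedentary_hours

# Four extra sedentary hours per day under this assumption:
print(round(relative_mortality_risk(4), 3))  # 1.126, i.e. roughly 12.6% higher risk
```

Whether the real relationship compounds, adds, or flattens at high exposures is exactly what the dose-response studies attempt to establish.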
Currently, a large proportion of the overall workforce is employed in low-physical-activity occupations. Sedentary behavior, such as spending long periods of time in seated positions, poses a serious risk of injury and additional health problems. Unfortunately, even though some workplaces make an effort to provide a well-designed environment for sedentary employees, any employee who does a large amount of sitting will likely experience discomfort.
There are existing conditions that predispose both individuals and populations to an increased prevalence of sedentary lifestyles, including socioeconomic determinants, education levels, occupation, living environment, and age (as mentioned above). A study published in the Iranian Journal of Public Health examined socioeconomic factors and sedentary lifestyle effects for individuals in a working community. The study concluded that individuals living in low-income environments were more inclined to sedentary behavior than those of high socioeconomic status. Individuals with less education are also considered a high-risk group for sedentary lifestyles; however, each community is different and has different resources available that may vary this risk. Larger worksites are often associated with increased occupational sitting. Those who work in business and office jobs are typically more exposed to sitting and sedentary behavior in the workplace, and workers in full-time occupations with schedule flexibility are also part of this demographic and are more likely to sit often throughout their workday.
=== Policy implementation ===
Obstacles to providing better ergonomic features to sedentary employees include cost, time, and effort for both companies and employees. The evidence above helps establish the importance of ergonomics in a sedentary workplace, yet what is missing is enforcement and policy implementation. As the modernized workplace becomes more technology-based and more jobs become primarily seated, there is a growing need to prevent chronic injuries and pain. This is becoming easier as research shows ergonomic tools saving companies money by limiting the number of days missed from work and workers' compensation cases. The way to ensure that corporations prioritize these health outcomes for their employees is through policy and implementation.
In the United States, no nationwide policies are currently in place; however, a handful of large companies and states have adopted policies to ensure the safety of all workers. For example, the Nevada state risk management department has established a set of ground rules covering both agencies' responsibilities and employees' responsibilities. The agency responsibilities include evaluating workstations, using risk management resources when necessary, and keeping OSHA records.
== Methods ==
Until recently, methods used to evaluate human factors and ergonomics ranged from simple questionnaires to more complex and expensive usability labs. Some of the more common human factors methods are listed below:
Ethnographic analysis: Using methods derived from ethnography, this process focuses on observing the uses of technology in a practical environment. It is a qualitative and observational method that focuses on "real-world" experience and pressures, and the usage of technology or environments in the workplace. The process is best used early in the design process.
Focus groups: Another form of qualitative research, in which a facilitator elicits opinions about the technology or process under investigation, either in one-to-one interviews or in group sessions. Focus groups can yield a large quantity of deep qualitative data, though the small sample sizes make them subject to a higher degree of individual bias. They can be used at any point in the design process, since their value depends largely on the exact questions pursued and the structure of the group, but they can be extremely costly.
Iterative design: Also known as prototyping, the iterative design process seeks to involve users at several stages of design, to correct problems as they emerge. As prototypes emerge from the design process, these are subjected to other forms of analysis as outlined in this article, and the results are then taken and incorporated into the new design. Trends among users are analyzed, and products redesigned. This can become a costly process, and needs to be done as soon as possible in the design process before designs become too concrete.
Meta-analysis: A supplementary technique used to examine a wide body of already existing data or literature to derive trends or form hypotheses to aid design decisions. As part of a literature survey, a meta-analysis can be performed to discern a collective trend from individual variables.
Subjects-in-tandem: Two subjects are asked to work concurrently on a series of tasks while vocalizing their analytical observations. The technique is also known as "Co-Discovery" as participants tend to feed off of each other's comments to generate a richer set of observations than is often possible with the participants separately. This is observed by the researcher, and can be used to discover usability difficulties. This process is usually recorded.
Surveys and questionnaires: A commonly used technique outside of human factors as well, surveys and questionnaires have an advantage in that they can be administered to a large group of people for relatively low cost, enabling the researcher to gain a large amount of data. The validity of the data obtained is, however, always in question, as the questions must be written and interpreted correctly, and are, by definition, subjective. Those who actually respond are in effect self-selecting as well, widening the gap between the sample and the population further.
Task analysis: A process with roots in activity theory, task analysis is a way of systematically describing human interaction with a system or process to understand how to match the demands of the system or process to human capabilities. The complexity of this process is generally proportional to the complexity of the task being analyzed, and so can vary in cost and time involvement. It is a qualitative and observational process. Best used early in the design process.
Human performance modeling: A method of quantifying human behavior, cognition, and processes; a tool used by human factors researchers and practitioners for both the analysis of human function and for the development of systems designed for optimal user experience and interaction.
Think aloud protocol: Also known as "concurrent verbal protocol", this is the process of asking a user to execute a series of tasks or use technology, while continuously verbalizing their thoughts so that a researcher can gain insights as to the users' analytical process. Can be useful for finding design flaws that do not affect task performance, but may have a negative cognitive effect on the user. Also useful for utilizing experts to better understand procedural knowledge of the task in question. Less expensive than focus groups, but tends to be more specific and subjective.
User analysis: This process is based around designing for the attributes of the intended user or operator, establishing the characteristics that define them and creating a persona for the user. Best done at the outset of the design process, a user analysis will attempt to predict the most common users and the characteristics they would be assumed to have in common. This can be problematic if the design concept does not match the actual user, or if the identified characteristics are too vague to support clear design decisions. This process is, however, usually quite inexpensive and commonly used.
"Wizard of Oz": This is a comparatively uncommon technique but has seen some use in mobile devices. Based upon the Wizard of Oz experiment, this technique involves an operator who remotely controls the operation of a device to imitate the response of an actual computer program. It has the advantage of producing a highly changeable set of reactions, but can be quite costly and difficult to undertake.
Methods analysis is the process of studying the tasks a worker completes using a step-by-step investigation. Each task is broken down into smaller steps until each motion the worker performs is described. Doing so reveals exactly where repetitive or straining tasks occur.
Time studies determine the time required for a worker to complete each task. Time studies are often used to analyze cyclical jobs. They are considered "event based" studies because time measurements are triggered by the occurrence of predetermined events.
Work sampling is a method in which the job is sampled at random intervals to determine the proportion of total time spent on a particular task. It provides insight into how often workers are performing tasks which might cause strain on their bodies.
Predetermined time systems are methods for analyzing the time spent by workers on a particular task. One of the most widely used predetermined time systems is Methods-Time Measurement. Other common work measurement systems include MODAPTS and MOST. Industry-specific applications based on PTS include Seweasy, MODAPTS and GSD.
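Of the work-measurement methods above, work sampling in particular lends itself to a short simulation. In the following sketch the shift schedule, task names, and observation count are all invented for illustration, and the snapshot-based estimator is a simplification of the field procedure:

```python
import random

def work_sampling(schedule, n_obs, seed=0):
    """Estimate the fraction of a shift spent on each task by observing
    the worker at n_obs randomly chosen instants.  `schedule` maps each
    minute of the shift to the task performed during that minute."""
    rng = random.Random(seed)
    minutes = list(schedule)
    counts = {}
    for _ in range(n_obs):
        task = schedule[rng.choice(minutes)]
        counts[task] = counts.get(task, 0) + 1
    # Proportion estimate and binomial standard error for each task.
    return {task: (k / n_obs, (k / n_obs * (1 - k / n_obs) / n_obs) ** 0.5)
            for task, k in counts.items()}

# Hypothetical 480-minute shift: the first 120 minutes are lifting,
# the rest assembly, so the true lifting proportion is 0.25.
shift = {m: ("lifting" if m < 120 else "assembly") for m in range(480)}
estimates = work_sampling(shift, n_obs=500)
p_lift, se_lift = estimates["lifting"]
```

The standard error shows the trade-off behind the method: accuracy improves only with the square root of the number of observations, which is why work sampling is suited to estimating rough proportions rather than precise task times.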
Cognitive walkthrough: This method is a usability inspection method in which the evaluators can apply user perspective to task scenarios to identify design problems. As applied to macroergonomics, evaluators are able to analyze the usability of work system designs to identify how well a work system is organized and how well the workflow is integrated.
Kansei method: This is a method that transforms consumers' responses to new products into design specifications. As applied to macroergonomics, this method can translate employees' responses to changes in a work system into design specifications.
High Integration of Technology, Organization, and People: This is a manual procedure done step-by-step to apply technological change to the workplace. It allows managers to be more aware of the human and organizational aspects of their technology plans, allowing them to efficiently integrate technology in these contexts.
Top modeler: This model helps manufacturing companies identify the organizational changes needed when new technologies are being considered for their process.
Computer-integrated Manufacturing, Organization, and People System Design: This model allows for evaluating computer-integrated manufacturing, organization, and people system design based on knowledge of the system.
Anthropotechnology: This method considers analysis and design modification of systems for the efficient transfer of technology from one culture to another.
Systems analysis tool: This is a method to conduct systematic trade-off evaluations of work-system intervention alternatives.
Macroergonomic analysis of structure: This method analyzes the structure of work systems according to their compatibility with unique sociotechnical aspects.
Macroergonomic analysis and design: This method assesses work-system processes by using a ten-step process.
Virtual manufacturing and response surface methodology: This method uses computerized tools and statistical analysis for workstation design.
Computer-aided ergonomics: This method uses computers to solve complex ergonomic problems.
=== Weaknesses ===
Problems related to measures of usability include the fact that measures of learning and retention of how to use an interface are rarely employed and some studies treat measures of how users interact with interfaces as synonymous with quality-in-use, despite an unclear relation.
Although field methods can be extremely useful because they are conducted in the users' natural environment, they have some major limitations to consider. The limitations include:
Usually take more time and resources than other methods
Very high effort in planning, recruiting, and executing compared with other methods
Much longer study periods and therefore requires much goodwill among the participants
Studies are longitudinal in nature; therefore, attrition can become a problem.
== See also ==
ISO 9241
Journal of Occupational Health Psychology
Wojciech Jastrzębowski (1799–1882), a Polish pioneer of ergonomics
Canadian Society for Biomechanics
== References ==
== Further reading ==
=== Books ===
=== Peer-reviewed Journals ===
(Numbers between brackets are the ISI impact factor, followed by the date)
Behavior & Information Technology (0.915, 2008)
Ergonomics (0.747, 2001–2003)
Ergonomics in Design (-)
Applied Ergonomics (1.713, 2015)
Human Factors (1.37, 2015)
International Journal of Industrial Ergonomics (0.395, 2001–2003)
Human Factors and Ergonomics in Manufacturing (0.311, 2001–2003)
Travail Humain (0.260, 2001–2003)
Theoretical Issues in Ergonomics Science (-)
International Journal of Human Factors and Ergonomics (-)
International Journal of Occupational Safety and Ergonomics (-)
== External links ==
Directory of Design Support Methods
Engineering Data Compendium of Human Perception and Performance
Index of Non-Government Standards on Human Engineering...
Index of Government Standards on Human Engineering...
NIOSH Topic Page on Ergonomics and Musculoskeletal Disorders
Office Ergonomics Information from European Agency for Safety and Health at Work
Human Factors Standards & Handbooks from the University of Maryland Department of Mechanical Engineering
Human Factors and Ergonomics Resources
Human Factors Engineering Collection, The University of Alabama in Huntsville Archives and Special Collections
Postage stamp design is the activity of graphic design as applied to postage stamps. Many thousands of designs have been created since a profile bust of Queen Victoria was adopted for the Penny Black in 1840; some designs have been considered very successful, others less so.
A stamp design includes several elements required for it to accomplish its purpose satisfactorily. Most important is the denomination indicating its monetary value, while international agreements require a country name on almost all types of stamps. A graphic design is very nearly universal; in addition to making counterfeits harder to produce and aiding clerks in quick recognition of appropriate postage, postal customers simply expect stamps to carry a design.
== Denomination ==
The fundamental purpose of a stamp is to indicate the prepayment of postage. Since different kinds and sizes of mail normally pay different amounts of postage, the stamps need to carry a value. In a very few cases, the denomination has been omitted; for instance, during the tumults of 1949 China, undenominated stamps were issued, so as to allow the price of a stamp to fluctuate on a daily basis depending on the value of the gold yuan.
The usual form of the denomination is a number, optionally preceded or followed by a currency symbol. Many early stamps wrote the denomination out in words, but the Universal Postal Union later required that stamps on international mail use Arabic numerals, for the benefit of clerks in foreign countries. A number of recent stamps have substituted a textual description of the rate being charged, such as "1st" for first-class letters, or "presorted ZIP+4" to indicate a particular type of bulk mail. Another form of non-numerical denomination is that used for rate change stamps, in which the timing and politics of the rate-setting process is such that the stamps must be printed before the rate is known. In such cases, the preprinted stamps simply state "A", "B", etc., with the equivalent rate being announced just before they go on sale.
Canada also uses a non-numerical denomination, the mark "P" printed over a maple leaf, on its domestic postage rate stamps. The letter "P" stands for "Permanent" which indicates that the stamp is always accepted regardless of the current domestic postal rate. Anytime the domestic rate changes, the permanent value stamps already in circulation continue to be accepted and the new rate applies to the purchase of new permanent value stamps.
Semi-postal stamps are usually denominated with two values, with a "+" between, the first indicating the actual rate, and the second the additional amount to be given to a charity. In a very few cases a country has had a dual currency, and the stamps may depict a value in both currencies.
== Country name ==
The second required element, at least for stamps intended to be used on international mail, is the name of the country. The first postage stamps, those of the United Kingdom, had no name. In 1874 the Universal Postal Union exempted the United Kingdom from its rule which stated that a country's name had to appear on their postage stamps, so a profile of the reigning monarch was all that was required for identification of the UK's stamps. To this day the UK remains the only country not required to name itself on its stamps. For all other UPU members, the name must appear in Latin letters. Many countries using non-Latin alphabets used only those on their early stamps, and they remain difficult for most collectors to identify today.
The name chosen is typically the country's own name for itself, with a modern trend towards using simpler and shorter forms, or abbreviations. For instance, the Republic of South Africa inscribes with "RSA", while Jordan originally used "The Hashemite Kingdom of Jordan", and now just "Jordan". Some countries have multiple allowed forms, from which the designer may choose the most suitable. The name may appear in an adjectival form, as in Poșta Română ("Romanian Post") for Romania. Dependent territories may or may not include the name of the parent country.
== Graphic design ==
The graphic element of a stamp design falls into one of four major categories:
Portrait bust - profile or full-face
Emblem - coat of arms, flag, national symbol, posthorn, etc.
Numeric - a design built around the numeral of value
Pictorial
The use of portrait busts (of the ruler or other significant person) or emblems was typical of the first stamps, by extension from currency, which was the closest model available to the early stamp designers.
Usage patterns have varied considerably; for 60 years, from 1840 to 1900, all British stamps used exactly the same portrait bust of Victoria, enclosed in a dizzying variety of frames, while Spain periodically updated the image of Alfonso XIII as he grew from child to adult. Norway has issued stamps with the same posthorn motif for over a century, changing only the details from time to time as printing technology improves, while the US has placed the flag of the United States into a wide variety of settings since first using it on a stamp in the 1950s.
While numeral designs are eminently practical, in that they emphasize the most important element of the stamp, they are the exception rather than the rule.
By far the greatest variety of stamp design seen today is in pictorial issues. The choice of image is nearly unlimited, ranging from plants and animals, to figures from history, to landscapes, to original artwork. Images may represent real-world objects, or be allegories or abstract designs.
The choice of pictorial designs is governed by a combination of anniversaries, required annual issues (such as Christmas stamps), postal rate changes, exhaustion of existing stamp stocks, and popular demand. Since postal administrations are either a branch of government or an official monopoly under governmental supervision, the government has ultimate control over the choice of designs. This means that the designs tend to depict a country as the government would like it to be perceived, rather than as it really is. The Soviet Union issued thousands of stamps extolling the successes of communism, even as it was falling apart, while in the US the only contemporary stamp hinting at the unrest of the 1960s is an issue exhorting Americans to support their local police.
In some cases, overt political pressure has resulted in a backlash; a famous example is that of the US in the late 1940s, when the US Congress had direct authority over stamp design, and a large number of issues were put out merely to please a representative's constituency or industry lobbyists. The ensuing uproar led to the formation of an independent Citizens' Stamp Advisory Committee that reviews and chooses from hundreds of proposals received each year. Occasionally the public is polled for its choice of design, as with the US Elvis stamp of 1993, or some issues of the Celebrate the Century series.
Many countries have specific rules governing the choice of designs or design elements. Stamps of the UK must depict the sovereign (typically as a silhouette), while stamps of the US may not visibly depict any person who has been dead for less than 10 years, except for ex-Presidents, who may appear on a stamp one year after their demise. The choice of postage stamp color may be specified, acting as a sort of color code to different rates.
Most countries issue commemorative issues from time to time, perhaps to celebrate some special event, with designs relating to the event. While they are legitimate postage stamps, and often used for routine post, they are intended to appeal particularly to stamp collectors. Stamps that are collected without being used are paid for, but the purchaser chooses not to use the postal service purchased, leaving 100% clear profit. First day covers, often containing more stamps than are required for postage, are an additional source of revenue. This source of money is not inexhaustible, as excessive stamp issues go unpurchased.
Some countries, usually poorer ones, produce many special issues intended purely for collectors from other countries. These stamps are designed for visual appeal, with attractive brightly coloured designs on interesting topics, often large and of unusual shape. Themes have included space-related subjects from a country with no space program, polar animals from a country on the equator, Western rock stars from a conservative Muslim country, and so forth. International organizations of philatelists discourage the practice, not wanting collectors to be discouraged by floods of stamps which will never have any rarity value. See stamp program for more detail.
== Textual elements ==
Nearly all stamps have some amount of text embedded in their design. In addition to the expected denomination and country name, textual elements may include a statement of purpose ("postage", "official mail", etc.), a plate number, the name of a person being portrayed, the occasion being commemorated, the year of stamp issue, and national mottoes.
Occasionally designs use text as their primary design element; for instance, a series of US stamps from the 1970s featured quotations from the United States Declaration of Independence. In general however, text has come to be used more sparingly in recent years.
Countries with multiple languages and multiple scripts may need to write the material multiple times. Labuan is an early example; more recently, stamps of Israel include its name in Hebrew, Latin, and Arabic characters.
In addition to text woven into the description, stamps may also have inscriptions in the outside margin. These are almost always at the bottom, and are usually the name of the printer and/or designer. Occasionally a textual description of the design is found in the margin, while in recent years, the lower left margin has become a common place to include the year of issue. Philatelists count changes in these marginal inscriptions as distinct types of stamps.
== Hidden elements and "secret marks" ==
Sometimes designers include tiny elements into a design, sometimes at the request of the stamp-issuing authority, sometimes on their own. Stamps may have a year or name worked into a design, while the US stamp honoring Rabbi Bernard Revel has a minute Star of David visible in his beard.
Secret marks are small design alterations added to distinguish printings unambiguously. These usually take the form of small lines or marks added to clear areas of a design. Chinese stamps of the 1940s have secret marks in the form of slightly altered characters, where two arms might be changed to touch, when previously they were separate.
== Shape and size ==
The usual shape of a postage stamp is a rectangle, this being an efficient way to pack stamps on a sheet. A rectangle wider than tall is called a "horizontal design", while taller than wide is a "vertical design".
A number of additional shapes have been used, including triangles, rhombuses, octagons, circles, and various freeform shapes including heart shapes, and even a banana shaped stamp issued by Tonga from 1969 to 1985.
The usual size ranges from 10 to 30 mm in each direction, experience having shown this to be the easiest to handle. Many countries use only a limited selection of dimensions, to simplify automated machinery that handles stamps.
One of the world's smallest stamps is the square quarter stamp issued by Mecklenburg in 1856, measuring 9×9 mm. Currently, the smallest known stamp is the 7×7.5 mm stamp printed in 1992 in Békéscsaba, Hungary, by the Kner printing house, to commemorate the 500th anniversary of the discovery of America.
The largest stamps in history, issued in the United States from 1865 and measuring 52 by 95 millimeters, were used exclusively for mailing newspapers.
== Design evolution ==
Stamp design has undergone a gradual process of evolution, traceable both to advances in printing technology and general changes in taste. Design "fads" may also be observed, where a number of countries tend to imitate each other. This may be driven by printing houses, many of which design and print stamps for multiple countries.
For instance, although multi-color printing was always possible, and may be seen on the earliest stamps of Switzerland, the process was slow and expensive, and most stamps were in one or two colors until the 1960s.
From time to time postal administrations also try experiments. For instance, the US tried issuing very small stamps during the 1970s, as a cost savings measure. They were extremely unpopular, and the experiment was abandoned.
While modern tastes tend to favor simpler designs, some countries have also put out "retro" designs, using modern techniques to mimic the more elaborate designs of the past, perhaps even with anachronistic elements. A 2004 example is the Lewis and Clark stamps of the US, whose frames are classic 19th-century, surrounding full-color portraits of a quality not available until the latter half of the 20th century.
== Design process ==
Once a general subject has been chosen, the postal administration typically contracts an outside artist to produce a design.
In working up a design, the artist must take into account the rules and constraints as mentioned above, and perhaps additional requirements, such as membership in a series of related designs.
In addition, the artist must consider the consequence of working on a small "canvas"; for instance, traditional paintings often reduce into an amorphous blur, and so the stamp designer will opt to pick a single interesting and/or characteristic detail as the center of the design. Similarly, a stamp consisting of simply a portrait will mean little to many users, and the artist may opt to include a visual element suggesting the person's accomplishments, such as an architect's most famous building, or simply add the word "architect" somewhere in the design.
The artist then submits one or more designs for the postal administration's approval. The accepted design may undergo several rounds of modification before entering the production process. The design may also be abandoned, perhaps if circumstances have changed, such as a change of government.
Designs may also be modified as a result of other considerations; for instance, the design of a US stamp honoring Jackson Pollock was based on a photograph showing him smoking a cigarette, but, not wishing to appear to promote smoking, the postal service removed the cigarette from the design. In general, stamps are not photographic reproductions of the subjects they depict.
== Design successes and failures ==
In the end, successful stamp designs receive relatively little notice from the general public, but considerable praise from the philatelic press. Publications such as Linn's Stamp News will headline the most interesting new stamps on their front page, and report the results of popularity polls.
On the other side, design errors regularly get through the multiple stages of review and checking. Errors have ranged from minute points of rendition (such as the subtly reversed ears on an Austrian stamp of the 1930s), to misrepresentations of disputed territory in maps, to mistaken text ("Sir Codrington" on 1920s Greece), to the truly spectacular, such as the US "Legends of the West" sheet using the picture of the wrong person. See stamp design error for further detail.
Another category of failure includes designs that are simply rejected by the stamp-buying public. The 1981 anti-alcoholism stamp of the US is a well-known example; it consists merely of the slogan "Alcoholism: You Can Beat It!", which must have looked good during the design process, but affixed to the corner of an envelope it suggests that the recipient is an alcoholic in need of public encouragement, and few people ever used this stamp on their mail.
== See also ==
Art Deco stamps
Commemorative stamp
Postage stamp
== References and sources ==
References
Sources
Williams, Louis N., & Williams, Maurice (1990). Fundamentals of Philately (revised ed.). American Philatelic Society. ISBN 0-933580-13-4.
Value-driven design (VDD) is a systems engineering strategy based on microeconomics which enables multidisciplinary design optimization. Value-driven design is being developed by the American Institute of Aeronautics and Astronautics, through a program committee of government, industry and academic representatives. In parallel, the U.S. Defense Advanced Research Projects Agency has promulgated an identical strategy, calling it value-centric design, on the F6 Program. At this point, the terms value-driven design and value-centric design are interchangeable. The essence of these strategies is that design choices are made to maximize system value rather than to meet performance requirements.
This is also similar to the value-driven approach of agile software development where a project's stakeholders prioritise their high-level needs (or system features) based on the perceived business value each would deliver.
Value-driven design is controversial because performance requirements are a central element of systems engineering. However, value-driven design supporters claim that it can improve the development of large aerospace systems by reducing or eliminating cost overruns which are a major problem, according to independent auditors.
== Concept ==
Value-driven design creates an environment that enables and encourages design optimization by providing designers with an objective function and eliminating those constraints which have been expressed as performance requirements. The objective function inputs all the important attributes of the system being designed, and outputs a score. The higher the score, the better the design. Describing an early version of what is now called value-driven design, George Hazelrigg said, "The purpose of this framework is to enable the assessment of a value for every design option so that options can be rationally compared and a choice taken." At the whole system level, the objective function which performs this assessment of value is called a "value model." The value model distinguishes value-driven design from Multi-Attribute Utility Theory applied to design. Whereas in Multi-Attribute Utility Theory, an objective function is constructed from stakeholder assessments, value-driven design employs economic analysis to build a value model. The basis for the value model is often an expression of profit for a business, but economic value models have also been developed for other organizations, such as government.
To design a system, engineers first take system attributes that would traditionally be assigned performance requirements, like the range and fuel consumption of an aircraft, and build a system value model that uses all these attributes as inputs. Next, the conceptual design is optimized to maximize the output of the value model. Then, when the system is decomposed into components, an objective function for each component is derived from the system value model through a sensitivity analysis.
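A minimal Python sketch of this workflow follows. The single design variable (wing area), the attribute relations, and the profit-style value model are all invented for illustration; none of the coefficients come from a real program:

```python
def attributes(wing_area):
    """Invented physics: a larger wing increases range (km) but also
    drag, and hence fuel burn (litres per km)."""
    rng = 2000.0 + 25.0 * wing_area
    fuel = 2.0 + 0.004 * wing_area ** 2
    return rng, fuel

def value(rng, fuel):
    """Invented profit-style value model in dollars: revenue grows with
    range, operating cost grows with fuel consumption."""
    return 900.0 * rng - 120_000.0 * fuel

def system_value(wing_area):
    return value(*attributes(wing_area))

# Optimize the free design variable against the value model
# (a simple grid search stands in for a real optimizer).
best_area = max(range(10, 200), key=system_value)

# Derive a component-level objective by sensitivity analysis:
# dollars of system value per unit change in fuel burn.
eps = 1e-6
r, f = attributes(best_area)
value_per_fuel = (value(r, f + eps) - value(r, f)) / eps
```

The sensitivity at the end (here about −$120,000 of system value per litre-per-km of fuel burn) is the kind of figure that would be handed down as an objective-function weight to the component team responsible for fuel consumption, in place of a fixed fuel-burn requirement.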
A workshop exercise implementing value-driven design for a GPS satellite was conducted in 2006, and may serve as an example of the process.
== History ==
The dichotomy between designing to performance requirements versus objective functions was raised by Herbert Simon in an essay called "The Science of Design" in 1969. Simon played both sides, saying that, ideally, engineered systems should be optimized according to an objective function, but realistically this is often too hard, so that attributes would need to be satisficed, which amounted to setting performance requirements. But he included optimization techniques in his recommended curriculum for engineers, and endorsed "utility theory and statistical decision theory as a logical framework for rational choice among given alternatives".
Utility theory was given most of its current mathematical formulation by von Neumann and Morgenstern, but it was the economist Kenneth Arrow who proved the Expected Utility Theorem most broadly, which says in essence that, given a choice among a set of alternatives, one should choose the alternative that provides the greatest probabilistic expectation of utility, where utility is value adjusted for risk aversion.
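The theorem can be illustrated with a small example. Assuming an exponential utility function (one common way to encode risk aversion; the payoffs and risk tolerance below are invented), two alternatives with the same expected payoff are ranked differently once utility is taken into account:

```python
import math

def utility(payoff, risk_tolerance=100.0):
    """Exponential utility: concave, so uncertain payoffs are penalized."""
    return 1.0 - math.exp(-payoff / risk_tolerance)

def expected_utility(lottery, risk_tolerance=100.0):
    """`lottery` is a list of (probability, payoff) pairs."""
    return sum(p * utility(x, risk_tolerance) for p, x in lottery)

# Two hypothetical alternatives with the same expected payoff of 50:
safe = [(1.0, 50.0)]
risky = [(0.5, 0.0), (0.5, 100.0)]

# A risk-averse decision maker ranks the certain alternative higher.
choice = max([("safe", safe), ("risky", risky)],
             key=lambda alt: expected_utility(alt[1]))[0]
```

Because the utility curve is concave, the expected utility of the gamble falls below the utility of its expected payoff, which is exactly the adjustment for risk aversion the theorem describes.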
Ralph Keeney and Howard Raiffa extended utility theory in support of decision making, and Keeney developed the idea of a value model to encapsulate the calculation of utility. Keeney and Raiffa also used "attributes" to describe the inputs to an evaluation process or value model.
George Hazelrigg put engineering design, business plan analysis, and decision theory together for the first time in a framework in a paper written in 1995, which was published in 1998. Meanwhile, Paul Collopy independently developed a similar framework in 1997, and Harry Cook developed the S-Model for incorporating product price and demand into a profit-based objective function for design decisions.
The MIT Engineering Systems Division produced a series of papers from 2000 on, many co-authored by Daniel Hastings, in which many utility formulations were used to address various forms of uncertainty in making engineering design decisions. Saleh et al. is a good example of this work.
The term value-driven design was coined by James Sturges at Lockheed Martin while he was organizing a workshop that would become the Value-Driven Design Program Committee at the American Institute of Aeronautics and Astronautics (AIAA) in 2006. Meanwhile, value centric design was coined independently by Owen Brown and Paul Eremenko of DARPA in the Phase 1 Broad Agency Announcement for the DARPA F6 satellite design program in 2007.
Castagne et al. provides an example where value-driven design was used to design fuselage panels for a regional jet.
== Value-based acquisition ==
Implementation of value-driven design on large government systems, such as NASA or European Space Agency spacecraft or weapon systems, will require a government acquisition system that directs or incentivizes the contractor to employ a value model. Such a system is proposed in some detail in an essay by Michael Lippitz, Sean O'Keefe, and John White. They suggest that "A program office can offer a contract in which price is a function of value", where the function is derived from a value model. The price function is structured so that, in optimizing the product design in accordance with the value model, the contractor will maximize its own profit. They call this system Value Based Acquisition.
== See also ==
Systems engineering
Decision theory
Multidisciplinary optimization
== References == | Wikipedia/Value-driven_design |
Architectural lighting design is a field of work or study that is concerned with the design of lighting systems within the built environment, both interior and exterior. It can include manipulation and design of both daylight and electric light or both, to serve human needs.
Lighting design is based in both science and the visual arts. The basic aim of lighting within the built environment is to enable occupants to see clearly and without discomfort. The objective of architectural lighting design is to balance the art and the science of lighting to create mood, visual interest and enhance the experience of a space or place whilst still meeting the technical and safety requirements.
== Overview ==
The purpose of architectural lighting design is to balance the characteristics of light within a space to optimize the technical, the visual and, most recently, the non-visual components of ergonomics with respect to illumination of buildings or spaces.
The technical requirements include the amount of light needed to perform a task, the energy consumed by the lighting within the space and the relative distribution and direction of travel for the light so as not to cause unnecessary glare and discomfort. The visual aspects of the light are those that are concerned with the aesthetics and the narrative of the space (e.g. the mood of a restaurant, the experience of an exhibition within a museum, the promotion of goods within a retail space, the reinforcement of corporate brand) and the non-visual aspects are those concerned with human health and well-being.
As part of the lighting design process, both cultural and contextual factors also need to be considered. For example, bright lighting was a mark of wealth through much of Chinese history, but uncontrolled bright light is known to be detrimental to insects, birds, and the view of the stars.
== History ==
The history of electric light is well documented, and the profession of lighting design developed alongside advances in lighting technology. The development of high-efficiency, low-cost fluorescent lamps led to a reliance on electric light and a uniform blanket approach to lighting, but the energy crisis of the 1970s required more design consideration and reinvigorated the use of daylight.
The Illuminating Engineering Society of North America (IESNA) was formed in 1906 and the UK version was established in 1909 (now known as the Society of Light and Lighting and part of CIBSE). The International Commission on Illumination (CIE) was established in 1913 and has become a professional organization accepted as representing the best authority on the subject matter of light and lighting. The Institution of Lighting Professionals was established as the Association of Public Lighting Engineers in 1924. Around the world similar professional organizations evolved.
Initially, these industry organizations were primarily focused on the science and engineering of lighting rather than the aesthetic design, but in 1969 a group of designers established the International Association of Lighting Designers (IALD). Other associations purely for lighting design include the Professional Lighting Designers' Association (PLDA) established in 1994, the Association de Concepteurs Eclairage (ACE) in France established 1995, the Associazione Professionisti dell'Illuminazione (APIL) in Italy established in 1998, the Associação Brasileira de Arquitetos de Iluminação in Brazil in 1999 and the Professional Association of Lighting Designers in Spain (APDI) established in 2008.
== As a profession ==
Architectural lighting design is a stand-alone profession that sits alongside the professions of architecture, interior design, landscape architecture and electrical engineering.
One of the earliest proponents of architectural lighting design was Richard Kelly who established his practice in 1935. Kelly developed an approach to architectural lighting that is still used today, based on the perception of three visual elements as presented in a 1952 joint meeting of The American Institute of Architects, the American Society of Industrial Designers (now the Industrial Designers Society of America), and the Illuminating Engineering Society of North America, in Cleveland.
=== Education ===
While many architectural lighting designers have a background in electrical engineering, architectural engineering, architecture, or luminaire manufacturing, several universities and technical schools now offer degree programs specifically in architectural lighting design.
=== Process ===
The process of architectural lighting design generally follows the architect's plan of works in terms of key project stages: feasibility, concept, detail, construction documentation, site supervision and commissioning.
After the feasibility stage, where the parameters for the project are set, the concept stage is when the lighting design is developed in terms of lit effect, technical lighting targets and overall visual strategy usually using concept sketches, renderings, or mood boards.
== Day lighting ==
The source for daylight or natural lighting is the sun. Sunlight provides the highest quality of light, with a color rendering index (CRI) of 100. There are psychological and physical health benefits that come from using daylight in a space: for example, it can help to ease seasonal affective disorder (SAD), provide people with the necessary vitamin D, and assist in regulating circadian rhythms, the daily light and dark cycles. Using daylight as a light source can reduce the energy consumed by electric lighting, though daylighting can also cause deterioration of materials and finishes and increase the energy used for cooling a space. The architectural makeup of a space affects its daylighting: daylight can be admitted through windows, interior openings, skylights, and reflective surfaces.
== Electric lighting ==
Electric lighting or artificial lighting is a type of architectural lighting that uses electric light sources. The overall purpose of electric lighting is to allow the user of the space to see at various times of day, especially at night or in winter, when daylight is no longer a possible or sufficient source of light. Artificial lighting also helps to create or enhance the aesthetic of a space. Various techniques can be implemented with electric lighting, since users have more control over the light; these include dimming or increasing the brightness of a lamp, diffusing the light source, and using different lamp hues. The main sources used for electric lighting are incandescent lamps, solid-state lamps (LEDs, etc.), and gas-discharge lamps.
== Fixtures ==
Lighting fixtures come in a wide variety of styles for various functions. The most important functions are as a holder for the light source, to provide directed light and to avoid visual glare. Some are very plain and functional, while some are pieces of art in themselves. Nearly any material can be used, so long as it can tolerate the excess heat and is in keeping with safety codes.
An important property of light fixtures is luminous efficacy or wall-plug efficiency, the amount of usable light emanating from the fixture per unit of energy consumed, usually measured in lumens per watt. A fixture using replaceable light sources can also have its efficiency quoted as the percentage of light passed from the "bulb" to the surroundings. The more transparent the lighting fixture is, the higher the efficacy. Shading the light will normally decrease efficiency but increase the directionality and the visual comfort probability.
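The lumen-per-watt figure above is a simple ratio; a minimal sketch, with the lamp and shade values invented for the example:

```python
# Luminous efficacy: usable lumens out per watt of electrical power in.
# The lamp and shade figures below are illustrative assumptions.

def efficacy_lm_per_w(lumens_out, watts_in):
    return lumens_out / watts_in

bare_lamp = efficacy_lm_per_w(800, 10)      # an 800 lm, 10 W lamp
shaded = efficacy_lm_per_w(800 * 0.60, 10)  # a shade passing 60% of the light
print(bare_lamp, round(shaded))             # shading reduces fixture efficacy
```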
The PH-lamps are a series of light fixtures designed by Danish designer and writer Poul Henningsen from 1926 onwards. The lamp is designed with multiple concentric shades to eliminate visual glare, only emitting reflected light, obscuring the light source.
== Lighting design layers ==
Designers utilize the idea of lighting layers when creating a lighting plan for a space. Lighting layers include: task layer, focal layer, ambient layer, decorative layer, and daylight layer. Each layer contributes a function to the space and often they work together to create a well composed lighting design. The task layer is lighting that serves a purpose to perform a certain job or task. Typically, in this layer, there tends to be a need for more light. An example of this would be the use of under cabinet lighting in a kitchen. The focal layer is when lighting is used to highlight a certain feature in a room, such as a fireplace. This type of lighting draws the eye to that certain area. The ambient layer provides for background or general lighting. This layer has a strong influence on the brightness of a space. In the decorative layer, lighting is used as an ornament to the space and can help develop the style. The daylight layer uses natural light or the sun to light a space. Using the layering technique helps to develop the aesthetic and functionality of lighting.
== Photometric studies ==
Photometric studies are performed to simulate lighting designs for projects before they are built or renovated. This enables architects, lighting designers, and engineers to determine whether a proposed lighting layout will deliver the amount of light intended. They will also be able to determine the contrast ratio between light and dark areas. In many cases these studies are referenced against IESNA or CIBSE recommended lighting practices for the type of application. Depending on the type of area, different design aspects may be emphasized for safety or practicality (i.e. such as maintaining uniform light levels, avoiding glare or highlighting certain areas). A specialized lighting design application is often used to create these, which typically combine the use of two-dimensional digital CAD drawings and lighting simulation software.
Color temperature for white light sources also affects their use for certain applications. The color temperature of a white light source is the temperature in kelvin of a theoretical black body emitter that most closely matches the spectral characteristics of the lamp. Incandescent light bulbs have a color temperature around 2700 to 3000 kelvin; daylight is around 6500 kelvin. Lower color temperature lamps have relatively more energy in the yellow and red part of the visible spectrum, while high color temperatures correspond to lamps with more of a blue-white appearance. For critical inspection or color matching tasks, or for retail displays of food and clothing, the color temperature of the lamps will be selected for the best overall lighting effect. Color may also be used for functional reasons. For example, blue light makes it difficult to see veins and thus may be used to discourage drug use.
== Correlated color temperature ==
The correlated color temperature (CCT) of a light source is the temperature of an ideal black-body radiator that radiates light of comparable chromaticity to that of the light source. Color temperature is a characteristic of visible light that has important applications in lighting, photography, videography, publishing, manufacturing, astrophysics, horticulture, and other fields. In practice, color temperature is only meaningful for light sources that do in fact correspond somewhat closely to the radiation of some black body (i.e. those on a line from red-orange via yellow and more or less white to blueish white); it does not make sense to speak of the color temperature of, for example, a green or a purple light. Color temperature is conventionally stated in the SI unit of absolute temperature, the kelvin, having the unit symbol K.
For lighting building interiors, it is often important to take into account the color temperature of illumination. For example, a warmer (i.e. lower color temperature) light is often used in public areas to promote relaxation, while a cooler (higher color temperature) light is used to enhance concentration in offices.
CCT dimming for LED technology is regarded as a difficult task, since binning, age and temperature drift effects of LEDs change the actual color value output. Here feedback loop systems can be used for example with color sensors, to actively monitor and control the color output of multiple color mixing LEDs.
The color temperature of the electromagnetic radiation emitted from an ideal black body is defined as its surface temperature in kelvins, or alternatively in mireds (micro-reciprocal kelvins). This permits the definition of a standard by which light sources are compared.
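The kelvin/mired relationship is a simple reciprocal, mired = 1,000,000 / T(K). A small sketch, using the temperatures quoted earlier for incandescent lamps and daylight:

```python
# mired = micro-reciprocal kelvin = 1,000,000 / T(K), and vice versa.

def kelvin_to_mired(t_kelvin):
    return 1_000_000 / t_kelvin

def mired_to_kelvin(mired):
    return 1_000_000 / mired

print(round(kelvin_to_mired(2700)))  # incandescent: ~370 mired
print(round(kelvin_to_mired(6500)))  # daylight: ~154 mired
```

The mired scale is useful in practice because equal mired steps correspond roughly to equal perceived color shifts, which kelvin steps do not.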
== Methods ==
For simple installations, hand-calculations based on tabular data can be used to provide an acceptable lighting design. More critical or optimized designs now routinely use mathematical modeling on a computer.
Based on the positions and mounting heights of the fixtures, and their photometric characteristics, the proposed lighting layout can be checked for uniformity and quantity of illumination. For larger projects or those with irregular floor plans, lighting design software can be used. Each fixture has its location entered, and the reflectance of walls, ceiling, and floors can be entered. The computer program will then produce a set of contour charts overlaid on the project floor plan, showing the light level to be expected at the working height. More advanced programs can include the effect of light from windows or skylights, allowing further optimization of the operating cost of the lighting installation. The amount of daylight received in an internal space can typically be analyzed by undertaking a daylight factor calculation.
The IES zonal cavity method (also known as the lumen method) is used as a basis for hand, tabulated, and computer calculations. This method uses the reflectance coefficients of room surfaces to model the contribution to useful illumination at the working level of the room due to light reflected from the walls and the ceiling. Simplified photometric values are usually given by fixture manufacturers for use in this method.
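The final step of the lumen method reduces to one formula: average illuminance = total lamp lumens × coefficient of utilization × light loss factor ÷ floor area. A minimal sketch, with the coefficient of utilization simply assumed rather than interpolated from a manufacturer's table as it would be in practice:

```python
# Average maintained illuminance by the lumen (zonal cavity) method.
# The CU and LLF below are assumed example values; in practice the CU is
# read from the fixture manufacturer's tables using the room cavity ratio
# and surface reflectances.

def average_illuminance(n_fixtures, lumens_per_fixture, cu, llf, area_m2):
    """Maintained average illuminance in lux (lumens per square metre)."""
    return n_fixtures * lumens_per_fixture * cu * llf / area_m2

# Example office: 12 fixtures of 4000 lm each, CU 0.6, LLF 0.8, 60 m^2 floor.
E = average_illuminance(12, 4000, 0.6, 0.8, 60.0)
print(round(E))  # 384 lux
```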
Computer modeling of outdoor flood lighting usually proceeds directly from photometric data. The total lighting power of a lamp is divided into small solid angular regions. Each region is extended to the surface which is to be lit and the area calculated, giving the light power per unit of area. Where multiple lamps are used to illuminate the same area, each one's contribution is summed. Again the tabulated light levels (in lux or foot-candles) can be presented as contour lines of constant lighting value, overlaid on the project plan drawing. Hand calculations might only be required at a few points, but computer calculations allow a better estimate of the uniformity and lighting level.
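The point-by-point summation described above can be sketched as follows: each lamp's contribution to horizontal illuminance at a ground point follows the inverse-square cosine law, E = I(θ)·cos θ / d², and the contributions are summed. A real program would interpolate I(θ) from each lamp's photometric file; here a simple cosine-shaped intensity distribution is assumed for illustration.

```python
import math

def illuminance_at(point, lamps):
    """Horizontal illuminance (lux) at a ground point from pole-top lamps.

    Each lamp is (x, y, mounting_height_m, peak_candela); an idealized
    cosine intensity distribution I(theta) = peak * cos(theta) is assumed.
    """
    total = 0.0
    for lx, ly, height, peak_candela in lamps:
        dx, dy = point[0] - lx, point[1] - ly
        d = math.sqrt(dx * dx + dy * dy + height * height)
        cos_theta = height / d                 # angle from straight down
        intensity = peak_candela * cos_theta   # assumed distribution, candela
        total += intensity * cos_theta / (d * d)
    return total

# Two 8 m poles 20 m apart; illuminance at the ground midway between them:
lamps = [(0.0, 0.0, 8.0, 10000.0), (20.0, 0.0, 8.0, 10000.0)]
print(round(illuminance_at((10.0, 0.0), lamps), 1))
```

Evaluating this on a grid of ground points and contouring the results reproduces, in miniature, the uniformity plots the text describes.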
== Design-media terminology ==
Adjustable accent fixture
Used to point at certain design elements
Bollard
A type of architectural outdoor lighting that is a short, upright ground-mounted unit typically used to provide cutoff type illumination for egress lighting, to light walkways, steps, or other pathways
"Cans" with a variety of lamps
Jargon for inexpensive downlighting products that are recessed into the ceiling, or sometimes for uplights placed on the floor. The name comes from the shape of the housing. The term "pot lights" is often used in Canada and parts of the US.
Chandelier
A branched ornamental light fixture designed to be mounted on ceilings or walls
Cove light
Recessed into the ceiling in a long box against a wall
Emergency lighting or exit sign
Connected to a battery backup or to an electric circuit that has emergency power if the mains power fails
Flood lighting
Usually pole- or stanchion-mounted; for landscape, roadways, and parking lots
High- and low-bay lighting
Typically used for general lighting for industrial buildings and often big-box stores
Lamp
Lightbulb, comes in various shapes and sizes
Luminaire
Holds and supports lamp, provides electrification
Outdoor lighting and landscape lighting
Used to illuminate walkways, parking lots, roadways, building exteriors and architectural details, gardens, and parks
Pendant light
Suspended from the ceiling with a chain or pipe
Recessed light
The protective housing is concealed behind a ceiling or wall, leaving only the fixture itself exposed. The ceiling-mounted version is often called a downlight
Sconce
A decorative light fixture that is mounted to a wall
Street light
A type of outdoor pole-mounted light used to light streets and roadways; similar to pole-mounted flood lights but with a type II lens (side to side light distribution pattern) instead of type III
Strip lights or industrial lighting
Often long lines of fluorescent lamps used in a warehouse or factory
Surface-mounted light
The finished housing is exposed, not flush-mounted with the surface.
Track lighting fixture
Individual fixtures (called track heads) can be positioned anywhere along the track, which provides electric power
Under-cabinet light
Mounted below kitchen wall cabinets
Troffer
Recessed fluorescent light fixtures, usually rectangular in shape to fit into a drop ceiling grid
Wall grazing fixture
Light is closely placed to wall, typically to enhance a textured surface.
Wallwasher
An asymmetric light fixture that lights from ceiling to floor and flatly illuminates the wall
== Lamp types ==
Different types of electric lighting have vastly differing efficacy and color temperature. Here the color temperature of a lamp is defined as the temperature of a black body emitting a similar spectrum, although lamp spectra are quite different from those of true black bodies.
The most efficient source of electric light is the low-pressure sodium lamp. It produces, for all practical purposes, a monochromatic yellow light, which gives a similarly monochromatic perception of any illuminated scene. For this reason, it is generally reserved for outdoor public lighting. Low-pressure sodium lights are favored for public lighting by astronomers, since the light pollution that they generate can be easily filtered, unlike that from broadband or continuous spectra.
=== Incandescent light bulb ===
The modern incandescent light bulb, with a coiled filament of tungsten, was commercialized in the 1920s, having developed from the carbon filament lamp introduced in about 1880. As well as bulbs for normal illumination, there is a very wide range of types, including low-voltage, low-power lamps often used as components in equipment, but now largely displaced by LEDs.
=== Fluorescent lamp ===
Fluorescent lamps consist of a glass tube that contains low-pressure mercury vapor and an inert gas such as argon. Electricity flowing through the tube causes the gases to give off ultraviolet energy. The inside of the tube is coated with phosphors that give off visible light when struck by ultraviolet energy.
=== LED lamp ===
Light-emitting diodes (LEDs) became widespread as indicator lights in the 1970s. With the invention of high-output LEDs by Shuji Nakamura, LEDs are now in use as solid-state lighting for general lighting applications.
Initially, due to relatively high cost per lumen, LED lighting was most used for lamp assemblies of under 10 W such as flashlights. Development of higher-output lamps was motivated by programs such as the U.S. L Prize.
== See also ==
Architectural glass – Building material
Architectural light shelf – Architectural lighting feature
Architecture of the night – Architecture integrating and emphasizing electric light effects at the design stage
Daylight harvesting – Automatically variable electric lighting
Deck prism – Way of transmitting light from the sun to the inside of a boat
Light art – Visual art using light as a medium
Light + Building – Biennial architecture and technology trade fair
Light tube – Architectural element
Lighting control system – Intelligent network based lighting control
Lighting for the elderly – Age-appropriate lighting
Lumen method – Simplified light level calculation
Over illumination – Excess artificial light in an environment
Passive solar building design – Architectural engineering that uses the Sun's heat without electric or mechanical systems
Professional Lighting and Sound Association – Trade organization
Smart glass – Glass with electrically switchable opacity
Stage lighting – Craft of lighting at performances
Sun path – Arc-like path that the Sun appears to follow across the sky
Sustainable lighting
Transom (architectural) – Horizontal structural piece separating a door from a window above it
Vivid Sydney – Recurring festival in Sydney, Australia
== References ==
== External links ==
Lighting design glossary | Wikipedia/Architectural_lighting_design |
Algorithmic art or algorithm art is art, mostly visual art, in which the design is generated by an algorithm. Algorithmic artists are sometimes called algorists. Algorithmic art is created in the form of digital paintings and sculptures, interactive installations and music compositions.
Algorithmic art is not a new concept. Islamic art is a good example of the tradition of following a set of rules to create patterns. The even older practice of weaving includes elements of algorithmic art.
As computers developed, so did the art created with them. Algorithmic art encourages experimentation, allowing artists to push their creativity in the digital age, and allows creators to devise intricate patterns and designs that would be nearly impossible to achieve by hand. Creators have a say in what the input criteria are, but not in the outcome.
== Overview ==
Algorithmic art, also known as computer-generated art, is a subset of generative art (generated by an autonomous system) and is related to systems art (influenced by systems theory). Fractal art, typically abstract in appearance, is an example of algorithmic art.
For an image of reasonable size, even the simplest algorithms require too much calculation for manual execution to be practical, and they are thus executed on either a single computer or on a cluster of computers. The final output is typically displayed on a computer monitor, printed with a raster-type printer, or drawn using a plotter. Variability can be introduced by using pseudo-random numbers. There is no consensus as to whether the product of an algorithm that operates on an existing image (or on any input other than pseudo-random numbers) can still be considered computer-generated art, as opposed to computer-assisted art.
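The role of pseudo-random numbers can be made concrete: the algorithm itself is deterministic, so re-running it with the same seed reproduces the identical image, while a different seed yields a variant. A minimal sketch, in which the "composition" is simply a seeded scatter of points:

```python
import random

def scatter(seed, n=5):
    """Deterministic 'composition': n pseudo-random points from a fixed seed."""
    rng = random.Random(seed)
    return [(round(rng.random(), 3), round(rng.random(), 3)) for _ in range(n)]

assert scatter(42) == scatter(42)  # same seed: the identical artwork
assert scatter(42) != scatter(43)  # new seed: a variant of the piece
```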
== History ==
Roman Verostko argues that Islamic geometric patterns are constructed using algorithms, as are Italian Renaissance paintings which make use of mathematical techniques, in particular linear perspective and proportion.
Some of the earliest known examples of computer-generated algorithmic art were created by Georg Nees, Frieder Nake, A. Michael Noll, Manfred Mohr and Vera Molnár in the early 1960s. These artworks were executed by a plotter controlled by a computer, and were therefore computer-generated art but not digital art. The act of creation lay in writing the program, which specified the sequence of actions to be performed by the plotter. Sonia Landy Sheridan established Generative Systems as a program at the School of the Art Institute of Chicago in 1970 in response to social change brought about in part by the computer-robot communications revolution. Her early work with copier and telematic art focused on the differences between the human hand and the algorithm.
Aside from the ongoing work of Roman Verostko and his fellow algorists, the next known examples are fractal artworks created in the mid to late 1980s. These are important here because they use a different means of execution. Whereas the earliest algorithmic art was "drawn" by a plotter, fractal art simply creates an image in computer memory; it is therefore digital art. The native form of a fractal artwork is an image stored on a computer – this is also true of very nearly all equation art and of most recent algorithmic art in general. However, in a stricter sense "fractal art" is not considered algorithmic art, because the algorithm is not devised by the artist.
In light of such ongoing developments, pioneer algorithmic artist Ernest Edmonds has documented the continuing prophetic role of art in human affairs by tracing the early 1960s association between art and the computer up to a present time in which the algorithm is now widely recognized as a key concept for society as a whole.
=== Rational approaches to art ===
While art has strong emotional and psychological ties, it also depends heavily on rational approaches. Artists have to learn how to use various tools, theories and techniques to be able to create impressive artwork. Thus, throughout history, many art techniques were introduced to create various visual effects. For example, Georges-Pierre Seurat invented pointillism, a painting technique that involves placing dots of complementary colors adjacent to each other. Cubism and color theory also helped revolutionize visual arts: Cubism involved taking multiple reference points for an object and combining them into a two-dimensional rendering, while color theory, stating that colors can be expressed as combinations of a small set of primaries, helped facilitate the use of color in visual arts and the creation of distinct colorful effects. In other words, humans have always found algorithmic ways and discovered patterns to create art. Such tools allowed humans to create more visually appealing artworks efficiently, and in such ways art adapted to become more methodological.
=== Creating perspective through algorithms ===
Another important aspect that allowed art to evolve into its current form is perspective. Perspective allows the artist to create a two-dimensional projection of a three-dimensional object. Muslim artists during the Islamic Golden Age employed linear perspective in most of their designs. The notion of perspective was rediscovered by Italian artists during the Renaissance. The golden ratio, a famous mathematical ratio, was utilized by many Renaissance artists in their drawings; most famously, Leonardo da Vinci employed it in the Mona Lisa and many other paintings, such as Salvator Mundi. This is a form of using algorithms in art. By examining the works of artists of the past, from the Renaissance and the Islamic Golden Age, a consistent use of mathematical ratios, geometric principles and natural numbers emerges.
== Role of the algorithm ==
From one point of view, for a work of art to be considered algorithmic art, its creation must include a process based on an algorithm devised by the artist. An artist may also select parameters and interact as the composition is generated. Here, an algorithm is simply a detailed recipe for the design and possibly execution of an artwork, which may include computer code, functions, expressions, or other input which ultimately determines the form the art will take. This input may be mathematical, computational, or generative in nature. Inasmuch as algorithms tend to be deterministic, meaning that their repeated execution would always result in the production of identical artworks, some external factor is usually introduced. This can either be a random number generator of some sort, or an external body of data (which can range from recorded heartbeats to frames of a movie). Some artists also work with organically based gestural input which is then modified by an algorithm. By this definition, fractals made by a fractal program are not art, as humans are not involved. However, defined differently, algorithmic art can be seen to include fractal art, as well as other varieties such as those using genetic algorithms. The artist Kerry Mitchell stated in his 1999 Fractal Art Manifesto:
Fractal Art is not ... Computer(ized) Art, in the sense that the computer does all the work. The work is executed on a computer, but only at the direction of the artist. Turn a computer on and leave it alone for an hour. When you come back, no art will have been generated.
== Algorists ==
"Algorist" is a term used for digital artists who create algorithmic art. Pioneering algorists include Vera Molnár, Dóra Maurer and Gizella Rákóczy.
Algorists formally began correspondence and establishing their identity as artists following a panel titled "Art and Algorithms" at SIGGRAPH in 1995. The co-founders were Jean-Pierre Hébert and Roman Verostko. Hébert is credited with coining the term and its definition, which is in the form of his own algorithm:
if (creation && object of art && algorithm && one's own algorithm) {
return * an algorist *
} else {
return * not an algorist *
}
=== Types ===
Artists can write code that creates complex and dynamic visual compositions.
Cellular automata can be used to generate artistic patterns with an appearance of randomness, or to modify images such as photographs by applying a transformation such as the stepping stone rule (to give an impressionist style) repeatedly until the desired artistic effect is achieved. Their use has also been explored in music.
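As a sketch of the idea, a one-dimensional cellular automaton such as Wolfram's rule 30 grows an intricate, random-looking triangle from a single seed cell; the ASCII rendering below stands in for an image:

```python
def rule30_rows(width, steps):
    """Run Wolfram's rule 30 from a single centre cell, wrapping at the edges."""
    row = [0] * width
    row[width // 2] = 1
    rows = [row]
    for _ in range(steps):
        prev = rows[-1]
        nxt = []
        for i in range(width):
            left, centre, right = prev[i - 1], prev[i], prev[(i + 1) % width]
            nxt.append(left ^ (centre | right))  # rule 30: L XOR (C OR R)
        rows.append(nxt)
    return rows

for r in rule30_rows(31, 8):
    print("".join("#" if cell else "." for cell in r))
```

Replacing the print loop with pixel output, or repeatedly applying a rule such as the stepping-stone transformation to a photograph, gives the image-modification use mentioned above.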
Fractal art consists of varieties of computer-generated fractals with colouring chosen to give an attractive effect. Especially in the western world, it is not drawn or painted by hand. It is usually created indirectly with the assistance of fractal-generating software, iterating through three phases: setting parameters of appropriate fractal software; executing the possibly lengthy calculation; and evaluating the product. In some cases, other graphics programs are used to further modify the images produced. This is called post-processing. Non-fractal imagery may also be integrated into the artwork.
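The three phases described above (set parameters, run the possibly lengthy calculation, evaluate the result) can be compressed into a small escape-time sketch of the Mandelbrot set, the archetypal fractal-art computation; ASCII shades stand in for a colour palette:

```python
def escape_time(c, max_iter=30):
    """Iterate z -> z*z + c; return how quickly the orbit escapes |z| > 2."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n           # escaped: point is outside the Mandelbrot set
    return max_iter            # never escaped within the iteration budget

# "Colouring" the escape counts with ASCII shades in place of a palette:
shades = " .:-=+*#%@"
for y in range(-10, 11):
    print("".join(shades[min(escape_time(complex(x / 15.0, y / 10.0)),
                             len(shades) - 1)]
                  for x in range(-30, 11)))
```

Fractal-generating software wraps exactly this loop in a parameter interface and a palette editor; post-processing then happens in ordinary graphics programs.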
Genetic or evolutionary art makes use of genetic algorithms to develop images iteratively, selecting at each "generation" according to a rule defined by the artist.
Algorithmic art is not only produced by computers. Wendy Chun explains:
Software is unique in its status as metaphor for metaphor itself. As a universal imitator/machine, it encapsulates a logic of general substitutability; a logic of ordering and creative, animating disordering. Joseph Weizenbaum has argued that computers have become metaphors for "effective procedures," that is, for anything that can be solved in a prescribed number of steps, such as gene expression and clerical work.
The American artist, Jack Ox, has used algorithms to produce paintings that are visualizations of music without using a computer. Two examples are visual performances of extant scores, such as Anton Bruckner's Eighth Symphony and Kurt Schwitters' Ursonate. Later, she and her collaborator, Dave Britton, created the 21st Century Virtual Color Organ that does use computer coding and algorithms.
Since 1996 there have been ambigram generators that auto-generate ambigrams.
== Contemporary views on algorithmic art ==
=== The necessity of algorithmic art ===
In modern times, humans have witnessed a drastic change in their lives. One such glaring difference is the need for a more comfortable and aesthetic environment. People have started to show particular interest in decorating their environment with paintings. While it is not uncommon to see renowned oil paintings in certain settings, it is still unusual to find such paintings in an ordinary family house. Oil paintings can be costly, even as copies, so many people prefer simulations of such paintings. With the emergence of artificial intelligence, such simulations have become possible: artificial-intelligence image processors use an algorithm and machine learning to produce the images for the user.
=== Studies on algorithmic and generative art ===
Recent studies and experiments have shown that artificial intelligence, using algorithms and machine learning, is able to replicate oil paintings. The images look relatively accurate and close to the original. Such improvements in algorithmic art and artificial intelligence could allow many people to own reproductions of renowned paintings at little to no cost. This could prove revolutionary for various environments, especially with the rapid rise in demand for improved aesthetics. Using the algorithm, the simulator can create images with an accuracy of 48.13% to 64.21%, differences that would be imperceptible to most humans. However, the simulations are not perfect and are bound to error: they can sometimes produce inaccurate or extraneous images, and at other times malfunction entirely and produce a physically impossible image. With the emergence of newer technologies and finer algorithms, researchers are confident that the simulations will improve massively. Other contemporary outlooks on art have focused heavily on making art more interactive: based on the environment or audience feedback, the algorithm is fine-tuned to create a more appropriate and appealing output. Such approaches have been criticized, however, since the artist is not responsible for every detail in the painting; rather, the artist facilitates the interaction between the algorithm and its environment and adjusts it toward the desired outcome.
== See also ==
Algorithmic composition
Computer-aided design
DeepDream
Demoscene
Display hack
Low-complexity art
Infinite compositions of analytic functions
== References ==
== Further reading ==
Oliver Grau (2003). Virtual Art: From Illusion to Immersion (MIT Press/Leonardo Book Series). Cambridge, Massachusetts: The MIT Press. ISBN 0-262-07241-6.
Wands, Bruce (2006). Art of the Digital Age, London: Thames & Hudson. ISBN 0-500-23817-0.
== External links ==
Algorithmic Art: Composing the Score for Visual Art - Roman Verostko
Compart - Database of Digital and Algorithmic Art
Fun with Computer-Generated Art
Thomas Dreher: Conceptual Art and Software Art: Notations, Algorithms and Codes
Real-Time Computer Generated Digital Painting | Wikipedia/Algorithmic_art |
Computer-aided architectural design (CAAD) software programs are the repository of accurate and comprehensive records of buildings; they are used by architects and architectural companies for architectural design and architectural engineering. As the latter often involves floor plan design, CAAD software greatly simplifies this task.
== History ==
The first attempts to computerize the architectural design date back to the 1960s:
CRAFT (1963) was one of the first systems to automate an architectural design task (optimizing the layout of a manufacturing plant);
Sketchpad (1963) was the precursor of the computer-aided design programs, it computerized the drawing and allowed setting of parametric relationships between objects via a graphical user interface;
DAC-1 (1963-1964) was a CAD-like program developed by General Motors.
The first attempts to separate CAAD from generic CAD were made in the 1970s. Practical commercial tools for architectural design and building information modeling appeared a decade later, in the 1980s. Due to the availability of these tools, computerized design became a distinct field within architecture. The intervening years were characterized by rapid growth in research: the Design Methods conference (1962) had put design research on the map, and the 1st International Congress on Performance (1972) discussed early approaches to computerizing building performance simulations.
New research journals had focused on the subject in the 1990s and 2000s: Automation in Construction (1992), International Journal of Architectural Computing (2003), Journal of Building Performance Simulation (2008). Architectural Design and Design Studies, established in 1979, gradually moved to CAAD.
Computer-aided design, also known as CAD, was originally the type of program that architects used, but since CAD could not offer all the tools that architects needed to complete a project, CAAD developed as a distinct class of software.
== Terminology ==
Use of terms in the field of computer design is not consistent. Caetano et al. analyzed the language of architectural research publications and noted the following trends:
Some authors use the term "computational design" (CD) as any activity involving the CAD tools, thus making it a synonym of digital design. Other researchers exclude the automation of drafting from the definition of CD.
The terms parametric design (PD), generative design (GD), and algorithmic design (AD) are very popular for the non-drafting uses of CAAD tools (three of the top four CAAD design-definition terms); they are frequently used together and confused with one another.
Performance-based design is the third most popular term, used independently of PD, GD, and AD.
== Overview ==
All CAD and CAAD systems employ a database with geometric and other properties of objects; they all have some kind of graphic user interface to manipulate a visual representation rather than the database; and they are all more or less concerned with assembling designs from standard and non-standard pieces. Currently, the main distinction which causes one to speak of CAAD rather than CAD lies in the domain knowledge (architecture-specific objects, techniques, data, and process support) embedded in the system. A CAAD system differs from other CAD systems in two respects:
It has an explicit object database of building parts and construction knowledge.
It explicitly supports the creation of architectural objects.
In a more general sense, CAAD also refers to the use of any computational technique in the field of architectural design other than by means of architecture-specific software. For example, software specifically developed for the computer-animation industry (e.g. Maya and 3D Studio Max) is also used in architectural design. These programs can produce photorealistic 3D renders and animations, and real-time rendering has become popular thanks to developments in graphics cards. The exact distinction of what properly belongs to CAAD is not always clear. Specialized software, for example for calculating structures by means of the finite element method, is used in architectural design and in that sense may fall under CAAD. On the other hand, such software is seldom used to create new designs.
By 1974, CAAD had become a current term and a common topic of commercial modernization.
== Three-dimensional objects ==
CAAD has two types of structures in its programs. The first is surface structure, which provides a graphics medium for representing three-dimensional objects by two-dimensional representations, along with algorithms that allow the generation of patterns and their analysis against programmed criteria, and data banks that store information about the problem at hand and the standards and regulations that apply to it. The second is deep structure, meaning that the operations performed by the computer have natural limitations. Computer hardware, and the machine languages it supports, make it easy to perform arithmetical operations quickly and accurately, and a very large number of layers of symbolic processing can be built on them, enabling the functionalities found at the surface.
== Advantages ==
One advantage of CAAD is the two-way mapping of activities and functionalities. The two instances of mapping are between the surface structures and the deep structures. These mappings are abstractions introduced in order to discuss the process of design and deployment of CAAD systems. In designing such systems, developers usually consider surface structures. The typical aim is a one-to-one mapping: developing a computer-based functionality that maps as closely as possible onto a corresponding manual design activity, for example drafting of stairs, checking spatial conflict between building systems, or generating perspectives from orthogonal views.
Architectural design processes tend to integrate models that were previously isolated. Many different kinds of expert knowledge, tools, visualization techniques, and media must be combined. The design process covers the complete life cycle of the building: construction, operations, reorganization, as well as destruction. Considering the shared use of digital design tools and the exchange of information and knowledge between designers and across different projects, we speak of a design continuum.
An architect's work involves mostly visually represented data. Problems are often outlined and dealt with in a graphical approach. Only this form of expression serves as a basis for work and discussion. Therefore, the designer should have maximum visual control over the processes taking place within the design continuum. Further questions arise about navigation, associative information access, programming and communication within very large data sets.
== See also ==
Architectural geometry
Architectural engineering
Artificial Architecture
Association for Computer Aided Architectural Design Research in Asia
Building information modeling
Comparison of CAD software
Computer-aided design
Geometric modeling kernel
Design computing
Digital morphogenesis
List of BIM software
== References ==
== Further reading ==
Caetano, Inês; Santos, Luís; Leitão, António (2020). "Computational design in architecture: Defining parametric, generative, and algorithmic design". Frontiers of Architectural Research. 9 (2): 287–300. doi:10.1016/j.foar.2019.12.008.
Kalay, Y. (2005). Architecture's New Media. MIT Press, Cambridge, Massachusetts.
Mark, E., Martens, B., & Oxman, R. (2003). Preliminary stages of CAAD education. Automation in Construction, 12(6), 661–670.
Maver, T. (1993). Computer aided architectural design futures [book review]. Information and Software Technology, 35, 700–701.
McGraw-Hill Inc. (1989, July 27). Can Architecture Be Computerized? Engineering News Record, Vol. 223, No. 4; p. 23.
Ryan, R.L.(1983). Computer Aided Architectural Graphics. Marcel Dekker, Inc.
Szalapaj, P. (2001). CAD Principles for Architectural Design. Architectural Press, Oxford.
== External links ==
Several organisations are active in education and research in CAAD:
Homepage ACADIA: Association for Computer Aided Design in Architecture.
Homepage ASCAAD: Arab Society for Computer Aided Architectural Design
Homepage CAAD Futures: Computer Aided Architectural Design futures foundation.
Homepage CAADRIA: Association for Computer Aided Architectural Design Research in Asia
Homepage eCAADe: Association for Education and Research in Computer Aided Architectural Design in Europe
Homepage SIGraDi: Sociedad Iberoamericana de Gráfica Digital.
Homepage CumInCAD Cumulative index of publications about computer aided architectural design. | Wikipedia/Computer-aided_architectural_design |
An error-tolerant design (or human-error-tolerant design) is one that does not unduly penalize user or human errors. It is the human equivalent of fault tolerant design that allows equipment to continue functioning in the presence of hardware faults, such as a "limp-in" mode for an automobile electronics unit that would be employed if something like the oxygen sensor failed.
== Use of behavior shaping constraints to prevent errors ==
Use of forcing functions or behavior-shaping constraints is one technique in error-tolerant design. An example is the interlock or lockout of reverse in the transmission of a moving car. This prevents errors, and prevention of errors is the most effective technique in error-tolerant design. The practice is known as poka-yoke in Japan where it was introduced by Shigeo Shingo as part of the Toyota Production System.
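The reverse-gear interlock can be sketched as a forcing function in code (an illustrative model, not a real automotive system): the unsafe transition is simply refused rather than warned about after the fact.

```python
# Illustrative sketch of a behavior-shaping constraint: the interface
# refuses to enter an unsafe state, preventing the error entirely.

class Transmission:
    def __init__(self):
        self.speed_kmh = 0.0
        self.gear = "park"

    def shift(self, gear):
        # Forcing function: reverse is locked out while the car is moving.
        if gear == "reverse" and self.speed_kmh > 0:
            raise ValueError("interlock: cannot select reverse while moving")
        self.gear = gear

car = Transmission()
car.shift("drive")
car.speed_kmh = 30.0
try:
    car.shift("reverse")        # the error is prevented, not merely flagged
except ValueError as e:
    print(e)
car.speed_kmh = 0.0
car.shift("reverse")            # now permitted
```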
== Mitigation of the effects of errors ==
The next most effective technique in error-tolerant design is the mitigation or limitation of the effects of errors after they have been made. An example is a checking or confirmation function, such as an "Are you sure?" dialog box with the harmless option preselected, in computer software for an action that could have severe consequences if made in error, such as deleting or overwriting files (although the consequence of inadvertent file deletion has been reduced since the DOS days by concepts like the trash can in Mac OS, since introduced in most GUI interfaces). Too strong a mitigating factor can, in some circumstances, become a hindrance: where confirmation becomes mechanical, it may be detrimental. For example, if a prompt is raised for every file in a batch delete, one may be tempted simply to agree to each prompt, even when a file is being deleted accidentally.
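A sketch of this pattern (the function names and prompt text are illustrative): the destructive choice requires explicit consent, and the preselected default is the harmless one, so a reflexive Enter keypress deletes nothing.

```python
# Illustrative sketch of a confirmation function with a harmless default:
# the capital N in "[y/N]" marks "no" as the preselected option.

def confirm_delete(filename, ask=input):
    """Return True only on an explicit yes; empty input takes the safe default."""
    reply = ask(f"Delete {filename}? [y/N] ").strip().lower()
    return reply in ("y", "yes")

# Simulated sessions (injecting the prompt function instead of real input):
assert confirm_delete("draft.txt", ask=lambda _: "") is False   # Enter keeps the file
assert confirm_delete("draft.txt", ask=lambda _: "y") is True   # explicit consent
```

Note the batch-delete caveat from the text: if this prompt fired once per file, users would learn to answer "y" mechanically, defeating the mitigation.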
Another example is Google's use of spell checking on searches performed through their search engine. The spell checking minimises the problems caused by incorrect spelling by not only highlighting the error to the user, but by also providing a link to search using the correct spelling instead. Searches like this are commonly performed using a combination of edit distance, soundex, and metaphone calculations.
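The edit-distance component of such a spell checker can be sketched as follows (an illustrative implementation; the soundex and metaphone parts are omitted): a standard Levenshtein dynamic-programming distance picks the dictionary word closest to the query.

```python
# Illustrative sketch of "did you mean" suggestion via Levenshtein distance.

def levenshtein(a, b):
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def suggest(query, dictionary):
    """Return the dictionary word closest to the query."""
    return min(dictionary, key=lambda w: levenshtein(query, w))

words = ["search", "engine", "spelling", "algorithm"]
print(suggest("serach", words))   # -> search
```

Production spell checkers combine this with phonetic codes (soundex, metaphone) and word-frequency data so that common words win ties.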
== See also ==
Human factors
Human reliability
Murphy's law
== References ==
To Err is Human, Chapter Five in Donald A. Norman (2002), The Design of Everyday Things.
== External links ==
Modeling Human Error for Experimentation, Training, and Error-tolerant Design
Making reliable distributed systems in the presence of software errors | Wikipedia/Error-tolerant_design |
Design optimization is an engineering design methodology using a mathematical formulation of a design problem to support selection of the optimal design among many alternatives. Design optimization involves the following stages:
Variables: Describe the design alternatives
Objective: A selected function of the variables that is to be maximized or minimized
Constraints: Functions of the variables, expressed as equalities or inequalities, that must be satisfied by any acceptable design alternative
Feasibility: Values of the variables that satisfy all constraints and minimize/maximize the objective
== Design optimization problem ==
The formal mathematical (standard form) statement of the design optimization problem is
{\displaystyle {\begin{aligned}&{\operatorname {minimize} }&&f(x)\\&\operatorname {subject\;to} &&h_{i}(x)=0,\quad i=1,\dots ,m_{1}\\&&&g_{j}(x)\leq 0,\quad j=1,\dots ,m_{2}\\&\operatorname {and} &&x\in X\subseteq R^{n}\end{aligned}}}
where
{\displaystyle x} is a vector of n real-valued design variables {\displaystyle x_{1},x_{2},...,x_{n}},
{\displaystyle f(x)} is the objective function,
{\displaystyle h_{i}(x)} are the {\displaystyle m_{1}} equality constraints,
{\displaystyle g_{j}(x)} are the {\displaystyle m_{2}} inequality constraints, and
{\displaystyle X} is a set constraint that includes additional restrictions on {\displaystyle x} besides those implied by the equality and inequality constraints.
The problem formulation stated above is a convention called the negative null form, since all constraint functions are expressed as equalities and negative inequalities with zero on the right-hand side. This convention is used so that numerical algorithms developed to solve design optimization problems can assume a standard expression of the mathematical problem.
We can introduce the vector-valued functions
{\displaystyle {\begin{aligned}&&&{h=(h_{1},h_{2},\dots ,h_{m1})}\\\operatorname {and} \\&&&{g=(g_{1},g_{2},\dots ,g_{m2})}\end{aligned}}}
to rewrite the above statement in the compact expression
{\displaystyle {\begin{aligned}&{\operatorname {minimize} }&&f(x)\\&\operatorname {subject\;to} &&h(x)=0,\quad g(x)\leq 0,\quad x\in X\subseteq R^{n}\\\end{aligned}}}
We call {\displaystyle h,g} the set or system of (functional) constraints and {\displaystyle X} the set constraint.
== Application ==
Design optimization applies the methods of mathematical optimization to design problem formulations; the term is sometimes used interchangeably with engineering optimization. When the objective function f is a vector rather than a scalar, the problem becomes a multi-objective optimization one. If the design optimization problem has more than one mathematical solution, the methods of global optimization are used to identify the global optimum.
Optimization Checklist
Problem Identification
Initial Problem Statement
Analysis Models
Optimal Design Model
Model Transformation
Local Iterative Techniques
Global Verification
Final Review
A detailed and rigorous description of the stages and practical applications with examples can be found in the book Principles of Optimal Design.
Practical design optimization problems are typically solved numerically, and many optimization software packages exist in academic and commercial forms. There are several domain-specific applications of design optimization posing their own specific challenges in formulating and solving the resulting problems; these include shape optimization, wing-shape optimization, topology optimization, architectural design optimization, and power optimization. Several books, articles and journal publications are listed below for reference.
One modern application of design optimization is structural design optimization (SDO) in the building and construction sector. SDO emphasizes automating and optimizing structural designs and dimensions to satisfy a variety of performance objectives: augmenting strength, minimizing material usage, reducing costs, enhancing energy efficiency, improving sustainability, and optimizing several other performance criteria. Concurrently, structural design automation endeavors to streamline the design process, mitigate human error, and enhance productivity through computer-based tools and optimization algorithms. Prominent practices and technologies in this domain include parametric design, generative design, building information modelling (BIM), machine learning (ML), and artificial intelligence (AI), as well as the integration of finite element analysis (FEA) with simulation tools.
== Journals ==
Journal of Engineering for Industry
Journal of Mechanical Design
Journal of Mechanisms, Transmissions, and Automation in Design
Design Science
Engineering Optimization
Journal of Engineering Design
Computer-Aided Design
Journal of Optimization Theory and Applications
Structural and Multidisciplinary Optimization
Journal of Product Innovation Management
International Journal of Research in Marketing
== See also ==
Design Decisions Wiki (DDWiki) : Established by the Design Decisions Laboratory at Carnegie Mellon University in 2006 as a central resource for sharing information and tools to analyze and support decision-making
== References ==
== Further reading ==
Aris, Rutherford ([2016], ©1961). The Optimal Design of Chemical Reactors: A Study in Dynamic Programming. Saint Louis: Academic Press/Elsevier Science. ISBN 9781483221434. OCLC 952932441.
Bracken, Jerome; McCormick, Garth P. (1968). Selected Applications of Nonlinear Programming. New York: Wiley. ISBN 0471094404. OCLC 174465.
Fox, Richard L. (1971). Optimization Methods for Engineering Design. Reading, Mass.: Addison-Wesley. ISBN 0201020785. OCLC 150744.
Johnson, Ray C. (1971). Mechanical Design Synthesis With Optimization Applications. New York: Van Nostrand Reinhold.
Zener, Clarence (1971). Engineering Design by Geometric Programming. New York: Wiley-Interscience. ISBN 0471982008. OCLC 197022.
Mickle, Marlin H.; Sze, T. W. (1972). Optimization in Systems Engineering. Scranton: Intext Educational Publishers. ISBN 0700224076. OCLC 340906.
Avriel, M.; Rijckaert, M. J.; Wilde, Douglass J., eds. (1973). Optimization and Design. Englewood Cliffs, N.J.: Prentice-Hall. ISBN 0136380158. OCLC 618414.
Wilde, Douglass J. (1978). Globally Optimal Design. New York: Wiley. ISBN 0471038989. OCLC 3707693.
Haug, Edward J.; Arora, Jasbir S. (1979). Applied Optimal Design: Mechanical and Structural Systems. New York: Wiley. ISBN 047104170X. OCLC 4775674.
Kirsch, Uri (1981). Optimum Structural Design: Concepts, Methods, and Applications. New York: McGraw-Hill. ISBN 0070348448. OCLC 6735289.
Kirsch, Uri (1993). Structural Optimization: Fundamentals and Applications. Berlin: Springer-Verlag. ISBN 3540559191. OCLC 27676129.
Lev, Ovadia E., ed. (1981). Structural Optimization: Recent Developments and Applications. New York: ASCE. ISBN 0872622819. OCLC 8182361.
Morris, A. J., ed. (1982). Foundations of Structural Optimization: A Unified Approach. Chichester: Wiley. ISBN 0471102008. OCLC 8031383.
Siddall, James N. (1982). Optimal Engineering Design: Principles and Applications. New York: M. Dekker. ISBN 0824716337. OCLC 8389250.
Ravindran, A.; Reklaitis, G. V.; Ragsdell, K. M. (2006). Engineering Optimization: Methods and Applications (2nd ed.). Hoboken, N.J.: John Wiley & Sons. ISBN 0471558141. OCLC 61463772.
Vanderplaats, Garret N. (1984). Numerical Optimization Techniques for Engineering Design: With Applications. New York: McGraw-Hill. ISBN 0070669643. OCLC 9785595.
Haftka, Raphael T.; Gürdal, Zafer; Kamat, Manohar P. (1990). Elements of Structural Optimization (2nd rev. ed.). Dordrecht: Springer Netherlands. ISBN 9789401578622. OCLC 851381183.
Arora, Jasbir S. (2011). Introduction to Optimum Design (3rd ed.). Boston: Academic Press. ISBN 9780123813756. OCLC 760173076.
Janna, William S. Design of Fluid Thermal Systems (SI ed.; 4th ed.). Stamford, Connecticut. ISBN 9781285859651. OCLC 881509017.
Kamat, Manohar P., ed. (1993). Structural Optimization: Status and Promise. Washington, DC: American Institute of Aeronautics and Astronautics. ISBN 156347056X. OCLC 27918651.
Avriel, M.; Golany, B. (1996). Mathematical Programming for Industrial Engineers. New York: Marcel Dekker. ISBN 0824796209. OCLC 34474279.
Eschenauer, Hans; Olhoff, Niels; Schnell, W. (1997). Applied Structural Mechanics: Fundamentals of Elasticity, Load-Bearing Structures, Structural Optimization: Including Exercises. Berlin: Springer. ISBN 3540612327. OCLC 35184040.
Belegundu, Ashok D.; Chandrupatla, Tirupathi R. (2011). Optimization Concepts and Applications in Engineering (2nd ed.). New York: Cambridge University Press. ISBN 9781139037808. OCLC 746750296.
Onwubiko, Chinyere Okechi (2000). Introduction to Engineering Design Optimization. Upper Saddle River, N.J.: Prentice-Hall. ISBN 0201476738. OCLC 41368373.
Dixon, L. C. W., ed. (1976). Optimization in Action: Proceedings of the Conference on Optimization in Action Held at the University of Bristol in January 1975. London: Academic Press. ISBN 0122185501. OCLC 2715969.
Williams, H. P. (2013). Model Building in Mathematical Programming (5th ed.). Chichester, West Sussex: Wiley. ISBN 9781118506189. OCLC 810039791.
McDowell, David L., ed. (2010). Integrated Design of Multiscale, Multifunctional Materials and Products. Oxford: Butterworth-Heinemann. ISBN 9781856176620. OCLC 610001448.
Dede, Ercan M.; Lee, Jaewook; Nomura, Tsuyoshi. Multiphysics Simulation: Electromechanical System Applications and Optimization. London. ISBN 9781447156406. OCLC 881071474.
Liu, G. P.; Yang, Jian-Bo; Whidborne, J. F. (2001). Multiobjective Optimisation and Control. Baldock, Hertfordshire: Research Studies Press. ISBN 0585491941. OCLC 54380075.
=== Structural Topology Optimization === | Wikipedia/Design_optimization |
Solid Modeling Solutions (SMS) was a software company that specialized in 3D computer graphics geometry software. SMS was acquired by Nvidia Corporation of Santa Clara, CA in May 2022 and was dissolved as a separate corporate entity.
== History ==
The development of non-uniform rational B-splines (NURBS) originated with seminal work at Boeing and at Structural Dynamics Research Corporation (SDRC), a company that led in mechanical computer-aided engineering (CAE) in the 1980s and 1990s. Boeing's involvement in NURBS dates back to 1979, when it began developing its own comprehensive computer-aided design and manufacturing (CAD/CAM) system, TIGER, to support the diverse needs of its aircraft and aerospace engineering groups. Three basic decisions were critical to establishing an environment conducive to developing NURBS. The first was Boeing's need to develop its own in-house geometry capability: Boeing had complex surface-geometry needs, especially for wing design, that were then not met by any commercially available CAD/CAM system. The TIGER Geometry Development Group was therefore established in 1979 and received strong support for many years. The second critical decision was removing the constraint of upward geometric compatibility with the two systems then in use at Boeing. One of these systems had evolved to handle the iterative process inherent in wing design, while the other was best suited to the constraints imposed by manufacturing, such as cylindrical and planar regions. The third crucial decision was simple but essential: adding the R to NURBS. Circles were to be represented precisely, with cubic approximation disallowed.
By late 1979, there were five or six well-educated mathematicians (PhDs from Stanford, Harvard, Washington, and Minnesota). Some had many years of software experience, but none had any industrial, much less CAD, geometry experience. Those were the days of an oversupply of math PhDs. The task was to choose the representations for the 11 required curve forms, which included everything from lines and circles to Bézier and B-spline curves.
By early 1980, the staff were busy choosing curve representations and developing the geometry algorithms for TIGER. One of the major tasks was curve/curve intersection. It became evident that if the general intersection problem could be solved for the Bézier/Bézier case, then it could be solved for any case. This is because everything from the lowest level could be represented in Bézier form. It was soon realized that the geometry development task would be substantially simplified if a way could be found to represent all of the curves using one form.
With this motive, the staff began work toward what became NURBS. The design of a wing demands free-form, C2-continuous, cubic splines to satisfy the needs of aerodynamic analysis, yet the circles and cylinders of manufacturing require at least rational Bézier curves. The properties of Bézier curves and uniform B-splines were well known, but the staff had to gain an understanding of non-uniform B-splines and rational Bézier curves and try to integrate the two. It was necessary to convert circles and other conics to rational Bézier curves for the curve/curve intersection. None of the staff realized the importance of the work then, and it was considered "too trivial" and "nothing new". The transition from uniform to non-uniform B-splines was rather straightforward, since the mathematical foundation had been available in the literature for many years; it simply had not yet become a part of standard CAD/CAM applied mathematics. Once there was a reasonably good understanding of rational Bézier and non-uniform splines, they still had to put them together. Up until then, the staff had not written or seen the form
{\displaystyle P(t)={\frac {\sum _{i}w_{i}P_{i}b_{i}(t)}{\sum _{i}w_{i}b_{i}(t)}}}
was used for anything more than a conic Bézier segment.
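The significance of "adding the R" can be shown in a few lines of code (an illustrative sketch, not from the source): with suitable weights, a rational quadratic Bézier segment traces an exact quarter of the unit circle, something no non-rational polynomial curve can do.

```python
# Illustrative sketch: the rational form P(t) = sum(w_i P_i b_i) / sum(w_i b_i)
# for a quadratic Bezier segment, representing a circular arc exactly.
import math

def rational_bezier2(t, points, weights):
    """Evaluate a rational quadratic Bezier curve at parameter t."""
    b = [(1 - t) ** 2, 2 * t * (1 - t), t ** 2]   # quadratic Bernstein basis
    denom = sum(w * bi for w, bi in zip(weights, b))
    x = sum(w * bi * p[0] for w, bi, p in zip(weights, b, points)) / denom
    y = sum(w * bi * p[1] for w, bi, p in zip(weights, b, points)) / denom
    return x, y

# Quarter circle from (1, 0) to (0, 1); the middle weight cos(45 degrees)
# is what makes the conic exact rather than a cubic approximation.
pts = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
wts = [1.0, math.sqrt(2) / 2, 1.0]
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    x, y = rational_bezier2(t, pts, wts)
    print(round(math.hypot(x, y), 12))   # distance from origin: always 1
```

With all weights equal to 1 the same formula collapses to an ordinary (polynomial) Bézier curve, which can only approximate the arc.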
Searching for a single form, the group worked together, learning about knots and multiple knots, and how nicely Bézier segments, especially the conics, could be embedded into a B-spline curve with multiple knots. By the end of 1980, the staff knew they had a way to represent all the required curve forms using a single representation, now known as the NURBS form. But this new representation could easily have died at this point. The staff were already 12 to 18 months down a development path and had completed a large number of algorithms using the old curve forms. They now had to convince managers and the other technical groups, such as the database and graphics groups, that they should be allowed to start over using a single representation for all curves. The NURBS surface form did not present a problem, since they had not yet developed any surface algorithms. A review of this new TIGER curve form was held on February 13, 1981. The review was successful, and the staff were allowed to start over using the new curve form. It was at this time that the NURBS acronym was first used by the other side of the TIGER project, i.e., the TIGER software development groups of Boeing Computer Services. Management was very eager to promote the use of these new curve and surface forms. They had a limited understanding of the mathematics, but they were very aware of the need to communicate geometric data between systems. Hence, Boeing very quickly prepared to propose NURBS at the August 1981 IGES meetings.
There are two reasons why IGES so quickly accepted NURBS. The first was that IGES was in great need of a way to represent objects. Until then, there were, for example, only two surface definitions in IGES, and the B-spline form was restricted to cubic splines. The other surprisingly important reason for the rapid acceptance was that Boeing, not a CAD system supplier, was not a threat to any of the major turnkey system vendors. Evidently, IGES easily bogs down when different vendors support their own slightly different representations for the same objects. At this first IGES meeting, it was discovered that the SDRC representatives understood the presentation best. SDRC was also active in defining a single representation for standard CAD curves and was working on a similar definition.
Boehm's B-spline refinement paper from CAD '80 was of primary importance. It enabled the staff to understand non-uniform splines and to appreciate the geometrical nature of the definition so as to use B-splines in solving engineering problems. The first use of the geometrical nature of B-splines was in curve/curve intersection, which utilized the Bézier subdivision process; the second was a curve offset algorithm based on a polygon offset process, which was eventually communicated to and used by SDRC and explained by Tiller and Hanson in their offset paper of 1984. The staff also developed an internal NURBS class taught to about 75 Boeing engineers. The class covered Bézier curves, Bézier-to-B-spline conversion, and surfaces. The first public presentation of the staff's NURBS work was at a Seattle CASA/SME seminar in March 1982. The staff had progressed quite far by then: they could take a rather simple NURBS surface definition of an aircraft and slice it with a plane surface to generate an interesting outline of some of the wings, body, and engines. The staff were allowed great freedom in pursuing their ideas, and Boeing correctly promoted NURBS, but the task of developing that technology into a usable form was too much for Boeing, which abandoned the TIGER task late in '84.
By late 1980, the TIGER Geometry Development Group consisted of Robert Blomgren, Richard Fuhr, George Graf, Peter Kochevar, Eugene Lee, Miriam Lucian, and Richard Rice. Robert Blomgren was "lead engineer".
In 1984, Robert M. Blomgren established Applied Geometry to commercialize the technology. Subsequently, Alias Systems Corporation/Silicon Graphics purchased Applied Geometry. Robert Blomgren and Jim Presti formed Solid Modeling Solutions (SMS) in early 1998. In late 2001, NLib was purchased from GeomWare, and the alliance with IntegrityWare was terminated in 2004. Enhancements and major new features are added twice a year.
SMS software is based on years of research and application of NURBS technology. Les Piegl and Wayne Tiller (a partner of Solid Modeling Solutions) wrote the definitive "The NURBS Book" on non-uniform rational B-splines, with aids to designing geometry for computer-aided environment applications. The fundamental mathematics is well defined in this book, and the most faithful manifestation in software is implemented in the SMS product line.
== Philosophy ==
SMS provides source code to customers to enhance and enable their understanding of the underlying technology, provide opportunities for collaboration, improve time to repair, and protect their investment. Product delivery, maintenance, and communication are provided by web-based mechanisms. SMS has established a unique model of technical organization and an adaptive open-source approach. The subscription-based pricing philosophy provides a stable base of technical expertise, and it is cost-effective for its customers when viewed from the perspective of the total cost of ownership of complex software.
== SMS architecture ==
SMLib – fully functional non-manifold topological structure and solid modeling functions.
TSNLib – analyze NURBS based trimmed surface representations.
GSNLib – based on NLib with curve/curve and surface/surface intersection abilities.
NLib – an advanced geometric modeling kernel based on NURBS curves and surfaces.
VSLib – deformable modeling using the constrained optimization techniques of the calculus of variations.
PolyMLib – an object-oriented software toolkit library that provides a set of objects and corresponding methods to repair, optimize, review, and edit triangle mesh models.
data translators – NURBS-based geometry translator libraries, with interfaces for the SMLib, TSNLib, GSNLib, NLib, and SDLib family of products, including IGES, STEP, VDAFS, SAT, and OpenNURBS abilities.
== See also ==
Non-uniform rational B-spline (NURBS)
Solid modeling
Comparison of computer-aided design software
== References ==
A graphical user interface, or GUI, is a form of user interface that allows users to interact with electronic devices through graphical icons and visual indicators such as secondary notation. In many applications, GUIs are used instead of text-based UIs, which are based on typed command labels or text navigation. GUIs were introduced in reaction to the perceived steep learning curve of command-line interfaces (CLIs), which require commands to be typed on a computer keyboard.
The actions in a GUI are usually performed through direct manipulation of the graphical elements. Beyond computers, GUIs are used in many handheld mobile devices such as MP3 players, portable media players, gaming devices, and smartphones, as well as smaller household, office, and industrial controls. The term GUI tends not to be applied to lower-resolution types of interfaces, such as video games (where head-up displays (HUDs) are preferred), or to interfaces that do not use flat screens, such as volumetric displays, because the term is restricted to the scope of 2D display screens able to describe generic information, in the tradition of the computer science research at the Xerox Palo Alto Research Center.
== GUI and interaction design ==
Designing the visual composition and temporal behavior of a GUI is an important part of software application programming in the area of human–computer interaction. Its goal is to enhance the efficiency and ease of use for the underlying logical design of a stored program, a design discipline named usability. Methods of user-centered design are used to ensure that the visual language introduced in the design is well-tailored to the tasks.
The visible graphical interface features of an application are sometimes referred to as chrome or GUI. Typically, users interact with information by manipulating visual widgets that allow for interactions appropriate to the kind of data they hold. The widgets of a well-designed interface are selected to support the actions necessary to achieve the goals of users. A model–view–controller allows flexible structures in which the interface is independent of and indirectly linked to application functions, so the GUI can be customized easily. This allows users to select or design a different skin or theme at will, and eases the designer's work to change the interface as user needs evolve. Good GUI design relates to users more, and to system architecture less.
Large widgets, such as windows, usually provide a frame or container for the main presentation content such as a web page, email message, or drawing. Smaller ones usually act as a user-input tool.
A GUI may be designed for the requirements of a vertical market as application-specific GUIs. Examples include automated teller machines (ATM), point of sale (POS) touchscreens at restaurants, self-service checkouts used in a retail store, airline self-ticket and check-in, information kiosks in a public space, like a train station or a museum, and monitors or control screens in an embedded industrial application which employ a real-time operating system (RTOS).
Cell phones and handheld game systems also employ application specific touchscreen GUIs. Newer automobiles use GUIs in their navigation systems and multimedia centers, or navigation multimedia center combinations.
== Examples ==
Sample graphical environments
== Components ==
A GUI uses a combination of technologies and devices to provide a platform that users can interact with, for the tasks of gathering and producing information.
A series of elements conforming to a visual language has evolved to represent information stored in computers. This makes it easier for people with few computer skills to work with and use computer software. The most common combination of such elements in GUIs is the windows, icons, text fields, canvases, menus, pointer (WIMP) paradigm, especially in personal computers.
The WIMP style of interaction uses a virtual input device to represent the position of a pointing device's interface, most often a mouse, and presents information organized in windows and represented with icons. Available commands are compiled together in menus, and actions are performed making gestures with the pointing device. A window manager facilitates the interactions between windows, applications, and the windowing system. The windowing system handles hardware devices such as pointing devices, graphics hardware, and positioning of the pointer.
In personal computers, all these elements are modeled through a desktop metaphor to produce a simulation called a desktop environment in which the display represents a desktop, on which documents and folders of documents can be placed. Window managers and other software combine to simulate the desktop environment with varying degrees of realism.
Entries may appear in a list to make space for text and details, or in a grid for compactness and larger icons with little space underneath for text. Variations in between exist, such as a list with multiple columns of items and a grid of items with rows of text extending sideways from the icon.
Multi-row and multi-column layouts commonly found on the web are "shelf" and "waterfall". The former is found on image search engines, where images appear with a fixed height but variable length, and is typically implemented with the CSS declaration display: inline-block;. A waterfall layout, found on Imgur and TweetDeck, with fixed width but variable height per item, is usually implemented by specifying the CSS property column-width.
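As a rough sketch (the class names here are illustrative, not taken from any particular site), the two layouts described above can be expressed in CSS along these lines:

```css
/* "Shelf" layout: fixed-height rows of variable-width items,
   as on image search results pages. */
.shelf-item {
  display: inline-block;
  height: 180px;       /* fixed height; width varies per image */
}

/* "Waterfall" (masonry-like) layout: fixed-width columns of
   variable-height items, as on Imgur or TweetDeck. */
.waterfall-container {
  column-width: 240px; /* the browser creates as many columns as fit */
  column-gap: 12px;
}
```

The shelf relies on inline-block flow wrapping items into rows, while the waterfall relies on CSS multi-column layout filling fixed-width columns top to bottom.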
== Post-WIMP interface ==
Smaller mobile devices such as personal digital assistants (PDAs) and smartphones typically use the WIMP elements with different unifying metaphors, due to constraints in space and available input devices. Applications for which WIMP is not well suited may use newer interaction techniques, collectively termed post-WIMP UIs.
As of 2011, some touchscreen-based operating systems such as Apple's iOS (iPhone) and Android use the class of GUIs named post-WIMP. These support styles of interaction using more than one finger in contact with a display, which allows actions such as pinching and rotating, which are unsupported by one pointer and mouse.
== Interaction ==
Human interface devices for efficient interaction with a GUI include a computer keyboard (especially used together with keyboard shortcuts), pointing devices for cursor (or rather pointer) control (mouse, pointing stick, touchpad, trackball, joystick), virtual keyboards, and head-up displays (translucent information devices at eye level).
There are also actions performed by programs that affect the GUI. For example, there are components like inotify or D-Bus to facilitate communication between computer programs.
== History ==
=== Early efforts ===
Ivan Sutherland developed Sketchpad in 1963, widely held as the first graphical computer-aided design program. It used a light pen to create and manipulate objects in engineering drawings in real time with coordinated graphics. In the late 1960s, researchers at the Stanford Research Institute, led by Douglas Engelbart, developed the On-Line System (NLS), which used text-based hyperlinks manipulated with a then-new device: the mouse. (A 1968 demonstration of NLS became known as "The Mother of All Demos".) In the 1970s, Engelbart's ideas were further refined and extended to graphics by researchers at Xerox PARC and specifically Alan Kay, who went beyond text-based hyperlinks and used a GUI as the main interface for the Smalltalk programming language, which ran on the Xerox Alto computer, released in 1973. Most modern general-purpose GUIs are derived from this system.
The Xerox PARC GUI consisted of graphical elements such as windows, menus, radio buttons, and check boxes. The concept of icons was later introduced by David Canfield Smith, who had written a thesis on the subject under the guidance of Kay. The PARC GUI employs a pointing device along with a keyboard. These aspects can be emphasized by using the alternative term and acronym for windows, icons, menus, pointing device (WIMP). This effort culminated in the 1973 Xerox Alto, the first computer with a GUI, though the system never reached commercial production.
The first commercially available computer with a GUI was the 1979 PERQ workstation, manufactured by Three Rivers Computer Corporation. Its design was heavily influenced by the work at Xerox PARC. In 1981, Xerox eventually commercialized the ideas from the Alto in the form of a new and enhanced system – the Xerox 8010 Information System – more commonly known as the Xerox Star. These early systems spurred many other GUI efforts, including Lisp machines by Symbolics and other manufacturers, the Apple Lisa (which presented the concept of menu bar and window controls) in 1983, the Apple Macintosh 128K in 1984, and the Atari ST with Digital Research's GEM, and Commodore Amiga in 1985. Visi On was released in 1983 for the IBM PC compatible computers, but was never popular due to its high hardware demands. Nevertheless, it was a crucial influence on the contemporary development of Microsoft Windows.
Apple, Digital Research, IBM and Microsoft used many of Xerox's ideas to develop products, and IBM's Common User Access specifications formed the basis of the GUIs used in Microsoft Windows, IBM OS/2 Presentation Manager, and the Unix Motif toolkit and window manager. These ideas evolved to create the interface found in current versions of Microsoft Windows, and in various desktop environments for Unix-like operating systems, such as macOS and Linux. Thus most current GUIs have largely common idioms.
=== Popularization ===
GUIs were a hot topic in the early 1980s. The Apple Lisa was released in 1983, and various windowing systems existed for DOS operating systems (including PC GEM and PC/GEOS). Individual applications for many platforms presented their own GUI variants. Despite the GUI's advantages, many reviewers questioned the value of the entire concept, citing hardware limits and problems in finding compatible software.
In 1984, Apple released a television commercial which introduced the Apple Macintosh during the telecast of Super Bowl XVIII by CBS, with allusions to George Orwell's noted novel Nineteen Eighty-Four. The goal of the commercial was to make people think about computers, identifying the user-friendly interface as a personal computer which departed from prior business-oriented systems, and becoming a signature representation of Apple products.
In 1985, Commodore released the Amiga 1000, along with Workbench and Kickstart 1.0 (which contained Intuition). This interface ran as a separate task, meaning it was very responsive and, unlike other GUIs of the time, it didn't freeze up when a program was busy. Additionally, it was the first GUI to introduce something resembling Virtual Desktops.
Windows 95, accompanied by an extensive marketing campaign, was a major success in the marketplace at launch and shortly became the most popular desktop operating system.
In 2007, with the iPhone and later in 2010 with the introduction of the iPad, Apple popularized the post-WIMP style of interaction for multi-touch screens, and those devices were considered to be milestones in the development of mobile devices.
The GUIs familiar to most people as of the mid-late 2010s are Microsoft Windows, macOS, and the X Window System interfaces for desktop and laptop computers, and Android, Apple's iOS, Symbian, BlackBerry OS, Windows Phone/Windows 10 Mobile, Tizen, WebOS, and Firefox OS for handheld (smartphone) devices.
== Comparison to other interfaces ==
People said it's more of a right-brain machine and all that—I think there is some truth to that. I think there is something to dealing in a graphical interface and a more kinetic interface—you're really moving information around, you're seeing it move as though it had substance. And you don't see that on a PC. The PC is very much of a conceptual machine; you move information around the way you move formulas, elements on either side of an equation. I think there's a difference.
=== Command-line interfaces ===
Because many commands are available in command-line interfaces, complex operations can be performed using a short sequence of words and symbols. Custom functions may be used to facilitate access to frequent actions.
Command-line interfaces are more lightweight, as they only recall information necessary for a task; for example, no preview thumbnails or graphical rendering of web pages. This allows greater efficiency and productivity once many commands are learned. But reaching this level takes some time because the command words may not be easily discoverable or mnemonic. Also, using the command line can become slow and error-prone when users must enter long commands comprising many parameters or several different filenames at once. However, windows, icons, menus, pointer (WIMP) interfaces present users with many widgets that represent and can trigger some of the system's available commands.
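As a small illustration of that efficiency, a single shell loop (the file names here are hypothetical) can batch-rename files, an operation that would require many individual drag-and-click steps in a typical WIMP file manager:

```shell
# Rename every .txt file in the current directory to .bak in one pass.
# "${f%.txt}" strips the .txt suffix using POSIX parameter expansion.
for f in *.txt; do
  mv -- "$f" "${f%.txt}.bak"
done
```

Once the idiom is learned, it generalizes to any number of files and any suffix, which is exactly the kind of leverage the paragraph above describes.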
GUIs can become quite hard to use when dialogs are buried deep in a system or are moved to different places during redesigns. Also, icons and dialog boxes are usually harder for users to script.
WIMPs extensively use modes, as the meaning of all keys and clicks on specific positions on the screen are redefined all the time. Command-line interfaces use modes only in limited forms, such as for current directory and environment variables.
Most modern operating systems provide both a GUI and some level of a CLI, although the GUIs usually receive more attention.
=== GUI wrappers ===
GUI wrappers circumvent the command-line interface (CLI) versions of (typically) Linux and Unix-like software applications and their text-based UIs or typed command labels. While command-line or text-based applications allow users to run a program non-interactively, GUI wrappers atop them avoid the steep learning curve of the command line, which requires commands to be typed on the keyboard. By starting a GUI wrapper, users can intuitively interact with, start, stop, and change the working parameters of a program through graphical icons and visual indicators of a desktop environment, for example. Applications may also provide both interfaces, and when they do, the GUI is usually a WIMP wrapper around the command-line version. This is especially common with applications designed for Unix-like operating systems. The command-line version is often implemented first, because it allows the developers to focus exclusively on their product's functionality without bothering about interface details such as designing icons and placing buttons. Designing programs this way also allows users to run the program in a shell script.
== Three-dimensional graphical user interface ==
Many environments and games use the methods of 3D graphics to project 3D GUI objects onto the screen. The use of 3D graphics has become increasingly common in mainstream operating systems (e.g., Windows Aero and Aqua for macOS) to create attractive interfaces, termed eye candy (which includes, for example, the use of drop shadows underneath windows and the cursor), or for functional purposes only possible using three dimensions. For example, user switching is represented by rotating a cube with faces representing each user's workspace, and window management is represented via a Rolodex-style flipping mechanism in Windows Vista (see Windows Flip 3D). In both cases, the operating system transforms windows on-the-fly while continuing to update the content of those windows.
The GUI is usually WIMP-based, although occasionally other metaphors surface, such as those used in Microsoft Bob, 3dwm, File System Navigator, File System Visualizer, 3D Mailbox, and GopherVR. Zooming (ZUI) is a related technology that promises to deliver the representation benefits of 3D environments without their usability drawbacks of orientation problems and hidden objects. In 2006, Hillcrest Labs introduced the first ZUI for television. Other innovations include the menus on the PlayStation 2; the menus on the Xbox; Sun's Project Looking Glass; Metisse, which was similar to Project Looking Glass; BumpTop, where users can manipulate documents and windows with realistic movement and physics as if they were physical documents; Croquet OS, which is built for collaboration; and compositing window managers such as Enlightenment and Compiz. Augmented reality and virtual reality also make use of 3D GUI elements.
=== In science fiction ===
3D GUIs have appeared in science fiction literature and films, even before certain technologies were feasible or in common use.
In prose fiction, 3D GUIs have been portrayed as immersive environments, as in William Gibson's "cyberspace" and Neal Stephenson's "metaverse" and "avatars".
The 1993 American film Jurassic Park features Silicon Graphics' 3D file manager File System Navigator, a real-life file manager for Unix operating systems.
The film Minority Report has scenes of police officers using specialized 3D data systems.
== See also ==
== Notes ==
== References ==
== External links ==
Evolution of Graphical User Interface in last 50 years by Raj Lal
The men who really invented the GUI by Clive Akass
Graphical User Interface Gallery, screenshots of various GUIs
Marcin Wichary's GUIdebook, Graphical User Interface gallery: over 5500 screenshots of GUI, application and icon history
The Real History of the GUI by Mike Tuck
In The Beginning Was The Command Line by Neal Stephenson
3D Graphical User Interfaces (PDF) by Farid BenHajji and Erik Dybner, Department of Computer and Systems Sciences, Stockholm University
Topological Analysis of the Gibbs Energy Function (Liquid-Liquid Equilibrium Correlation Data), Including a Thermodynamic Review and a Graphical User Interface (GUI) for Surfaces/Tie-lines/Hessian matrix analysis – University of Alicante (Reyes-Labarta et al. 2015–18)
Innovative Ways to Use Information Visualization across a Variety of Fields Archived 2024-06-20 at the Wayback Machine by Ryan Erwin (CLLAX, May 2022)