[1511.02 --> 1512.30] doesn't work well.
[1513.44 --> 1518.76] So I'm sure you're familiar with Serf or Consul, at the very least, from HashiCorp.
[1519.42 --> 1520.16] Oh yeah, definitely.
[1521.16 --> 1524.56] Yeah, or Cassandra, for example.
[1525.24 --> 1533.32] And so all of these systems (Akka is another one, from the JVM) are gossip-based membership systems.
[1533.78 --> 1540.66] And they exhibit very interesting failure behavior.
[1541.66 --> 1551.98] If you have a deployment of 100 or 1,000 nodes and you turn off 50% or 60% of the nodes, things aren't going to go well.
[1553.32 --> 1562.92] Cassandra gets to data loss, Consul takes a long time to stabilize, and Akka similarly.
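To make the discussion concrete, here is a minimal push-gossip sketch of the kind of membership dissemination these systems build on. This is a toy illustration of the healthy case (how fast a membership update spreads), not the actual Serf, Consul, Cassandra, or Akka implementations; the node counts, fanout, and heartbeat model are all illustrative assumptions.

```python
import random

def gossip_round(views, fanout=2, rng=random):
    """One synchronous gossip round: every node pushes its view
    (member -> highest heartbeat seen) to `fanout` random peers,
    and each receiver keeps the maximum heartbeat per member."""
    nodes = list(views)
    for node in nodes:
        peers = rng.sample([n for n in nodes if n != node], fanout)
        for peer in peers:
            for member, hb in views[node].items():
                if hb > views[peer].get(member, -1):
                    views[peer][member] = hb

def rounds_to_converge(n=50, seed=42):
    """Node 0 bumps its heartbeat; count rounds until every
    node in the cluster has seen the update."""
    rng = random.Random(seed)
    views = {i: {j: 0 for j in range(n)} for i in range(n)}
    views[0][0] = 1  # the "rumor": node 0's new heartbeat
    rounds = 0
    while any(views[i][0] < 1 for i in range(n)):
        gossip_round(views, rng=rng)
        rounds += 1
    return rounds
```

With a small fanout, the update reaches all n nodes in roughly O(log n) rounds, which is why gossip scales well when everything is healthy. What the sketch deliberately does not model is the failure behavior described above, where a large fraction of nodes disappearing at once makes the views disagree for a long time.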
[1564.14 --> 1575.78] And so what I've been working on, or what I've been spending my free time on with the VMware Research group, is improving the gossip algorithm.
[1576.10 --> 1580.98] And so I'm working on that, actually, in my GitHub account.
[1581.98 --> 1587.58] And so the result we have is that we go from interesting failure conditions to the ideal case.
[1588.48 --> 1590.30] And so I want to prove that out a little bit more