<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="description"
content="AI Now Lives in Time: Temporal Dense Networks for Distributed Generalization.">
<meta name="keywords" content="Temporal AI, DenseNet, Neural Networks, PyTorch, XOR Generalization">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>Temporal DenseNet: AI Now Lives in Time</title>
<link href="https://fonts.googleapis.com/css?family=Google+Sans|Noto+Sans|Castoro" rel="stylesheet">
<link rel="stylesheet" href="./static/css/bulma.min.css">
<link rel="stylesheet" href="./static/css/bulma-carousel.min.css">
<link rel="stylesheet" href="./static/css/bulma-slider.min.css">
<link rel="stylesheet" href="./static/css/fontawesome.all.min.css">
<link rel="stylesheet" href="https://cdn.jsdelivr.net/gh/jpswalsh/academicons@1/css/academicons.min.css">
<link rel="stylesheet" href="./static/css/index.css">
<link rel="icon" href="./static/images/favicon.svg">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
<script defer src="./static/js/fontawesome.all.min.js"></script>
<script src="./static/js/bulma-carousel.min.js"></script>
<script src="./static/js/bulma-slider.min.js"></script>
</head>
<body>
<section class="hero">
<div class="hero-body">
<div class="container is-max-desktop">
<div class="columns is-centered">
<div class="column has-text-centered">
<h1 class="title is-1 publication-title">AI Now Lives in Time: Temporal DenseNet</h1>
<div class="is-size-5 publication-authors">
<span class="author-block">Independent Research</span>
</div>
<div class="column has-text-centered">
<div class="publication-links">
<span class="link-block">
<a href="#code-section" class="button is-normal is-rounded is-dark">
<span class="icon"><i class="fab fa-github"></i></span>
<span>Code</span>
</a>
</span>
</div>
</div>
</div>
</div>
</div>
</div>
</section>
<section class="section">
<div class="container is-max-desktop">
<div class="columns is-centered has-text-centered">
<div class="column is-four-fifths">
<h2 class="title is-3">Abstract</h2>
<div class="content has-text-justified">
<p>
Temporal DenseNet is a fully connected “temporal” architecture in which each neuron receives input not only from neurons in earlier layers of the current computation, but also from all neurons of the previous computation step (tick).
</p>
<p>
Because information is spread across layers and time, no single neuron can directly memorize an input-output pair. Instead, the network learns patterns in a distributed way, naturally favoring generalization over memorization. This makes the architecture well suited to tasks with small datasets or a high risk of overfitting: the structure itself discourages simple lookup-table memorization and encourages learning the underlying rules.
</p>
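<p>Concretely, a single layer update at one tick combines the concatenated outputs of earlier layers in the same tick with the concatenated outputs of <em>all</em> layers from the previous tick. A minimal sketch of the first layer's update (the batch size and the 8&#215;8&#215;8 widths here are illustrative):</p>

```python
import torch

hidden_sizes = [8, 8, 8]                    # three hidden layers of width 8
x = torch.randn(4, 2)                       # batch of 4 two-feature inputs

# Previous-tick outputs: one tensor per layer (zeros on the first tick).
prev = [torch.zeros(4, h) for h in hidden_sizes]
prev_cat = torch.cat(prev, dim=1)           # every neuron from the last tick

# Layer 0 at the current tick sees the raw input plus all previous-tick neurons.
U0 = torch.nn.Linear(2, hidden_sizes[0])
W0 = torch.nn.Linear(sum(hidden_sizes), hidden_sizes[0])
h0 = torch.tanh(U0(x) + W0(prev_cat))
print(h0.shape)  # torch.Size([4, 8])
```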
</div>
</div>
</div>
</div>
</section>
<section class="section">
<div class="container is-max-desktop">
<div class="columns is-centered">
<div class="column is-full-width">
<h2 class="title is-3">Temporal Accumulation Results</h2>
<div class="columns is-vcentered">
<div class="column">
<p>
The model demonstrates extreme precision on the XOR dataset by utilizing <strong>time ticks</strong> to accumulate state.
</p>
<ul style="list-style-type: square; margin-left: 20px;">
<li><strong>Final Loss:</strong> 0.000002</li>
<li><strong>Ticks:</strong> 3 iterative steps</li>
<li><strong>Structure:</strong> three hidden layers of width 8 (8&#215;8&#215;8)</li>
</ul>
</div>
<div class="column">
<table class="table is-narrow is-fullwidth is-bordered">
<thead>
<tr><th>Epoch</th><th>MSE Loss</th></tr>
</thead>
<tbody>
<tr><td>200</td><td>0.000091</td></tr>
<tr><td>1000</td><td>0.000008</td></tr>
<tr><td>2000</td><td>0.000002</td></tr>
</tbody>
</table>
</div>
</div>
<h3 class="title is-4">Final Prediction Accuracy</h3>
<p>The network achieves near-perfect separation for non-linear logic:</p>
<pre style="background: #232323; color: #00ff00; padding: 15px; border-radius: 8px;">
Raw Predictions:
[[7.0460670e-04] [9.9793684e-01] [9.9922490e-01] [2.1156450e-03]]
Rounded: [0, 1, 1, 0]</pre>
</div>
</div>
</div>
</section>
<section class="section" id="code-section">
<div class="container is-max-desktop">
<h2 class="title is-3">PyTorch Implementation</h2>
<div class="content">
<p>The architecture uses <code>nn.ModuleList</code> to manage the current-tick layers ($U$) and the previous-tick recurrent layers ($W$).</p>
<pre style="background-color: #f5f5f5; padding: 20px; border-radius: 10px; border: 1px solid #ddd; font-size: 0.9em;"><code>import torch
import torch.nn as nn

class TemporalDenseNet(nn.Module):
    def __init__(self, input_size, hidden_sizes, output_size):
        super().__init__()
        self.num_layers = len(hidden_sizes)
        self.hidden_sizes = hidden_sizes
        self.prev_concat_size = sum(hidden_sizes)

        # Current-tick linear layers U[i]: layer i sees the raw input (i == 0)
        # or the concatenation of all earlier layers in this tick.
        self.U = nn.ModuleList()
        for i in range(self.num_layers):
            in_size = input_size if i == 0 else sum(hidden_sizes[:i])
            self.U.append(nn.Linear(in_size, hidden_sizes[i]))

        # Previous-tick linear layers W[i]: every layer sees the
        # concatenation of all layer outputs from the previous tick.
        self.W = nn.ModuleList([nn.Linear(self.prev_concat_size, hidden_sizes[i])
                                for i in range(self.num_layers)])
        self.out = nn.Linear(self.prev_concat_size, output_size)
        self.activation = torch.tanh

    def forward(self, x, prev_outputs=None):
        layer_outputs = []
        prev_cat = torch.cat(prev_outputs, dim=1) if prev_outputs is not None else None
        for i in range(self.num_layers):
            current_input = x if i == 0 else torch.cat(layer_outputs, dim=1)
            out = self.U[i](current_input)
            if prev_cat is not None:
                out = out + self.W[i](prev_cat)
            out = self.activation(out)
            layer_outputs.append(out)
        final_cat = torch.cat(layer_outputs, dim=1)
        return layer_outputs, torch.sigmoid(self.out(final_cat))
</code></pre>
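<p>The tick loop is driven externally: each epoch, the model is applied repeatedly and the layer outputs of one tick are fed back as <code>prev_outputs</code> for the next. The sketch below shows this on XOR; the 3-tick accumulation and 8&#215;8&#215;8 structure come from the results above, while the seed, optimizer, learning rate, and epoch count are illustrative assumptions, not the original script's settings.</p>

```python
import torch
import torch.nn as nn

# Compact restatement of TemporalDenseNet so this sketch runs standalone.
class TemporalDenseNet(nn.Module):
    def __init__(self, input_size, hidden_sizes, output_size):
        super().__init__()
        self.num_layers = len(hidden_sizes)
        self.prev_concat_size = sum(hidden_sizes)
        self.U = nn.ModuleList()
        for i in range(self.num_layers):
            in_size = input_size if i == 0 else sum(hidden_sizes[:i])
            self.U.append(nn.Linear(in_size, hidden_sizes[i]))
        self.W = nn.ModuleList([nn.Linear(self.prev_concat_size, h)
                                for h in hidden_sizes])
        self.out = nn.Linear(self.prev_concat_size, output_size)

    def forward(self, x, prev_outputs=None):
        outs = []
        prev_cat = torch.cat(prev_outputs, dim=1) if prev_outputs is not None else None
        for i in range(self.num_layers):
            h = self.U[i](x if i == 0 else torch.cat(outs, dim=1))
            if prev_cat is not None:
                h = h + self.W[i](prev_cat)
            outs.append(torch.tanh(h))
        return outs, torch.sigmoid(self.out(torch.cat(outs, dim=1)))

torch.manual_seed(0)  # seed chosen for reproducibility of this sketch

# XOR dataset
X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

model = TemporalDenseNet(2, [8, 8, 8], 1)
opt = torch.optim.Adam(model.parameters(), lr=0.01)  # lr is an assumption
loss_fn = nn.MSELoss()

TICKS = 3
for epoch in range(2000):
    prev = None
    for _ in range(TICKS):          # accumulate state across ticks
        prev, pred = model(X, prev)
    loss = loss_fn(pred, y)         # loss on the final tick's prediction
    opt.zero_grad()
    loss.backward()
    opt.step()

print(pred.detach().round().flatten().tolist())
```

<p>Note that gradients flow back through the unrolled tick loop, so the previous-tick weights $W$ are trained jointly with the current-tick weights $U$.</p>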
</div>
</div>
</section>
<footer class="footer">
<div class="container">
<div class="content has-text-centered">
<p>Research integrated from <code>timeBasedAIDense.py</code>.</p>
</div>
</div>
</footer>
</body>
</html>