NN Blocks
influpaint.models.nn_blocks
ConvNextBlock
Bases: Module
https://arxiv.org/abs/2201.03545
Source code in influpaint/models/nn_blocks.py
PreNorm
Bases: Module
Group normalization
The DDPM authors interleave the convolutional/attention layers of the U-Net with group normalization
(Wu et al., 2018). Below, we define a PreNorm class, which will be
used to apply groupnorm before the attention layer, as we'll see further on. Note that there has been
some debate about whether to apply normalization before or after attention in Transformers.
Source code in influpaint/models/nn_blocks.py
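As a rough illustration, such a wrapper can be a thin `nn.Module` holding a `nn.GroupNorm` and the wrapped function. The sketch below is hypothetical and may differ in detail from the actual class in `nn_blocks.py`:

```python
import torch
from torch import nn

class PreNorm(nn.Module):
    """Illustrative sketch (may differ from the actual nn_blocks.py implementation):
    normalize the input with GroupNorm, then call the wrapped module (e.g. attention)."""

    def __init__(self, dim, fn):
        super().__init__()
        self.fn = fn
        self.norm = nn.GroupNorm(1, dim)  # a single group over all `dim` channels

    def forward(self, x):
        return self.fn(self.norm(x))
```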
ResnetBlock
Bases: Module
https://arxiv.org/abs/1512.03385
Source code in influpaint/models/nn_blocks.py
SinusoidalPositionEmbeddings
Bases: Module
Position embeddings
The SinusoidalPositionEmbeddings module takes a tensor of shape (batch_size, 1) as input
(i.e. the noise levels of several noisy images in a batch), and turns this into a tensor of
shape (batch_size, dim), with dim being the dimensionality of the position embeddings.
This embedding is then added to the features inside each residual block, as we will see further on.
Source code in influpaint/models/nn_blocks.py
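For intuition, a minimal sketch of such a module is shown below (hypothetical; the exact implementation is in the source file referenced above):

```python
import math
import torch
from torch import nn

class SinusoidalPositionEmbeddings(nn.Module):
    """Illustrative sketch: map a batch of noise levels to (batch_size, dim) sinusoidal embeddings."""

    def __init__(self, dim):
        super().__init__()
        self.dim = dim

    def forward(self, time):
        half_dim = self.dim // 2
        freqs = math.log(10000) / (half_dim - 1)
        freqs = torch.exp(torch.arange(half_dim, device=time.device) * -freqs)
        args = time.float().view(-1, 1) * freqs[None, :]      # (batch_size, half_dim)
        return torch.cat((args.sin(), args.cos()), dim=-1)    # (batch_size, dim)

emb = SinusoidalPositionEmbeddings(dim=64)
t = torch.randint(0, 1000, (8,))   # noise levels for a batch of 8 images
print(emb(t).shape)                # torch.Size([8, 64])
```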
Unet
Bases: Module
Conditional U-Net
Now that we've defined all building blocks (position embeddings, ResNet/ConvNeXT blocks, attention and group
normalization), it's time to define the entire neural network. Recall that the job of the network
\(\mathbf{\epsilon}_\theta(\mathbf{x}_t, t)\) is to take in a batch of noisy images + noise levels,
and output the noise added to the input. More formally:
- the network takes a batch of noisy images of shape `(batch_size, num_channels, height, width)` and a batch
of noise levels of shape `(batch_size, 1)` as input, and returns a tensor of shape
`(batch_size, num_channels, height, width)`
The network is built up as follows:
* first, a convolutional layer is applied on the batch of noisy images, and position embeddings are computed for the noise levels
* next, a sequence of downsampling stages is applied. Each downsampling stage consists of 2 ResNet/ConvNeXT blocks + groupnorm + attention + residual connection + a downsample operation
* in the middle of the network, ResNet or ConvNeXT blocks are again applied, interleaved with attention
* next, a sequence of upsampling stages is applied. Each upsampling stage consists of 2 ResNet/ConvNeXT blocks + groupnorm + attention + residual connection + an upsample operation
* finally, a ResNet/ConvNeXT block followed by a convolutional layer is applied.
Ultimately, neural networks stack up layers as if they were lego blocks (but it's important to [understand how they work](http://karpathy.github.io/2019/04/25/recipe/)).
Source code in influpaint/models/nn_blocks.py
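As a hypothetical shape check (the constructor arguments `dim`, `channels`, and `dim_mults` are assumptions; consult the source code above for the actual signature), usage might look like:

```python
import torch
from influpaint.models.nn_blocks import Unet

# Hypothetical constructor arguments -- check the actual signature in nn_blocks.py.
model = Unet(dim=64, channels=1, dim_mults=(1, 2, 4))

x = torch.randn(8, 1, 64, 64)       # batch of noisy images
t = torch.randint(0, 1000, (8,))    # one noise level per image
noise_pred = model(x, t)
assert noise_pred.shape == x.shape  # the network predicts noise of the same shape as its input
```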