1--Preface

Taking the open-source project accompanying the paper "High-Resolution Image Synthesis with Latent Diffusion Models" as an example, this post dissects the classic components of Stable Diffusion to consolidate and deepen my understanding.

2--UNetModel

A small demo that can be stepped through in a debugger: SD_UNet

Using text-to-image generation as the running example, the following sections analyze the core modules of UNetModel.

2-1--Forward overview

In the provided text-to-image demo, only three arguments are actually passed in: x, timesteps and context, where:

        x is the randomly initialized noise tensor, shape [B*2, 4, 64, 64]; the *2 comes from Classifier-Free Diffusion Guidance (the conditional and unconditional batches are concatenated).
        timesteps is the timestep fed in at each denoising iteration, shape [B*2].
        context is the text prompt after CLIP encoding, shape [B*2, 77, 768].

def forward(self, x, timesteps=None, context=None, y=None, **kwargs):
    """
    Apply the model to an input batch.
    :param x: an [N x C x ...] Tensor of inputs.
    :param timesteps: a 1-D batch of timesteps.
    :param context: conditioning plugged in via crossattn
    :param y: an [N] Tensor of labels, if class-conditional.
    :return: an [N x C x ...] Tensor of outputs.
    """
    assert (y is not None) == (
        self.num_classes is not None
    ), "must specify y if and only if the model is class-conditional"
    hs = []
    t_emb = timestep_embedding(timesteps, self.model_channels, repeat_only=False)  # Create sinusoidal timestep embeddings.
    emb = self.time_embed(t_emb)  # MLP

    if self.num_classes is not None:
        assert y.shape == (x.shape[0],)
        emb = emb + self.label_emb(y)

    h = x.type(self.dtype)
    for module in self.input_blocks:
        h = module(h, emb, context)
        hs.append(h)
    h = self.middle_block(h, emb, context)
    for module in self.output_blocks:
        h = th.cat([h, hs.pop()], dim=1)
        h = module(h, emb, context)
    h = h.type(x.dtype)
    if self.predict_codebook_ids:
        return self.id_predictor(h)
    else:
        return self.out(h)

2-2--Generating the timestep embedding

The function timestep_embedding() together with self.time_embed() position-encodes the incoming timestep, producing sinusoidal timestep embeddings. timestep_embedding() is defined below, while self.time_embed() is simply an MLP.

def timestep_embedding(timesteps, dim, max_period=10000, repeat_only=False):
    """
    Create sinusoidal timestep embeddings.
    :param timesteps: a 1-D Tensor of N indices, one per batch element.
                      These may be fractional.
    :param dim: the dimension of the output.
    :param max_period: controls the minimum frequency of the embeddings.
    :return: an [N x dim] Tensor of positional embeddings.
    """
    if not repeat_only:
        half = dim // 2
        freqs = torch.exp(
            -math.log(max_period) * torch.arange(start=0, end=half, dtype=torch.float32) / half
        ).to(device=timesteps.device)
        args = timesteps[:, None].float() * freqs[None]
        embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1)
        if dim % 2:
            embedding = torch.cat([embedding, torch.zeros_like(embedding[:, :1])], dim=-1)
    else:
        embedding = repeat(timesteps, 'b -> b d', d=dim)
    return embedding

self.time_embed = nn.Sequential(
    linear(model_channels, time_embed_dim),
    nn.SiLU(),
    linear(time_embed_dim, time_embed_dim),
)

2-3--self.input_blocks (downsampling)

In forward(), self.input_blocks downsamples the input noise in resolution; across the whole stage the shape changes from [B*2, 4, 64, 64] -> [B*2, 1280, 8, 8].

The downsampling stage consists of 12 modules, structured as follows:

ModuleList(
  (0): TimestepEmbedSequential(
    (0): Conv2d(4, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  )
  (1-2): 2 x TimestepEmbedSequential(
    (0): ResBlock(
      (in_layers): Sequential(
        (0): GroupNorm32(32, 320, eps=1e-05, affine=True)
        (1): SiLU()
        (2): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (h_upd): Identity()
      (x_upd): Identity()
      (emb_layers): Sequential(
        (0): SiLU()
        (1): Linear(in_features=1280, out_features=320, bias=True)
      )
      (out_layers): Sequential(
        (0): GroupNorm32(32, 320, eps=1e-05, affine=True)
        (1): SiLU()
        (2): Dropout(p=0, inplace=False)
        (3): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (skip_connection): Identity()
    )
    (1): SpatialTransformer(
      (norm): GroupNorm(32, 320, eps=1e-06, affine=True)
      (proj_in): Conv2d(320, 320, kernel_size=(1, 1), stride=(1, 1))
      (transformer_blocks): ModuleList(
        (0): BasicTransformerBlock(
          (attn1): CrossAttention(
            (to_q): Linear(in_features=320, out_features=320, bias=False)
            (to_k): Linear(in_features=320, out_features=320, bias=False)
            (to_v): Linear(in_features=320, out_features=320, bias=False)
            (to_out): Sequential(
              (0): Linear(in_features=320, out_features=320, bias=True)
              (1): Dropout(p=0.0, inplace=False)
            )
          )
          (ff): FeedForward(
            (net): Sequential(
              (0): GEGLU(
                (proj): Linear(in_features=320, out_features=2560, bias=True)
              )
              (1): Dropout(p=0.0, inplace=False)
              (2): Linear(in_features=1280, out_features=320, bias=True)
            )
          )
          (attn2): CrossAttention(
            (to_q): Linear(in_features=320, out_features=320, bias=False)
            (to_k): Linear(in_features=768, out_features=320, bias=False)
            (to_v): Linear(in_features=768, out_features=320, bias=False)
            (to_out): Sequential(
              (0): Linear(in_features=320, out_features=320, bias=True)
              (1): Dropout(p=0.0, inplace=False)
            )
          )
          (norm1): LayerNorm((320,), eps=1e-05, elementwise_affine=True)
          (norm2): LayerNorm((320,), eps=1e-05, elementwise_affine=True)
          (norm3): LayerNorm((320,), eps=1e-05, elementwise_affine=True)
        )
      )
      (proj_out): Conv2d(320, 320, kernel_size=(1, 1), stride=(1, 1))
    )
  )
  (3): TimestepEmbedSequential(
    (0): Downsample(
      (op): Conv2d(320, 320, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
    )
  )
  (4): TimestepEmbedSequential(
    (0): ResBlock(
      (in_layers): Sequential(
        (0): GroupNorm32(32, 320, eps=1e-05, affine=True)
        (1): SiLU()
        (2): Conv2d(320, 640, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (h_upd): Identity()
      (x_upd): Identity()
      (emb_layers): Sequential(
        (0): SiLU()
        (1): Linear(in_features=1280, out_features=640, bias=True)
      )
      (out_layers): Sequential(
        (0): GroupNorm32(32, 640, eps=1e-05, affine=True)
        (1): SiLU()
        (2): Dropout(p=0, inplace=False)
        (3): Conv2d(640, 640, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (skip_connection): Conv2d(320, 640, kernel_size=(1, 1), stride=(1, 1))
    )
    (1): SpatialTransformer(
      (norm): GroupNorm(32, 640, eps=1e-06, affine=True)
      (proj_in): Conv2d(640, 640, kernel_size=(1, 1), stride=(1, 1))
      (transformer_blocks): ModuleList(
        (0): BasicTransformerBlock(
          (attn1): CrossAttention(
            (to_q): Linear(in_features=640, out_features=640, bias=False)
            (to_k): Linear(in_features=640, out_features=640, bias=False)
            (to_v): Linear(in_features=640, out_features=640, bias=False)
            (to_out): Sequential(
              (0): Linear(in_features=640, out_features=640, bias=True)
              (1): Dropout(p=0.0, inplace=False)
            )
          )
          (ff): FeedForward(
            (net): Sequential(
              (0): GEGLU(
                (proj): Linear(in_features=640, out_features=5120, bias=True)
              )
              (1): Dropout(p=0.0, inplace=False)
              (2): Linear(in_features=2560, out_features=640, bias=True)
            )
          )
          (attn2): CrossAttention(
            (to_q): Linear(in_features=640, out_features=640, bias=False)
            (to_k): Linear(in_features=768, out_features=640, bias=False)
            (to_v): Linear(in_features=768, out_features=640, bias=False)
            (to_out): Sequential(
              (0): Linear(in_features=640, out_features=640, bias=True)
              (1): Dropout(p=0.0, inplace=False)
            )
          )
          (norm1): LayerNorm((640,), eps=1e-05, elementwise_affine=True)
          (norm2): LayerNorm((640,), eps=1e-05, elementwise_affine=True)
          (norm3): LayerNorm((640,), eps=1e-05, elementwise_affine=True)
        )
      )
      (proj_out): Conv2d(640, 640, kernel_size=(1, 1), stride=(1, 1))
    )
  )
  (5): TimestepEmbedSequential(
    (0): ResBlock(
      (in_layers): Sequential(
        (0): GroupNorm32(32, 640, eps=1e-05, affine=True)
        (1): SiLU()
        (2): Conv2d(640, 640, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (h_upd): Identity()
      (x_upd): Identity()
      (emb_layers): Sequential(
        (0): SiLU()
        (1): Linear(in_features=1280, out_features=640, bias=True)
      )
      (out_layers): Sequential(
        (0): GroupNorm32(32, 640, eps=1e-05, affine=True)
        (1): SiLU()
        (2): Dropout(p=0, inplace=False)
        (3): Conv2d(640, 640, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (skip_connection): Identity()
    )
    (1): SpatialTransformer(
      (norm): GroupNorm(32, 640, eps=1e-06, affine=True)
      (proj_in): Conv2d(640, 640, kernel_size=(1, 1), stride=(1, 1))
      (transformer_blocks): ModuleList(
        (0): BasicTransformerBlock(
          (attn1): CrossAttention(
            (to_q): Linear(in_features=640, out_features=640, bias=False)
            (to_k): Linear(in_features=640, out_features=640, bias=False)
            (to_v): Linear(in_features=640, out_features=640, bias=False)
            (to_out): Sequential(
              (0): Linear(in_features=640, out_features=640, bias=True)
              (1): Dropout(p=0.0, inplace=False)
            )
          )
          (ff): FeedForward(
            (net): Sequential(
              (0): GEGLU(
                (proj): Linear(in_features=640, out_features=5120, bias=True)
              )
              (1): Dropout(p=0.0, inplace=False)
              (2): Linear(in_features=2560, out_features=640, bias=True)
            )
          )
          (attn2): CrossAttention(
            (to_q): Linear(in_features=640, out_features=640, bias=False)
            (to_k): Linear(in_features=768, out_features=640, bias=False)
            (to_v): Linear(in_features=768, out_features=640, bias=False)
            (to_out): Sequential(
              (0): Linear(in_features=640, out_features=640, bias=True)
              (1): Dropout(p=0.0, inplace=False)
            )
          )
          (norm1): LayerNorm((640,), eps=1e-05, elementwise_affine=True)
          (norm2): LayerNorm((640,), eps=1e-05, elementwise_affine=True)
          (norm3): LayerNorm((640,), eps=1e-05, elementwise_affine=True)
        )
      )
      (proj_out): Conv2d(640, 640, kernel_size=(1, 1), stride=(1, 1))
    )
  )
  (6): TimestepEmbedSequential(
    (0): Downsample(
      (op): Conv2d(640, 640, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
    )
  )
  (7): TimestepEmbedSequential(
    (0): ResBlock(
      (in_layers): Sequential(
        (0): GroupNorm32(32, 640, eps=1e-05, affine=True)
        (1): SiLU()
        (2): Conv2d(640, 1280, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (h_upd): Identity()
      (x_upd): Identity()
      (emb_layers): Sequential(
        (0): SiLU()
        (1): Linear(in_features=1280, out_features=1280, bias=True)
      )
      (out_layers): Sequential(
        (0): GroupNorm32(32, 1280, eps=1e-05, affine=True)
        (1): SiLU()
        (2): Dropout(p=0, inplace=False)
        (3): Conv2d(1280, 1280, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (skip_connection): Conv2d(640, 1280, kernel_size=(1, 1), stride=(1, 1))
    )
    (1): SpatialTransformer(
      (norm): GroupNorm(32, 1280, eps=1e-06, affine=True)
      (proj_in): Conv2d(1280, 1280, kernel_size=(1, 1), stride=(1, 1))
      (transformer_blocks): ModuleList(
        (0): BasicTransformerBlock(
          (attn1): CrossAttention(
            (to_q): Linear(in_features=1280, out_features=1280, bias=False)
            (to_k): Linear(in_features=1280, out_features=1280, bias=False)
            (to_v): Linear(in_features=1280, out_features=1280, bias=False)
            (to_out): Sequential(
              (0): Linear(in_features=1280, out_features=1280, bias=True)
              (1): Dropout(p=0.0, inplace=False)
            )
          )
          (ff): FeedForward(
            (net): Sequential(
              (0): GEGLU(
                (proj): Linear(in_features=1280, out_features=10240, bias=True)
              )
              (1): Dropout(p=0.0, inplace=False)
              (2): Linear(in_features=5120, out_features=1280, bias=True)
            )
          )
          (attn2): CrossAttention(
            (to_q): Linear(in_features=1280, out_features=1280, bias=False)
            (to_k): Linear(in_features=768, out_features=1280, bias=False)
            (to_v): Linear(in_features=768, out_features=1280, bias=False)
            (to_out): Sequential(
              (0): Linear(in_features=1280, out_features=1280, bias=True)
              (1): Dropout(p=0.0, inplace=False)
            )
          )
          (norm1): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)
          (norm2): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)
          (norm3): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)
        )
      )
      (proj_out): Conv2d(1280, 1280, kernel_size=(1, 1), stride=(1, 1))
    )
  )
  (8): TimestepEmbedSequential(
    (0): ResBlock(
      (in_layers): Sequential(
        (0): GroupNorm32(32, 1280, eps=1e-05, affine=True)
        (1): SiLU()
        (2): Conv2d(1280, 1280, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (h_upd): Identity()
      (x_upd): Identity()
      (emb_layers): Sequential(
        (0): SiLU()
        (1): Linear(in_features=1280, out_features=1280, bias=True)
      )
      (out_layers): Sequential(
        (0): GroupNorm32(32, 1280, eps=1e-05, affine=True)
        (1): SiLU()
        (2): Dropout(p=0, inplace=False)
        (3): Conv2d(1280, 1280, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (skip_connection): Identity()
    )
    (1): SpatialTransformer(
      (norm): GroupNorm(32, 1280, eps=1e-06, affine=True)
      (proj_in): Conv2d(1280, 1280, kernel_size=(1, 1), stride=(1, 1))
      (transformer_blocks): ModuleList(
        (0): BasicTransformerBlock(
          (attn1): CrossAttention(
            (to_q): Linear(in_features=1280, out_features=1280, bias=False)
            (to_k): Linear(in_features=1280, out_features=1280, bias=False)
            (to_v): Linear(in_features=1280, out_features=1280, bias=False)
            (to_out): Sequential(
              (0): Linear(in_features=1280, out_features=1280, bias=True)
              (1): Dropout(p=0.0, inplace=False)
            )
          )
          (ff): FeedForward(
            (net): Sequential(
              (0): GEGLU(
                (proj): Linear(in_features=1280, out_features=10240, bias=True)
              )
              (1): Dropout(p=0.0, inplace=False)
              (2): Linear(in_features=5120, out_features=1280, bias=True)
            )
          )
          (attn2): CrossAttention(
            (to_q): Linear(in_features=1280, out_features=1280, bias=False)
            (to_k): Linear(in_features=768, out_features=1280, bias=False)
            (to_v): Linear(in_features=768, out_features=1280, bias=False)
            (to_out): Sequential(
              (0): Linear(in_features=1280, out_features=1280, bias=True)
              (1): Dropout(p=0.0, inplace=False)
            )
          )
          (norm1): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)
          (norm2): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)
          (norm3): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)
        )
      )
      (proj_out): Conv2d(1280, 1280, kernel_size=(1, 1), stride=(1, 1))
    )
  )
  (9): TimestepEmbedSequential(
    (0): Downsample(
      (op): Conv2d(1280, 1280, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
    )
  )
  (10-11): 2 x TimestepEmbedSequential(
    (0): ResBlock(
      (in_layers): Sequential(
        (0): GroupNorm32(32, 1280, eps=1e-05, affine=True)
        (1): SiLU()
        (2): Conv2d(1280, 1280, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (h_upd): Identity()
      (x_upd): Identity()
      (emb_layers): Sequential(
        (0): SiLU()
        (1): Linear(in_features=1280, out_features=1280, bias=True)
      )
      (out_layers): Sequential(
        (0): GroupNorm32(32, 1280, eps=1e-05, affine=True)
        (1): SiLU()
        (2): Dropout(p=0, inplace=False)
        (3): Conv2d(1280, 1280, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (skip_connection): Identity()
    )
  )
)

All 12 modules are wrapped in the TimestepEmbedSequential class, which, depending on the type of each layer, combines the input noise x with the timestep embedding and the prompt context:

class TimestepEmbedSequential(nn.Sequential, TimestepBlock):
    """
    A sequential module that passes timestep embeddings to the children that
    support it as an extra input.
    """

    def forward(self, x, emb, context=None):
        for layer in self:
            if isinstance(layer, TimestepBlock):
                x = layer(x, emb)
            elif isinstance(layer, SpatialTransformer):
                x = layer(x, context)
            else:
                x = layer(x)
        return x

2-3-1--Module 0

Module 0 is a 2D convolution that performs the initial feature extraction on the input noise:

# init
self.input_blocks = nn.ModuleList(
    [TimestepEmbedSequential(conv_nd(dims, in_channels, model_channels, 3, padding=1))]
)

# print self.input_blocks[0]
TimestepEmbedSequential(
  (0): Conv2d(4, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)

2-3-2--Module 1 and Module 2

Modules 1 and 2 have the same structure: one ResBlock followed by one SpatialTransformer:

# init
for _ in range(num_res_blocks):
    layers = [
        ResBlock(
            ch,
            time_embed_dim,
            dropout,
            out_channels=mult * model_channels,
            dims=dims,
            use_checkpoint=use_checkpoint,
            use_scale_shift_norm=use_scale_shift_norm,
        )
    ]
    ch = mult * model_channels
    if ds in attention_resolutions:
        if num_head_channels == -1:
            dim_head = ch // num_heads
        else:
            num_heads = ch // num_head_channels
            dim_head = num_head_channels
        if legacy:
            # num_heads = 1
            dim_head = ch // num_heads if use_spatial_transformer else num_head_channels
        layers.append(
            AttentionBlock(
                ch,
                use_checkpoint=use_checkpoint,
                num_heads=num_heads,
                num_head_channels=dim_head,
                use_new_attention_order=use_new_attention_order,
            ) if not use_spatial_transformer else SpatialTransformer(
                ch, num_heads, dim_head, depth=transformer_depth, context_dim=context_dim
            )
        )
    self.input_blocks.append(TimestepEmbedSequential(*layers))
    self._feature_size += ch
    input_block_chans.append(ch)

# print self.input_blocks[1] (self.input_blocks[2] prints identically)
TimestepEmbedSequential(
  (0): ResBlock(
    (in_layers): Sequential(
      (0): GroupNorm32(32, 320, eps=1e-05, affine=True)
      (1): SiLU()
      (2): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    )
    (h_upd): Identity()
    (x_upd): Identity()
    (emb_layers): Sequential(
      (0): SiLU()
      (1): Linear(in_features=1280, out_features=320, bias=True)
    )
    (out_layers): Sequential(
      (0): GroupNorm32(32, 320, eps=1e-05, affine=True)
      (1): SiLU()
      (2): Dropout(p=0, inplace=False)
      (3): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    )
    (skip_connection): Identity()
  )
  (1): SpatialTransformer(
    (norm): GroupNorm(32, 320, eps=1e-06, affine=True)
    (proj_in): Conv2d(320, 320, kernel_size=(1, 1), stride=(1, 1))
    (transformer_blocks): ModuleList(
      (0): BasicTransformerBlock(
        (attn1): CrossAttention(
          (to_q): Linear(in_features=320, out_features=320, bias=False)
          (to_k): Linear(in_features=320, out_features=320, bias=False)
          (to_v): Linear(in_features=320, out_features=320, bias=False)
          (to_out): Sequential(
            (0): Linear(in_features=320, out_features=320, bias=True)
            (1): Dropout(p=0.0, inplace=False)
          )
        )
        (ff): FeedForward(
          (net): Sequential(
            (0): GEGLU(
              (proj): Linear(in_features=320, out_features=2560, bias=True)
            )
            (1): Dropout(p=0.0, inplace=False)
            (2): Linear(in_features=1280, out_features=320, bias=True)
          )
        )
        (attn2): CrossAttention(
          (to_q): Linear(in_features=320, out_features=320, bias=False)
          (to_k): Linear(in_features=768, out_features=320, bias=False)
          (to_v): Linear(in_features=768, out_features=320, bias=False)
          (to_out): Sequential(
            (0): Linear(in_features=320, out_features=320, bias=True)
            (1): Dropout(p=0.0, inplace=False)
          )
        )
        (norm1): LayerNorm((320,), eps=1e-05, elementwise_affine=True)
        (norm2): LayerNorm((320,), eps=1e-05, elementwise_affine=True)
        (norm3): LayerNorm((320,), eps=1e-05, elementwise_affine=True)
      )
    )
    (proj_out): Conv2d(320, 320, kernel_size=(1, 1), stride=(1, 1))
  )
)

2-3-3--Module 3

Module 3 is a strided 2D convolution that downsamples the feature map:

# init
if level != len(channel_mult) - 1:
    out_ch = ch
    self.input_blocks.append(
        TimestepEmbedSequential(
            ResBlock(
                ch,
                time_embed_dim,
                dropout,
                out_channels=out_ch,
                dims=dims,
                use_checkpoint=use_checkpoint,
                use_scale_shift_norm=use_scale_shift_norm,
                down=True,
            )
            if resblock_updown
            else Downsample(ch, conv_resample, dims=dims, out_channels=out_ch)
        )
    )

# print self.input_blocks[3]
TimestepEmbedSequential(
  (0): Downsample(
    (op): Conv2d(320, 320, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
  )
)

2-3-4--Module 4, Module 5, Module 7 and Module 8

These have the same structure as Modules 1 and 2 (one ResBlock followed by one SpatialTransformer) and differ only in feature dimensions.

2-3-5--Module 6 and Module 9

These have the same structure as Module 3: a downsampling 2D convolution.

2-3-6--Module 10 and Module 11

Modules 10 and 11 have the same structure, consisting of a single ResBlock:

# print self.input_blocks[10] (self.input_blocks[11] prints identically)
TimestepEmbedSequential(
  (0): ResBlock(
    (in_layers): Sequential(
      (0): GroupNorm32(32, 1280, eps=1e-05, affine=True)
      (1): SiLU()
      (2): Conv2d(1280, 1280, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    )
    (h_upd): Identity()
    (x_upd): Identity()
    (emb_layers): Sequential(
      (0): SiLU()
      (1): Linear(in_features=1280, out_features=1280, bias=True)
    )
    (out_layers): Sequential(
      (0): GroupNorm32(32, 1280, eps=1e-05, affine=True)
      (1): SiLU()
      (2): Dropout(p=0, inplace=False)
      (3): Conv2d(1280, 1280, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    )
    (skip_connection): Identity()
  )
)
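The sinusoidal timestep embedding described above can be reproduced in isolation. Below is a minimal sketch of the non-repeat branch, assuming only torch and math are available; the batch size of 4 (B*2 with classifier-free guidance) and dim=320 (model_channels) follow the shapes used in this walkthrough.

```python
import math
import torch

def sinusoidal_timestep_embedding(timesteps, dim, max_period=10000):
    # Half of the channels carry cos, half carry sin, at geometrically
    # spaced frequencies from 1 down to 1/max_period.
    half = dim // 2
    freqs = torch.exp(
        -math.log(max_period) * torch.arange(0, half, dtype=torch.float32) / half
    )
    args = timesteps[:, None].float() * freqs[None]
    return torch.cat([torch.cos(args), torch.sin(args)], dim=-1)

# One timestep per batch element; with classifier-free guidance the
# conditional and unconditional halves share the same timestep values.
t = torch.tensor([981, 981, 961, 961])
emb = sinusoidal_timestep_embedding(t, 320)
print(emb.shape)  # torch.Size([4, 320])
```

In the real model this [B*2, 320] tensor is then passed through self.time_embed (an MLP: linear, SiLU, linear) to produce the [B*2, 1280] embedding consumed by every ResBlock.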
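The dispatch logic of TimestepEmbedSequential can also be exercised without the full model. The sketch below uses hypothetical stand-in layers (AddEmb for a ResBlock, a no-op SpatialTransformer) purely to show how each layer type receives either (x, emb), (x, context), or just x; only the TimestepEmbedSequential.forward body mirrors the original.

```python
import torch
import torch.nn as nn

class TimestepBlock(nn.Module):
    """Marker base class for layers that take (x, emb)."""

class AddEmb(TimestepBlock):
    # Stand-in for a ResBlock: projects emb and adds it channel-wise.
    def __init__(self, ch, emb_dim):
        super().__init__()
        self.proj = nn.Linear(emb_dim, ch)

    def forward(self, x, emb):
        return x + self.proj(emb)[:, :, None, None]

class SpatialTransformer(nn.Module):
    # Stand-in: accepts context but returns x unchanged.
    def forward(self, x, context):
        return x

class TimestepEmbedSequential(nn.Sequential, TimestepBlock):
    # Same routing as the original: TimestepBlocks get emb,
    # SpatialTransformers get context, everything else gets only x.
    def forward(self, x, emb, context=None):
        for layer in self:
            if isinstance(layer, TimestepBlock):
                x = layer(x, emb)
            elif isinstance(layer, SpatialTransformer):
                x = layer(x, context)
            else:
                x = layer(x)
        return x

block = TimestepEmbedSequential(
    nn.Conv2d(4, 8, 3, padding=1),  # plain layer: called as layer(x)
    AddEmb(8, 16),                  # TimestepBlock: called as layer(x, emb)
    SpatialTransformer(),           # called as layer(x, context)
)
x = torch.randn(2, 4, 64, 64)
out = block(x, torch.randn(2, 16), context=torch.randn(2, 77, 768))
print(out.shape)  # torch.Size([2, 8, 64, 64])
```

This is exactly why a single loop over self.input_blocks in forward() suffices: the wrapper, not the caller, decides which extra inputs each child layer sees.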
