0. Introduction

In defect detection, defective samples are extremely rare in real-world data; sometimes only one defect appears among thousands or even tens of thousands of parts. Previous models were therefore trained only on normal samples, learning the distribution of normal data. At test time, a threshold then has to be set manually for each product category to separate normal from abnormal instances, which is impractical in a real production environment.

Large vision-language models (LVLMs) such as MiniGPT-4 and LLaVA have shown strong image-understanding ability and achieve remarkable performance across a range of visual tasks. Can large models also be applied to industrial defect detection? AnomalyGPT explores exactly this question.

1. AnomalyGPT

Existing approaches to defect detection fall into two broad categories: reconstruction-based and feature-embedding-based. Reconstruction-based methods reconstruct an anomalous sample into its corresponding normal sample and detect anomalies from the reconstruction error. Feature-embedding-based methods model the feature embeddings of normal samples and judge whether a test sample is anomalous by measuring the distance between its embedding and a bank of normal embeddings. Both families, however, need large amounts of data and retraining whenever new data arrives, which cannot meet the needs of real industrial defect detection.
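For readers unfamiliar with the feature-embedding idea above, here is a minimal, self-contained sketch of it, with random vectors standing in for real backbone features; it is illustrative only and is not how AnomalyGPT itself works:

import numpy as np

def anomaly_scores(test_feats: np.ndarray, bank: np.ndarray) -> np.ndarray:
    # Pairwise Euclidean distances between test patches (M, D) and bank entries (N, D)
    dists = np.linalg.norm(test_feats[:, None, :] - bank[None, :, :], axis=-1)
    # A patch that is far from every normal embedding gets a high anomaly score
    return dists.min(axis=1)

rng = np.random.default_rng(0)
bank = rng.normal(size=(1000, 64))            # embeddings collected from normal samples
normal_like = rng.normal(size=(5, 64))        # test patches resembling normal data
shifted = rng.normal(loc=0.8, size=(5, 64))   # test patches from a shifted distribution
print(anomaly_scores(normal_like, bank).mean(), anomaly_scores(shifted, bank).mean())

The shifted patches typically score higher; a conventional detector would threshold that score per product, and avoiding exactly that manual threshold is one of AnomalyGPT's goals.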
The authors of the AnomalyGPT paper propose an innovative solution: they are the first to apply a large vision-language model to industrial anomaly detection, introducing the AnomalyGPT model. AnomalyGPT can detect the presence, category, and location of anomalies without any manually set threshold. It can also describe the image and supports interactive use, so users can ask follow-up questions based on their needs and the answers they receive. In addition, AnomalyGPT can perform in-context learning from a small number of normal samples (no defective samples required), allowing it to adapt quickly to previously unseen objects.

The contributions of AnomalyGPT: it is the first to apply a large vision-language model to industrial anomaly detection; it can output defect masks; it supports multi-turn dialogue; and it generalizes to new data from only a small number of samples.

2. Environment Setup

2.1 GPU environment

Deploying AnomalyGPT locally requires GPU acceleration with at least 8 GB of VRAM. The setup used here is Windows 10 with an RTX 3090 Ti (24 GB VRAM), CUDA 11.8, and cuDNN 8.9.

2.2 Create the environment

# Create and activate the environment
conda create -n agpt python=3.10
conda activate agpt

2.3 Download the source code

git clone https://github.com/CASIA-IVA-Lab/AnomalyGPT.git

2.4 Install dependencies

2.4.1 PyTorch

Install PyTorch separately, choosing the build that matches your CUDA version:

conda install pytorch==2.0.0 torchvision==0.15.0 torchaudio==2.0.0 pytorch-cuda=11.8 -c pytorch -c nvidia

2.4.2 Install deepspeed

The official environment installs the deepspeed library by default to support training with the sat library. It is not required for inference, and some versions fail to install on certain Windows setups. Version 0.3.16 works here:

pip install deepspeed==0.3.16

2.4.3 Install the remaining dependencies from requirements.txt

Open requirements.txt in the source tree, remove the torch and deepspeed entries, then install the rest:

pip install -r requirements.txt
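As a quick optional check before moving on (a minimal sketch, run inside the agpt environment), confirm that the installed PyTorch build can see the GPU:

import torch

print(torch.__version__, torch.version.cuda)  # expect 2.0.0 and 11.8
print(torch.cuda.is_available())              # should print True
print(torch.cuda.get_device_name(0))          # e.g. "NVIDIA GeForce RTX 3090 Ti"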
3. Model Download and Merging

3.1 ImageBind model

Download the model from https://dl.fbaipublicfiles.com/imagebind/imagebind_huge.pth and place it in the pretrained_ckpt/imagebind_ckpt/ directory of the project, as described in the official repository.

3.2 Merging the model

The language model is obtained by merging the LLaMA weights with the Vicuna Delta weights.

3.2.1 Download the LLaMA 7B model

The 7B model can be obtained from the official LLaMA release; I have also uploaded it to Baidu Netdisk (shared folder "LLaMA", link: https://pan.baidu.com/s/1syklVFou4r252PxcCaZY7w, extraction code: 5ffx). Download only the 7B weights and tokenizer.model, and put the model files in a 7B folder.

Then create an LLaMA directory in the AnomalyGPT root and copy the 7B directory into it.

3.2.2 Convert to Hugging Face format

Install protobuf:

pip install protobuf==3.20

Convert the model. You can follow the official documentation; the conversion code is the convert_llama_weights_to_hf.py script from the Hugging Face transformers repository (src/transformers/models/llama/convert_llama_weights_to_hf.py). Save a copy of that script in the project root and run:

python convert_llama_weights_to_hf.py --input_dir llama/7B --model_size 7B --output_dir llama/7Bhuggingface

The conversion may fail with an error like:

from transformers.convert_slow_tokenizer import TikTokenConverter
ImportError: cannot import name 'TikTokenConverter' from 'transformers.convert_slow_tokenizer' (C:\Users\Easyai\.conda\envs\agpt\lib\site-packages\transformers\convert_slow_tokenizer.py)

Fix:

pip install -e .
or
pip install --upgrade transformers

After the conversion, a new 7Bhuggingface directory containing the converted checkpoint appears under LLaMA.
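Optionally, run a small sanity check (a sketch; the path assumes the --output_dir used above) that the converted checkpoint can be read back by transformers without loading the full weights:

from transformers import AutoConfig, AutoTokenizer

path = "llama/7Bhuggingface"  # the --output_dir used in the conversion command
config = AutoConfig.from_pretrained(path)
print(config.model_type, config.num_hidden_layers, config.hidden_size)
# A LLaMA 7B checkpoint should report model_type "llama", 32 layers, and hidden size 4096.
tokenizer = AutoTokenizer.from_pretrained(path)
print(len(tokenizer))  # LLaMA's vocabulary size, 32000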
3.2.3 Get the Vicuna Delta weights

Download the model from https://huggingface.co/lmsys/vicuna-7b-delta-v0/tree/main, then create a matching directory under LLaMA (vicuna-7b-v0-delta, the path passed as --delta in the merge command below) and put the model files there.
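If you prefer to fetch the delta weights from a script rather than a browser, newer versions of huggingface_hub (pulled in as a transformers dependency) support something like the following sketch; the target directory matches the --delta path used in the merge step:

from huggingface_hub import snapshot_download

# Download lmsys/vicuna-7b-delta-v0 into the directory the merge step expects.
snapshot_download(repo_id="lmsys/vicuna-7b-delta-v0", local_dir="llama/vicuna-7b-v0-delta")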
3.2.4 Merge LLaMA and the Vicuna Delta

Install FastChat:

pip install fschat

This may fail while building the wavedrom dependency, with an error like:

Collecting wavedrom (from markdown2[all]->fschat==0.2.1)
  Preparing metadata (setup.py) ... error
  error: subprocess-exited-with-error
  python setup.py egg_info did not run successfully.
  WARNING: The repository located at 172.16.2.230 is not a trusted or secure host and is being ignored.
  ERROR: Could not find a version that satisfies the requirement setuptools_scm (from versions: none)
  ERROR: No matching distribution found for setuptools_scm
  ...
  error: metadata-generation-failed
  note: This is an issue with the package mentioned above, not pip.

Fix it in two steps. First install setuptools_scm:

pip install setuptools_scm

Then install wavedrom from the Tsinghua mirror:

pip install wavedrom -i https://pypi.tuna.tsinghua.edu.cn/simple

Then install FastChat:

pip install fschat==0.1.10
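For context on what the merge in the next step does: FastChat's apply_delta reconstructs the Vicuna weights by adding the released delta weights onto the base LLaMA weights, parameter by parameter. A toy sketch of that idea (not FastChat's actual code):

import torch

base = {"w": torch.tensor([1.0, 2.0, 3.0])}    # stands in for a LLaMA state_dict entry
delta = {"w": torch.tensor([0.1, -0.2, 0.3])}  # stands in for the Vicuna delta entry
merged = {name: base[name] + delta[name] for name in base}
print(merged["w"])  # tensor([1.1000, 1.8000, 3.3000])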
Merge the models:

python -m fastchat.model.apply_delta --base llama/7Bhuggingface --target pretrained_ckpt/vicuna_ckpt/7b_v0 --delta llama/vicuna-7b-v0-delta

Note that --target is the folder the merged model is written to, i.e. AnomalyGPT/pretrained_ckpt/vicuna_ckpt/7b_v0.

3.3 Get the AnomalyGPT Delta weights

3.3.1 Delta weights

Download the delta weights from the link given on the official GitHub page: https://huggingface.co/openllmplayground/pandagpt_7b_max_len_1024/tree/main, and place the downloaded model in the directory indicated there.

3.3.2 AnomalyGPT Delta weights

Create the three checkpoint directories under AnomalyGPT/code, then download the corresponding model weights from the official GitHub page and place them in those directories. Make sure each weight file goes into the directory it belongs to.

4. Running the Project

4.1 Test code

The demo with a web interface is in the code directory. Switch to code and run web_demo.py. You may need to install gradio first:

pip install gradio==3.50.0

Run the demo:

python web_demo.py

4.2 Testing

Open http://127.0.0.1:7860, upload an image, and interact with the model in either Chinese or English. Examples: an image with a defect, and an image without defects.