GIF: Generative Interpretable Faces

This is the official implementation of the paper GIF: Generative Interpretable Faces. GIF is a photorealistic generative face model with explicit 3D geometric (i.e. FLAME parameters) and photometric control.

  • Keywords: generative interpretable faces, conditional generative models, 3D conditioning of GANs, explicit 3D control of photorealistic faces.

Important links

  • Project page: https://gif.is.tue.mpg.de/
  • Paper (PDF): https://arxiv.org/abs/2009.00149
  • Video demo: https://www.youtube.com/watch?v=-ezPAHyNH9s

Watch a brief presentation

GIF: Generative Interpretable Faces

Citation

If you find our work useful in your project, please cite us as:

@inproceedings{GIF2020,
  title = {{GIF}: Generative Interpretable Faces},
  author = {Ghosh, Partha and Gupta, Pravir Singh and Uziel, Roy and Ranjan, Anurag and Black, Michael J. and Bolkart, Timo},
  booktitle = {International Conference on 3D Vision (3DV)},
  year = {2020},
  url = {http://gif.is.tue.mpg.de/}
}

Installation

  • python3 -m venv ~/.venv/gif
  • source ~/.venv/gif/bin/activate
  • pip install -r requirements.txt

First things first

Before running any program, you will need to download a few resource files and create a suitable location for the training artifacts to be stored.

  1. Use this link to download the input files necessary to train GIF from scratch – http://files.is.tuebingen.mpg.de/gif/input_files.zip
  2. Use this link to download checkpoints and samples generated from pre-trained GIF models and their ablated versions – http://files.is.tuebingen.mpg.de/gif/output_files.zip
  3. Now create a directory called GIF_resources and unzip the input zip, the checkpoint zip, or both into this directory
  4. When you train or fine-tune a model, the output checkpoint and sample directories will be populated. Remember that the model artifacts can easily grow to a few tens of terabytes
  5. The main resource directory should be named GIF_resources and it should have input_files and output_files as sub-directories
  6. Now you need to provide the path to this directory in the constants.py script, and make changes there if you wish to rename the sub-directories
  7. Edit the line resources_root = '/path/to/the/unzipped/location/of/GIF_resources' (see the layout sketch after this list)
  8. Modify any other paths as you need
  9. Download the FLAME 2020 model and the FLAME texture space from https://flame.is.tue.mpg.de/ (you need to sign up and agree to the license for access)
  10. Please make sure to download the 2020 version; after signing in you should be able to download FLAME 2020
  11. Download the FLAME_texture_data, unzip it, and place the texture_data_256.npy file in the flame_resource directory
  12. Place the generic_model.pkl file in GIF_resources/input_files/flame_resource
  13. In this directory you will need to place generic_model.pkl, head_template_mesh.obj, and FLAME_texture.npz in addition to the files already provided in the zip you downloaded above. You can find these files on the official FLAME website linked in step 9
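
For orientation, the unzipped resource tree should look roughly as follows (the input_files contents are taken from the steps above; the exact output_files layout depends on your training runs):

  GIF_resources/
  ├── input_files/
  │   ├── FFHQ/              (FFHQ lmdb and ffhq_fid_stats, see "Preparing training data")
  │   ├── DECA_inferred/     (deca_rendered_with_public_texture.lmdb)
  │   └── flame_resource/    (generic_model.pkl, head_template_mesh.obj,
  │                           FLAME_texture.npz, texture_data_256.npy, ...)
  └── output_files/          (checkpoints and samples written during training)

In constants.py, point resources_root at this directory:

  resources_root = '/path/to/the/unzipped/location/of/GIF_resources'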

Preparing training data

To train GIF you will need to prepare two LMDB datasets:

  1. An LMDB dataset containing FFHQ images at different scales
    1. To prepare this, cd prepare_lmdb
    2. Run python prepare_ffhq_multiscale_dataset.py --n_worker N_WORKER DATASET_PATH
    3. Here DATASET_PATH is the path to the directory that contains the FFHQ images
    4. Place the created lmdb file in the GIF_resources/input_files/FFHQ directory, alongside ffhq_fid_stats
  2. An LMDB dataset containing renderings of the FLAME model
    1. To run GIF you will need the rendered texture and normal images of the FLAME mesh for the FFHQ images. These are already provided as deca_rendered_with_public_texture.lmdb with the input_files zip. It is located in GIF_resources_to_upload/input_files/DECA_inferred
    2. To create this on your own, simply run python create_deca_rendered_lmdb.py (see the combined command sketch after this list)
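
Putting the two steps together, a minimal command sketch (assuming DATASET_PATH points at the raw FFHQ images; running create_deca_rendered_lmdb.py from the same directory is an assumption):

  cd prepare_lmdb
  python prepare_ffhq_multiscale_dataset.py --n_worker N_WORKER DATASET_PATH
  python create_deca_rendered_lmdb.py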

Training

To resume training from a checkpoint, run python train.py --run_id <runid> --ckpt /path/to/saved.mdl/file/<runid>/model_checkpoint_name.model

Note that you point to the .model file, not the .npz one.

To start training from scratch, run python train.py --run_id <runid>

Note that the training code will take all available GPUs in the system and perform data parallelization. You can set the visible GPUs by setting the CUDA_VISIBLE_DEVICES environment variable, e.g. run CUDA_VISIBLE_DEVICES=0,1 python train.py --run_id <runid> to run on GPUs 0 and 1. These invocations are summarized below.
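
For quick reference, the three invocations described above:

  # start training from scratch
  python train.py --run_id <runid>

  # resume from a checkpoint (point to the .model file, not the .npz)
  python train.py --run_id <runid> --ckpt /path/to/saved.mdl/file/<runid>/model_checkpoint_name.model

  # restrict training to GPUs 0 and 1
  CUDA_VISIBLE_DEVICES=0,1 python train.py --run_id <runid>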

To run random face generation, follow these steps

  1. Clone this repo
  2. Download the pretrained model. Please note that you have to download the model with the correct run_id
  3. Activate your virtual environment
  4. cd plots
  5. python generate_random_samples.py
  6. Remember to uncomment the appropriate run_id (the full sequence is sketched after this list)
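
The full sequence, assuming the virtual environment from the Installation section and that you have already edited generate_random_samples.py to select your run_id:

  source ~/.venv/gif/bin/activate
  cd plots
  python generate_random_samples.py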

To generate Figure 3

  1. cd plots
  2. Run python role_of_different_parameters.py. It will generate batch_size directories in f'{cnst.output_root}sample/', named gen_iamges<batch_idx>. Each of these directories contains a column of images as shown in Figure 3 of the paper. (A small inspection sketch follows below.)
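
As a quick sanity check, a hypothetical Python snippet that counts the images written per batch directory. It assumes it is run from the repository root (so that constants.py is importable) and that output_root is set as described in "First things first":

  import os
  import constants as cnst  # the repo's constants.py, with output_root configured

  # role_of_different_parameters.py writes gen_iamges<batch_idx> directories here
  sample_root = os.path.join(cnst.output_root, 'sample')
  for name in sorted(os.listdir(sample_root)):
      path = os.path.join(sample_root, name)
      if name.startswith('gen_iamges') and os.path.isdir(path):
          print(f'{name}: {len(os.listdir(path))} images')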

Amazon Mechanical Turk (AMT) evaluations

Disclaimer: This section may be outdated and/or may have changed since the time of writing. It is neither intended to advertise nor to recommend any particular third-party product. This guide is included solely for quick reference and is provided without any liability.

  • You will need 3 accounts
  1. MTurk – https://requester.mturk.com/
  2. MTurk sandbox – just for experiments (no real money involved) https://requestersandbox.mturk.com/
  3. AWS – for uploading the images https://aws.amazon.com/
  • Once that is done you may follow these steps
  1. Upload the images to S3 via the AWS website (into 2 different folders, e.g. model1, model2)
  2. Make the files public. (You can verify this by clicking one file and trying to view the image using the link)
  3. Create one project. (We are not sure what the optimal values are there; people at MPI have experience with that)
  4. In “Design Layout”, insert the HTML code from mturk/mturk_layout.html or write your own layout
  5. Finally, you will have to upload a CSV file which will contain S3 (or any other public) links for the images that will be shown to the participants
  6. You can generate such links using the generate_csv.py or the create_csv.py scripts (a minimal sketch follows this list)
  7. Finally, follow an AMT tutorial to deploy the task and obtain the results
  8. You may use the plot_results.py or plot_histogram_results.py scripts to visualize the AMT results
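
The CSV itself is just rows of public image URLs, one column per model being compared. A hypothetical sketch (the bucket URL, file names, and column headers are assumptions; the repo's generate_csv.py / create_csv.py scripts are the reference):

  import csv

  bucket = 'https://my-bucket.s3.amazonaws.com'  # assumed public S3 bucket URL
  n_pairs = 100                                  # assumed number of image pairs

  with open('amt_batch.csv', 'w', newline='') as f:
      writer = csv.writer(f)
      writer.writerow(['image_url_model1', 'image_url_model2'])  # assumed headers
      for i in range(n_pairs):
          # one row per HIT: the two images shown side by side to a participant
          writer.writerow([f'{bucket}/model1/{i}.png', f'{bucket}/model2/{i}.png'])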

Running the naive vector conditioning model

  • Code to run vector conditioning will arrive soon on a different branch 🙂

Acknowledgements

GIF uses DECA to get FLAME geometry, appearance, and lighting parameters for the FFHQ training data. We thank H. Feng for preparing the training data, Y. Feng and S. Sanyal for support with the rendering and projection pipeline, and C. Köhler, A. Chandrasekaran, M. Keller, M. Landry, C. Huang, A. Osman and D. Tzionas for fruitful discussions, advice and proofreading. We especially thank Taylor McConnell for voicing over our video. The work was partially supported by the International Max Planck Research School for Intelligent Systems (IMPRS-IS) and by Amazon Web Services.

