Guide to installing DeepFakes (face-swapping software for video), in English
Created: 25.01.2018 12:57
Tags: deepfakes, FakeApp, software, face swap, face replacement, neural network, installation, instructions, guide, face
Section: Computer - Software


https://www.reddit.com/r/deepfakes/comments/7ox5vn/fakeapp_a_desktop_tool_for_creating_deepfakes/

I've completed a desktop app with a GUI to create deepfakes. Here is what it looks like ( https://imgur.com/BDX9ANb ). For anyone unfamiliar with this subreddit, deepfakes are neural-network-generated faceswap videos created with a machine learning algorithm designed by /u/deepfakes ( https://www.reddit.com/u/deepfakes ). Check the sub wiki for more info. Here is an excellent example of a deepfake of Daisy Ridley ( https://thumbs.gfycat.com/EasySecondDouglasfirbarkbeetle-size_restricted.gif ) produced with this app in less than a day by /u/nuttynutter6969 ( https://www.reddit.com/u/nuttynutter6969 ). This app is intended to let users move through the full deepfake creation pipeline (creating training data, training a model, and creating fakes with that model) without needing to install Python and other dependencies or parse code. The download link is in the comments.

Instructions:

Download CUDA 8.0 and add its bin folder to the PATH environment variable ( https://imgur.com/a/itUH9 )
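
For example, a typical CUDA 8.0 installation on Windows puts its binaries in C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\bin (the installer's default location; adjust if you installed elsewhere); you can append it for the current console with: set PATH=%PATH%;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\bin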

Split some videos containing your two desired faces into two sets of a few hundred frames each with a tool like FFmpeg ( https://www.ffmpeg.org/ ). If you use FFmpeg, the command you want is: ffmpeg -i scene.mp4 -vf fps=[FPS OF VIDEO] "out%d.png". After splitting, run both directories of split frames through the "Align" tool to produce training data
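
For instance, for a 30 fps clip (30 is an assumed frame rate here; substitute your video's actual fps, which ffprobe can report): ffmpeg -i scene.mp4 -vf fps=30 "out%d.png"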

Switch to the "Train" tool and input the paths of the training data produced in the previous step (it should be in a folder called "aligned"), as well as the paths of the encoder and decoders included in the "models" folder that ships with this project

Train until the preview window shows results you are satisfied with

Split the video to be faked into frames and run the "Merge" tool on them to create faked frames, which can then be remerged into a deepfaked video
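
For the remerge, an FFmpeg command along these lines works (the frame rate and filenames are illustrative; match them to your own split): ffmpeg -framerate 30 -i "out%d.png" -c:v libx264 -pix_fmt yuv420p faked.mp4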

Copy and reuse the same encoders for faster results in future fakes

Requirements:

-CUDA 8.0 must be installed, and its bin folder must be included in the PATH environment variable.

-At least a few GB of free disk space so the app can create temp files

Notes:

-Run fakeapp.bat to launch the app

-RuntimeError: module compiled against API version 0xc but this version of numpy is 0xb is just a warning related to how the alignment libraries were installed; if no other errors occur, the app will run properly despite it

-It may take 30-45 seconds after pressing the Start button for the app to unpack and start the training/merging scripts the first time

-You can quit training at any time by focusing the training window and pressing "q"

-Paths to models/data must be absolute, not relative

If it doesn't work for you:

The console for the tool you are using (Align, Train, or Merge) will output a full error log if the tool fails. Here are some known errors with solutions:

General Issues

All directories used by the app should have names containing only English characters. Many users have had issues with directory names containing Cyrillic or Chinese characters; if yours do, switch to directories with English-only names before running the app.
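
As a quick sanity check (this helper is not part of FakeApp, and the path below is a hypothetical example), a few lines of Python can flag non-English characters in a path:

    # Flag characters outside plain ASCII in a directory path.
    path = r"C:\deepfakes\data_A\aligned"  # hypothetical path; use your own
    bad = sorted({ch for ch in path if ord(ch) > 127})
    print("Path looks safe" if not bad else "Problem characters: " + ", ".join(bad))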

If any tool's log contains AssertionError: dir.isDir(), the tool cannot find your directories. Make sure they are typed into the app correctly.

Align Issues

If the Align log contains KeyError: state_dict, your copy of a dynamic library called 2DFan-4.pth.tar is corrupted. This library is downloaded the first time the Align tool is run, and that download can fail to produce a working 2DFan file. Download a working version from the link in the comments (the link is not allowed in the post itself) and replace the corrupted version in C:\Users\[NAME]\AppData\Local\face_alignment\data.

If the Align log contains AssertionError: can't find input files, make sure the File Type parameter is set to the same image type as the images in your data folder (e.g. png, jpg)

If the Align log contains error while calling cudaMalloc() reason: out of memory, you are probably processing images that are too large. Make sure images are no larger than 1200x700; that resolution is plenty to produce a good model
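
If your frames are bigger than that, a downscale along these lines caps the width at 1200 while keeping the aspect ratio (filenames are illustrative): ffmpeg -i "out%d.png" -vf "scale='min(1200,iw)':-2" "small%d.png"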

Train Issues

If the Train log contains AssertionError in image_augmentation.py, you are training with images of the wrong size; make sure you are only training with the 256x256 images created by the Align tool
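
To verify this before training, a short Python check will list any stray images (assumes the Pillow library is installed; the folder path is a hypothetical example):

    # List any image in the aligned folder that is not 256x256.
    from pathlib import Path
    from PIL import Image

    aligned = Path(r"C:\deepfakes\data_A\aligned")  # hypothetical path
    for f in aligned.glob("*.png"):
        with Image.open(f) as im:
            if im.size != (256, 256):
                print(f, im.size)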

If the Train log contains Missing cudart_64.dll, you have the wrong version of CUDA installed (TensorFlow requires 8.0)

If the Train log contains MemoryError at train.py line 60/61, you are probably training with more images at once than your GPU can handle; reduce the number of images you are training with (500 is more than sufficient)

If the Train log contains OOM when allocating tensor with shape [W, X, Y, Z], the current model is too intensive for your GPU. Try lowering the batch size (to a power of 2 lower than 64) and see if that helps.

If the Train log mentions Theano, Keras is wrongly trying to use Theano as a backend; setting the KERAS_BACKEND environment variable to "tensorflow" should fix it.
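
On Windows this can be done with setx KERAS_BACKEND tensorflow (takes effect in newly opened consoles) or, for the current console only, set KERAS_BACKEND=tensorflow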

Merge Issues

If Merge outputs faces with visible boxes around them, make sure that Seamless is set to 'true' and that faces are not too close up in the images you are working with. The scripts this app wraps cannot always avoid creating a box on very large faces, but will almost always create a seamless merge with moderately sized faces.
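
For intuition, here is a minimal Python sketch of the difference between a hard paste and "seamless" (Poisson) blending, using OpenCV's seamlessClone. This illustrates the blending idea only; it is not FakeApp's actual merge code, and the images are synthetic stand-ins:

    import cv2
    import numpy as np

    dst = np.full((200, 200, 3), 180, dtype=np.uint8)           # stand-in video frame
    src = np.full((80, 80, 3), (90, 120, 200), dtype=np.uint8)  # stand-in face patch
    mask = np.full((80, 80), 255, dtype=np.uint8)               # blend the whole patch
    center = (100, 100)                                         # where the patch lands

    hard = dst.copy()
    hard[60:140, 60:140] = src  # naive paste: visible box around the face
    smooth = cv2.seamlessClone(src, dst, mask, center, cv2.NORMAL_CLONE)

    cv2.imwrite("hard_paste.png", hard)
    cv2.imwrite("seamless_paste.png", smooth)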

In the future I plan to make ease-of-use improvements to the app and look into replacing scripts with more efficient/streamlined/accurate versions as they come out.

Good_Man, 11 Jan 2018, 06:50:

Link to FakeApp ( https://drive.google.com/file/d/1_D6JIZsv4JdIqydhfpXCP63HzlvnqCt6/view )

Link to an uncorrupted 2DFan if you get the KeyError: state_dict error ( https://drive.google.com/file/d/19Xtwjohj8bO_Fvnlo81L7pSF5aN-zoX6/view )

Good_Man, 11 Jan 2018, 06:51:

Forgot to add the wiki ( https://www.reddit.com/r/deepfakes/wiki/index ) from the same author

FAQ

Can I see some examples of videos created by this algorithm?

For the best new examples, sort the sub by top/all-time. The most widely known example is this faked video of Gal Gadot (NSFW) ( https://www.reddit.com/r/CelebFakes/comments/7amgwl/gal_gadot_oc/ ), which was featured on a number of sites ( http://www.complex.com/life/2017/12/ai-assisted-fake-is-here ).

Where are the original scripts that produced these videos?

The source code can be found here ( https://github.com/deepfakes/faceswap ). The entire project with pre-trained models can be found here ( https://anonfile.com/p7w3m0d5be/face-swap.zip ).

Is there a program/website that I can use to produce these videos?

FakeApp ( https://www.reddit.com/r/deepfakes/comments/7oc018/simple_desktop_app_with_gui/ ) is a community-developed desktop app to run the deepfakes algorithm without installing Python, TensorFlow, etc. Currently it supports only training/conversion of faces, and cannot create training data.

How can I install and run the original scripts?

Follow the instructions here ( https://www.reddit.com/r/deepfakes/comments/7nq173/v2_tutorial_intelbased_python_easy_to_follow/?st=jc4lh5lx&sh=acde4329 ).

What tools do I need to produce these videos?

At a minimum, your computer should have a good GPU. Failing this, you can rent cloud GPUs through services like Google Cloud Platform ( https://cloud.google.com/gpu/ ).

How long does this whole process take?

Times vary by hardware quality, but generally speaking the pipeline is as follows (a conceptual sketch of the underlying model appears after the list):

Extraction: Producing uniform training data of a model's face (5-20 min)

Training: Running a neural network to learn to emulate this face (8-12 hours)

Conversion: Using the neural network to project the target face onto the original face in a video frame-by-frame (5-20 min)
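
For the technically curious, below is a minimal conceptual sketch of the shared-encoder/two-decoder autoencoder idea behind the training and conversion stages. It is a toy illustration under assumed shapes (64x64 inputs, random stand-in data, Keras with the TensorFlow backend), not the actual FakeApp/faceswap model:

    import numpy as np
    from tensorflow.keras import Input, Model, layers

    def build_encoder():
        # A single encoder shared by both faces learns pose and lighting.
        inp = Input(shape=(64, 64, 3))
        x = layers.Conv2D(128, 5, strides=2, padding="same", activation="relu")(inp)
        x = layers.Conv2D(256, 5, strides=2, padding="same", activation="relu")(x)
        x = layers.Flatten()(x)
        return Model(inp, layers.Dense(512, activation="relu")(x), name="encoder")

    def build_decoder(name):
        # Each face gets its own decoder, which learns that face's identity.
        inp = Input(shape=(512,))
        x = layers.Dense(16 * 16 * 256, activation="relu")(inp)
        x = layers.Reshape((16, 16, 256))(x)
        x = layers.Conv2DTranspose(128, 5, strides=2, padding="same", activation="relu")(x)
        out = layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="sigmoid")(x)
        return Model(inp, out, name=name)

    encoder = build_encoder()
    decoder_a = build_decoder("decoder_A")
    decoder_b = build_decoder("decoder_B")

    img = Input(shape=(64, 64, 3))
    auto_a = Model(img, decoder_a(encoder(img)))  # trained on face A only
    auto_b = Model(img, decoder_b(encoder(img)))  # trained on face B only
    auto_a.compile(optimizer="adam", loss="mae")
    auto_b.compile(optimizer="adam", loss="mae")

    faces_a = np.random.rand(8, 64, 64, 3).astype("float32")  # stand-in data
    faces_b = np.random.rand(8, 64, 64, 3).astype("float32")
    auto_a.fit(faces_a, faces_a, epochs=1, verbose=0)
    auto_b.fit(faces_b, faces_b, epochs=1, verbose=0)

    # Conversion: encode face A, decode with B's decoder -> A's pose, B's face.
    swapped = decoder_b.predict(encoder.predict(faces_a))

Reusing the same trained encoder across projects (as the instructions suggest) works because the shared encoder captures face-independent structure, so only the decoders need substantial retraining.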

Pages

-In progress

Warp On Other Domain Landmarks ( https://www.reddit.com/r/deepfakes/wiki/WarpOnOtherDomainLandmarks )

Pose Similarity Estimation ( https://www.reddit.com/r/deepfakes/wiki/PoseSimilarityEstimation )

Alternate Loss Functions ( https://www.reddit.com/r/deepfakes/wiki/AlternateLossFunctions )

-Placeholders

Original DeepFakes Implementation ( https://www.reddit.com/r/deepfakes/wiki/OriginalDeepFakesImplementation )

Image Selection Region Expansion ( https://www.reddit.com/r/deepfakes/wiki/ImageSelectionRegionExpansion )

