diff --git a/advanced/introduction/index.html b/advanced/introduction/index.html
index 013a0a6..405389e 100644
--- a/advanced/introduction/index.html
+++ b/advanced/introduction/index.html
@@ -2564,6 +2564,10 @@

Introduction

+
+

Warning

+

The material in the Advanced module is being updated for Blender 3.6

+

The Advanced part of the course consists of a number of separate topics, each with a number of assignments:

-
+

Orbit around selection

Another option which you might consider enabling is Orbit Around Selection. By default this is turned off and in that mode any rotation of the 3D viewport will be around the center of the view, which might cause selected objects to go out of view. When the option is turned on viewport rotation will be around the selected object(s), always keeping them in view. You can find this option on the Navigation tab under Orbit & Pan.

@@ -2788,7 +2788,7 @@

Changes to default preference settings

Last update:
- September 26, 2023 13:42:33
+ November 27, 2023 09:44:40

diff --git a/overview/support/index.html b/overview/support/index.html
index de71b89..df56de6 100644
--- a/overview/support/index.html
+++ b/overview/support/index.html
@@ -2574,9 +2574,9 @@

Support

Depending on the course you're following (basics or advanced) you need to use the category called
-BASICS BLENDER COURSE or ADVANCED BLENDER COURSE. Within these categories you will find:

+BASICS BLENDER COURSE or ADVANCED BLENDER COURSE. Within these categories you will find two support channels:

-  • A shared text chat channel (e.g. 2022-04-blender-basics-chat) for interacting with the course teachers
+  • A shared text chat channel (e.g. 2023-12-blender-basics) for interacting with the course teachers
   and other course participants. Here you can ask questions, show your work, or anything else you feel like sharing.
  • A video channel (video channel), in case we want to share something through Discord
@@ -2588,7 +2588,7 @@

Support

Last update:
- November 20, 2023 13:38:17
+ November 27, 2023 09:44:40

diff --git a/search/search_index.json b/search/search_index.json
index 352f854..6d2bcd3 100644
--- a/search/search_index.json
+++ b/search/search_index.json
@@ -1 +1 @@
-{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Welcome!","text":"

Info

The material on these pages is being updated in preparation for the upcoming December 2023 course

These pages form the main online content for the two modules we provide for the course Introduction to Scientific Visualization with Blender. In this course you will learn how to use the 3D rendering and animation package Blender for creating images and animations from (scientific) data.

This course consists of two parts: Basics and Advanced. The Basics part assumes no knowledge of Blender, while the Advanced part builds upon the skills and knowledge of the Basics part.

In specific periods during the year we provide support for this course, which is otherwise self-paced. Please check the Schedule and News pages for upcoming dates, or search the EuroCC course agenda for the course modules:

  • Introduction Scientific Visualisation with Blender: Data, Lights, Camera, Action!
  • Advanced topics in scientific visualization with Blender: geometry, scripts, animation, action!

This course is created and maintained by the visualization team of the SURF High-Performance Computing and Visualization group. This course is provided by SURF within the context of the EuroCC Netherlands NCC. We have been providing this course since 2018, usually twice a year, and initially in-person. Due to the restrictions during the COVID-19 lock-down period we decided to turn this course into a fully online version, based on positive experiences with the first advanced Blender course we provided online in 2020.

"},{"location":"privacy/","title":"Privacy and cookie statement","text":""},{"location":"privacy/#privacy","title":"Privacy","text":"

No personal information about visitors to this course website is gathered by SURF.

"},{"location":"privacy/#cookies","title":"Cookies","text":"

No cookies are used for the content published by SURF on this website, nor is any personal information about visits tracked by SURF.

The underlying MkDocs content generation system uses the browser's session storage for storing general site-map data (called /blender-course/.__sitemap), which is sometimes reported as a cookie.

"},{"location":"privacy/#third-party-cookies","title":"Third-party cookies","text":"

The embedded videos are hosted on YouTube, but using its privacy-enhanced mode and the \"www.youtube-nocookie.com\" domain. YouTube might ask for placement of third-party cookies, in which case explicit permission needs to be granted by the user. For more information, see the privacy controls of YouTube and the information linked from that page.

This website is hosted through GitHub Pages, which might set third-party cookies in which case explicit permission needs to be granted by the user. See here for the GitHub privacy policy.

"},{"location":"advanced/introduction/","title":"Introduction","text":"

The Advanced part of the course consists of a number of separate topics, each with a number of assignments:

  • Python scripting for performing all kinds of tasks using code
  • Advanced materials using the node-based shaders
  • Using more complex Animation techniques
  • Mesh edit mode for cleaning up and/or improving your (imported) meshes

The final assignment is a personal project of your own choosing. If you want, you can also work with a dataset we provide.

"},{"location":"advanced/advanced_materials/advanced_materials_assignment/","title":"\ud83d\udcbb The Shader Editor and advanced materials","text":"

In the two exercises in this chapter you will use the Blender Shader Editor on the familiar iso-surface of a CT scan of a fish from the Basics course, and try to make a visualization using an advanced node setup. After that you will make a render of the moon with high-resolution NASA textures and adaptive subdivision.

"},{"location":"advanced/advanced_materials/advanced_materials_assignment/#the-fish","title":"\ud83d\udcbb The fish","text":"

When you open the exercise blend file advanced_materials_assignment.blend you'll see the white fish iso-surface above a plain white plane. We are going to pimp this scene with advanced materials.

"},{"location":"advanced/advanced_materials/advanced_materials_assignment/#shader-editor-materials-coloring-the-scene","title":"Shader editor materials - Coloring the scene","text":"

First we will add materials and give each object a different color.

  1. First activate the Rendered shading to see what kind of materials we are actually applying by pressing Z in the 3D Viewport panel and selecting Rendered from the radial pie-menu.
  2. Select the fishskin object and add a new material by clicking the New button in the middle of the top bar of the Shader Editor panel.
  3. Now we see a graph appear with two nodes: a Principled BSDF-node and a Material Output-node. In the side panel you will also see the familiar material settings. Change the Base Color to a color appropriate for a fish.
  4. Repeat step 2 and 3 for each 3D object in the scene (see Outliner) and give them a color of your choice.
"},{"location":"advanced/advanced_materials/advanced_materials_assignment/#texture-mapping-placing-the-fish-on-a-picknick-table","title":"Texture mapping - Placing the fish on a picknick table","text":"

Now that the scene has some color we can start applying realistic colors and texture to the ground plane, or should we say table? We will do that by adding wood textures to the ground plane and connecting those textures to the appropriate parameters of the Principled BSDF.

  1. Select the groundplane 3D object.
  2. Add an Image Texture-node to the Shader Editor graph of the groundplane with Shift-A > Texture > Image Texture.
  3. Connect the Color output of this node to the Base color input of the Principled BSDF-node.
  4. Now the groundplane doesn't look anything like a picnic table; it's pink. This pink color indicates that an image is missing from the Image Texture-node. Open an image by pressing the Open-button on the Image Texture-node, which will open a file browser window. Select the blue_painted_planks_diff_4k.png image from the data/wood_textures/ directory and press Open Image.

Now we have our first image mapped on an object! You might have noticed, though, that the fish is really small, or rather that the planks are very big. We are going to solve that by scaling the texture coordinates.

  1. Before we can do that we first need to add the texture coordinates to the graph with Shift-A > Input > Texture Coordinates and connect the UV output to the Vector input of the Image Texture-node.
  2. Nothing changed because we didn't apply the scaling yet. Now add a Mapping node with Shift-A > Vector > Mapping and drag it on top of the edge between the Texture Coordinate-node and the Image Texture-node and let it go. As you can see it is automatically connected in between the nodes.
  3. Now, on the Mapping-node, change the Scale parameter's X, Y and Z to 2. As you can see, that reduces the planks to a smaller, more fitting size.

Tip!: With the Node Wrangler Blender add-on you can just select a texture node and press CTRL+T to automatically add the Texture Coordinate and Mapping nodes. Node Wrangler can be activated with: Menu-bar Edit > Preferences > Add-ons tab > Type 'Node Wrangler' in search > check the Node Wrangler add-on.
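If you ever need to automate this kind of material setup, the same node graph can also be built from Python. Below is a minimal sketch, assuming the texture path from this exercise and a hypothetical material name:

import bpy

# Build the wood texture setup from a script (sketch; material name and path are assumptions)
mat = bpy.data.materials.new(name="groundplanemat")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links

bsdf = nodes["Principled BSDF"]
coords = nodes.new("ShaderNodeTexCoord")
mapping = nodes.new("ShaderNodeMapping")
mapping.inputs["Scale"].default_value = (2, 2, 2)

diffuse = nodes.new("ShaderNodeTexImage")
diffuse.image = bpy.data.images.load("//data/wood_textures/blue_painted_planks_diff_4k.png")

links.new(coords.outputs["UV"], mapping.inputs["Vector"])
links.new(mapping.outputs["Vector"], diffuse.inputs["Vector"])
links.new(diffuse.outputs["Color"], bsdf.inputs["Base Color"])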

Now we'll roughen the planks a bit with a Roughness map, a texture that will be used to change the Roughness parameter of the Principled BSDF.

  1. Select the previously added Image Texture-node and press SHIFT-D and place the new duplicated node underneath the other Image Texture-node.
  2. Connect its Vector input to the Vector output of the Mapping-node just like the other Image Texture-node and connect the Color output to the Roughness input of the Principled BSDF-node.
  3. As you can see, the plane became shiny, which wood is not (rotate the view around the object in the 3D Viewport to see the plane from different angles). This is because we haven't changed the texture yet. In this new Image Texture-node, open the blue_painted_planks_rough_4k.png from data/wood_textures.
  4. Now it is still a bit too shiny for wood. This is because the output is interpreted as an sRGB value. We need to change the Color Space parameter of this Image Texture-node to Non-Color. Now the ground plane has the right rough look of wood.

The look of the wood is still very "flat" (the light still bounces off it at a straight angle), because we didn't add a normal map to the material yet. This normal map will accentuate all the nooks and crannies naturally present in wood, which catch light too.

  1. As with the previous Image Texture-node, we again need to make a new one by duplicating it (see step 8).
  2. Again the Mapping-node Vector output needs to be connected to the new Image Texture-node Vector input. The Color output however needs to go to a Normal Map-node.
  3. Add a Normal Map-node with Shift-A > Vector > Normal Map and connect the Image Texture-node Color output to the Normal Map-node Color input and connect the Normal Map-node Normal output to the Principled BSDF-node Normal input.
  4. Again, this is not a color, so the Color Space needs to be set to Non-Color.

Now you have a fully textured wooden ground plane! To see the full effect, rotate the view around it and see the light bounce off the surface based on the different texture types you just applied.

"},{"location":"advanced/advanced_materials/advanced_materials_assignment/#multiple-materials-one-object-window-to-the-inside-of-the-fish","title":"Multiple materials one object - Window to the inside of the fish","text":"

We only see the fish, not the fish bones. In the Blender Basics course we learned how to reveal the bones on the inside by using a Boolean modifier, but we can achieve the same with just materials!

  1. Select the fishskin 3D object.
  2. If everything went well in the first couple of assignments, the fish should already have one material called Material. For administrative reasons, let's rename the material by clicking its name Material in the middle of the top bar of the Shader Editor panel and typing the new name, fishskinmat.
  3. To the left of the rename box there is a drop-down menu called Slot 1; when you click this you will see the material slots menu. In our case it contains only one material, called fishskinmat.
  4. Now add a new Material slot by clicking the plus icon in this menu. The added material slot is still empty and needs a second material.
  5. Add a new material by clicking the New button in the middle of the top bar of the Shader Editor panel.
  6. Rename this material to fishskintransparentmat.

Now as you can see adjusting any value on the Principled BSDF-node doesn't seem to do anything. This is because there aren't any vertices assigned to this material slot yet (by default all vertices are assigned to the first material slot).

  1. To assign vertices we need to be able to select them and this can be done in the Edit Mode of the 3D Viewport-panel. With the fishskin 3D object selected and the focus on the 3D Viewport-panel (hovering over the 3D Viewport panel with your mouse) press TAB.
  2. First press 1 to see the vertices and then select a window of vertices on the side of the fish with the Border select tool by pressing B in the 3D Viewport-panel and dragging over the area you want to select.
  3. With these vertices selected press the Material slots button, select the fishskintransparentmat-material and press the Assign-button.

Now you can see the faces in that selection look different! This is because they are assigned to the second material. Now we'll make the fishskintransparentmat actually transparent with a combination of the Transparent BSDF and Principled BSDF through a Mix Shader. That way we can control the amount of transparency!

  1. In the Shader editor add a Mix Shader-node with Shift-A > Shader > Mix Shader.
  2. Drag this Mix Shader-node over the edge connecting the Principled BSDF-node and the Material Output-node to place it connected in between.
  3. Now add a Transparent BSDF with Shift-A > Shader > Transparent BSDF.
  4. Connect the BSDF output to the Mix Shader-node Shader input.
  5. Now the material is half shaded by the Transparent BSDF-node and half by the Principled BSDF-node. Experiment with the Mix shader-node's fac parameter to see how it changes the transparency of the fishskintransparentmat.

Now you have a window looking inside the fish! Now it's time to give the fish some actually fishy colors with the Project from view UV-mapping!

Bonus (Only when you have time left): As you can see the bones also contain the swim bladder, which looks the same as the bones because the same material is assigned to it. Try to select the swim bladder's vertices and assign a different, more fitting material to the swim bladder.

"},{"location":"advanced/advanced_materials/advanced_materials_assignment/#project-from-view-uv-mapping-add-actual-skin-to-the-fish","title":"Project from view UV-mapping - Add actual skin to the fish.","text":"

To add a real fish texture, or actually a photo of a carp, to the fishskin 3D object you can use the technique called Project from view UV-mapping. For this we introduce a new panel called the UV Editor. Before we go to the UV Editor we need to add an Image Texture-node to the fishskinmat.

  1. In the Shader Editor select the fishskinmat (slot 1) from the Material slot menu in the middle left of the top bar of the Shader Editor.
  2. Add an Image Texture-node to the material with Shift-A > Texture > Image Texture, connect the Color output to the Principled BSDF-node Base Color input, and open the carp.jpg texture from the data/ directory.
  3. Next add a Texture Coordinate node with Shift-A > Input > Texture Coordinates and connect the UV output to the Image texture-node Vector input.

This fish is now black because the UV coordinates are not defined yet. That is what we will do in the UV Editor.

  1. Now that we do not need the Shader editor anymore we can replace it with the UV Editor. In the corner of the panel click the Editor Type-button and select the UV Editor from the list.
  2. Before we can start UV-mapping we need to be in Edit mode in the 3D viewport. In the 3D viewport panel press TAB to enter edit mode.
  3. Now select all geometry by pressing A.

To properly project from view you have to choose the right view to project from. We are going to map a photo of a carp which has been taken from the side. In order to properly map the photo on the 3D object we also need to look at it from the side.

  1. Press BACK-TICK to open the view radial pie-menu and select Right, or use the 3D Viewport menu in the header (View > Viewpoint > Right).
  2. Now press U to open the UV-mapping-menu and select Project from view.

Now you can see the UV coordinates are mapped in the UV Editor but they are not properly scaled to fit the photo of the carp.

  1. Make sure that everything is still selected and then, within the UV Editor, press S and scale the UV-coordinates until they align with the photo of the carp.
  2. Scaling it alone is not enough. The UV-coordinates need to be moved a bit, use G to grab the UV-coordinates and translate them to better match the photo.

As you might have noticed it is not possible to completely match the photo without deforming the UV-coordinates.

  1. Before we start deforming parts of the UV-coordinates you need to activate Proportional editing by pressing the Proportional editing button in the top bar of the UV Editor. Proportional editing moves all UV-coordinates within the defined radius along with the currently selected UV-coordinates.
  2. Now select a UV-coordinate in the UV Editor that needs to be moved and press G.
  3. While grabbing, scroll with your mouse wheel to decrease or increase the Proportional editing radius and move your mouse to see the effect.
  4. Now, with this Proportional editing, try to match the UV-coordinates to the photo of the carp as well as possible.

Tip!: Whenever you are editing the UV-map in the UV editor it can be difficult to see how the texture is mapped on the 3D-object, because all vertices, edges and faces are visible in Edit mode. You can toggle between Edit mode and Object mode in the 3D Viewport panel to have a better look at the mapped texture.

"},{"location":"advanced/advanced_materials/advanced_materials_assignment/#the-moon","title":"\ud83d\udcbb The moon","text":"

This moon exercise doesn't have a prepared blend file because you are going to make it all by yourself! So open a new blend file and start making the moon.

"},{"location":"advanced/advanced_materials/advanced_materials_assignment/#the-basic-scene-sphere-sun-and-the-darkness-of-space","title":"The basic scene - Sphere, sun and the darkness of space","text":"

To create the moon we first need to prepare a very simple scene.

  1. First off we need to remove the Default cube (the cube that comes with a new blend file, whose only function is to be removed :'( ).
  2. Add a UV Sphere instead with Shift-A > Mesh > UV sphere.
  3. Set the UV Sphere's shading to smooth through the 3D Viewport menu at the top of the 3D Viewport (Object > Shade Smooth).
  4. Select the default Light object in the Outliner and change it to a Sun light in the Light-tab in the Properties-panel on the right.
  5. Now change the shading in the 3D viewport to Rendered by pressing Z and then selecting Rendered. This rendered view uses Eevee by default; to change that to Cycles for more realistic lighting, go to the Render Properties-tab in the Properties-panel and change the Render Engine to Cycles.
  6. As you can see the sun is now way too bright. Lower the Strength of the sun from 1000 to 10 in the Light-tab in the Properties-panel. No need to have the power of a thousand suns.
  7. Now that we have the sun we need to disable the World-lighting (the grey ambient light) since we only need the sun as a direct light source like it is in space. Go to the World properties-tab in the Properties-panel and set the Color in the Surface-section all the way to black.

Now we have the basic scene of a sphere in space; next we are going to make it look like the moon by adding textures.

"},{"location":"advanced/advanced_materials/advanced_materials_assignment/#applying-a-material-and-texturing-the-moon-thats-one-small-step","title":"Applying a material and texturing the moon - That's one small step...","text":"

Before we can edit the material we need to open the Shader Editor. For this we need to slightly modify the interface.

  1. Grab the edge between the 3D viewport-panel and the Timeline-panel by hovering over the edge until you see the resize cursor, then click and drag the edge up to about half of the Blender window.
  2. Now click the upper left Editor type dropdown menu (now the Timeline-icon ) and select the Shader Editor.
  3. In the Shader Editor add a new material.
  4. In this material add 2 Image Texture-nodes, 1 Texture Coordinate-node and 1 Displacement-node (Shift-A > Vector > Displacement).
  5. Connect the Texture Coordinate-node UV output to both Image Texture-nodes Vector inputs.
  6. Connect one of the Image Texture-nodes' Color output to the Principled BSDF-node Base Color input and the other's Color output to the Displacement-node Height input.
  7. Finally connect the Displacement-node Displacement output to the Material output-node Displacement input.
  8. Open the data/moon_textures/lroc_color_poles_8k.tif in the Image Texture-node that is connected to the Principled BSDF-node Base Color.
  9. Open the data/moon_textures/ldem_16.tif in the Image Texture-node that is connected to the Displacement-node Height input.
  10. Then finally set the Color Space-parameter of the Image Texture-node with the displacement texture to Non-Color.
  11. Initially the Displacement-node Scale parameter is set way too high making the moon look horrible. Set this parameter to 0.001.

As you can see it already looks quite like the moon but with some final tweaking you will get even more realism.

"},{"location":"advanced/advanced_materials/advanced_materials_assignment/#adaptive-displacement-revealing-the-craters-mooore-details","title":"Adaptive displacement - Revealing the craters! Mooore details!","text":"

Everything we have seen until now has been rendered with the default EEVEE rendering engine, which is very powerful for visualization purposes, but if you want to add that extra bit of realism with adaptive displacement you have to use the Cycles rendering engine.

  1. Activate the Cycles rendering engine with the Render Engine setting in the Rendering properties-tab of the Properties-panel.

While we are there, to be able to use adaptive displacement, we need to activate the Cycles experimental feature set.

  1. Set the Feature Set to Experimental.
  2. This Experimental feature set adds an extra section, called Subdivision, to the current properties panel tab. In this section set Viewport to 2.

Now we need to add a Subdivision modifier, which, with the Experimental feature set, gains a new setting that enables adaptive displacement.

  1. Add a Subdivision modifier in the Modifier properties-tab of the Properties-panel.
  2. Enable the Adaptive Subdivision setting in this modifier.

Until now you have only seen slight differences, because there is one last setting that has to be changed to make all of this worthwhile.

  1. Change the Displacement setting to Displacement Only in the Properties-panel > Material properties-tab > Settings-section > Surface-subsection.
  2. Now zoom in and toggle to the Edit mode and back, which re-triggers the adaptive subdivision computations, and see the craters in their full glory.
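For reference, this whole Cycles and material configuration can also be done from Python. A minimal sketch, assuming the moon sphere is the active object:

import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.feature_set = 'EXPERIMENTAL'        # needed for adaptive subdivision

obj = bpy.context.object                         # assumption: the moon sphere is active
obj.modifiers.new("Subdivision", 'SUBSURF')
obj.cycles.use_adaptive_subdivision = True       # the per-object Adaptive Subdivision setting

mat = obj.active_material
mat.cycles.displacement_method = 'DISPLACEMENT'  # the "Displacement Only" setting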

Bonus: For an artist's rendition of the moon, change the Displacement-node Scale parameter to a higher value and see how the craters get more noticeable (although less realistic).

"},{"location":"advanced/advanced_materials/introduction/","title":"Introduction","text":"

This chapter will introduce the Shader Editor and UV Editor of Blender, which let you create advanced materials to improve the look of your visualizations. The Shader Editor and UV Editor go hand in hand: with the UV Editor (and 3D viewport) you'll learn how to UV-unwrap your meshes and manipulate the UV-coordinates, and with the Shader Editor you'll project procedural or image textures based on the created UV-coordinates.

You'll learn how to apply PBR (physically based rendering) style textures, and where to find them, to make your objects look photo-real.

And lastly a commonly used experimental feature called Adaptive Subdivision will be combined with vertex displacement to create some great looking micro-displacement details on the surfaces of your objects.

Before you start with the exercises, the following video will give you the theoretical and practical background you need for them. The video contains some Blender walk-throughs; if you want to follow along you can use the walk-through files in the walkthroughs/advanced/advanced_materials directory.

After you have watched the video about advanced materials you are ready for the exercises!

"},{"location":"advanced/advanced_materials/node-wrangler/","title":"Node-wrangler reference","text":"

The node-wrangler add-on brings a wide variety of new features and hot-keys that automate steps within the Shader Editor to make life easier. In the walk-through only two features were shown, the 'Shader viewer' (Ctrl+Shift+LMB) and 'Add Texture Setup' (Ctrl+T). These are two very useful hot-keys, but they are only the tip of the iceberg.

To see the full set of features/hotkeys that node-wrangler provides you need to go to Menu bar 'Edit' > Preferences... > Tab 'Add-ons' > Search for 'Node wrangler' > Show Hotkey List (see image below). For additional information on what each individual feature does please refer to the official documentation.

Warning

The hotkeys in the official documentation have not yet been updated for Blender 2.8+. Therefore, use the documentation only for the description of each feature, and use the "Show Hotkey List" for the current hotkeys.

"},{"location":"advanced/advanced_materials/vertex_colors/","title":"Visualizing vertex colors with the Attribute node","text":"

In the basics course we already introduced the use of vertex colors with the Material-tab in the Properties-panel. What happens under the hood is that you basically add an Attribute-node to the node-network and attach its Color-output to the Base Color-input of the Principled BSDF shader-node (see images below).

Shader Editor node-network

3D viewport result

The blend file for the image above, vertex-color.blend, can be found among the walk-through files in the walkthroughs/advanced/advanced_materials directory.
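For completeness, a minimal sketch of building this node setup from Python, assuming the default vertex color layer name Col:

import bpy

mat = bpy.data.materials.new(name="vertexcolormat")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links

attr = nodes.new("ShaderNodeAttribute")
attr.attribute_name = "Col"   # name of the vertex color layer (default name assumed)
links.new(attr.outputs["Color"], nodes["Principled BSDF"].inputs["Base Color"])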

"},{"location":"advanced/animation/2_assignment_cars/","title":"\ud83d\udcbb \"Cars\": the movie","text":"

In this exercise you can do some more complex keyframe animation by having multiple objects move to create a city full of driving cars. You will need basic keyframing skills and use of the Graph Editor.

  1. Load cars.blend

This scene has a very simple city with some buildings and some cars. An animation of 250 frames has been set up in the file, starting at frame 1 and ending at frame 250.

Tip

All the geometry of the buildings is in the so-called collection \"Collection 2\". You can hide all these objects by clicking the eye icon right of \"Collection 2\" in the outliner.

  1. Change to the first frame in the animation with Shift-Left. Note that you can see the current frame you're working in by the blue vertical line in the Timeline at the bottom. Also, in the 3D view there's a piece of text in the upper-left that reads (1) Scene Collection | Plane: the current frame is listed between the parentheses.
  2. In the scene there are two cars behind each other. Select the front car of the two.
  3. Enter a keyframe for the car's location and rotation: press I followed by picking LocRot
  4. Change to the last frame in the animation with Shift-Right
  5. Move the car to the end of the road it's on, along the Y axis
  6. Enter another LocRot keyframe with I
  7. Check the car movement by playing back the animation with Space, or by changing the time in the Timeline editor with Shift-RMB
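As an aside, these keyframes can also be inserted from Python, which becomes handy when many objects need to be animated. A minimal sketch, where the object name and travel distance are assumptions:

import bpy

car = bpy.data.objects["Car"]   # hypothetical object name
scene = bpy.context.scene

scene.frame_set(1)
car.keyframe_insert(data_path="location")
car.keyframe_insert(data_path="rotation_euler")   # location + rotation, like LocRot

scene.frame_set(250)
car.location.y += 20.0          # illustrative distance to the end of the road
car.keyframe_insert(data_path="location")
car.keyframe_insert(data_path="rotation_euler")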

The car's speed currently is not constant: it speeds up near the beginning of the animation and slows down starting somewhere halfway. We can edit the curve for the Y location channel in the Graph Editor to influence this behaviour.

  1. In the Graph Editor on the left of the screen show all the location and rotation values being animated for the selected car by using the little triangle left of the name Object Transforms. Below the Object Transforms you should now see the 6 channels for which you created keyframes in steps 4 and 7: X, Y and Z Location, and X, Y and Z Euler Rotation.
  2. Click the eye icon next to Object Transforms to hide all the channels. Then click the eye next to Y Location to only show the graph for the Y location. Note that you can use the Home key to zoom to the full extent of the graph.

You should now see a curved line in green with two orange filled circles at the times of the beginning and end of the animation, i.e. frames 1 and 250. Attached to these points are "handles" (the lines that end in open circles) that influence the shape of the curve.

  1. Select the open circular endpoints of the handles and move them around. See what this does for the shape of the curve and the subsequent behaviour of the car in the animation.

The two curve points are selectable with Shift-LMB, but also with, for example, border select (B key). This works just like you normally select objects. Deleting keyframes can then be done with X.

  1. Select both curve points with A, then press V to bring up the Keyframe Handle Type menu. This menu allows you to change how the curve is shaped based on the position of the handles.
  2. Select Vector. Notice how the curve's shape changes. See what happens when you move the handle endpoints.
  3. Press V again and choose Free. Again change the handle endpoints.
  4. Try out how the different curve shapes you can produce influence the car behaviour.

Now let's animate another car: the one at the start of the road with the bend in it.

  1. Animate the second car to move over the bent road all the way to the end.
"},{"location":"advanced/animation/2_assignment_cars/#bonus","title":"Bonus","text":"

Make the cars drive over the road, choosing yourself which cars go in which direction, how fast, which turns are made, etc. But don't make cars go through each other, and have them wait if needed.

Add a camera that shows the busy streets in action :)

"},{"location":"advanced/animation/3_assignment_flipbook/","title":"Flipbook animation","text":"

As mentioned in the animation chapter's video, flipbook animation is a simple animation technique in which a mesh is changed over time. Such a changing mesh occurs quite frequently in (scientific) simulations.

In general there are two different situations when it comes to an animated mesh:

  • The mesh topology stays fixed over time, but its vertex positions change each time step
  • The mesh topology and its vertices change over time

The exercise below shows a general technique for handling any set of animated meshes (so for both types above), which are loaded individually from files. This technique has no restrictions on changing mesh topology, but is somewhat involved as it uses a Python script to set up the animation.

Below we also describe two modifiers that are available in Blender, each usable for one of the types above.

"},{"location":"advanced/animation/3_assignment_flipbook/#using-python-to-set-up-an-animated-mesh","title":"\ud83d\udcbb Using Python to set up an animated mesh","text":"

Here, we'll get more familiar with the flipbook animation approach, in which a series of meshes is animated over time by switching a single object's mesh data each frame.

  1. Extract dambreak.tar.gz in the same directory as animated_ply_imports.blend. These files are located in the data/advanced/animation directory.
  2. Load animated_ply_imports.blend

    This blend file contains not only a 3D scene, but also some Python scripts we use to set up the flipbook animation.

  3. The first step is to load the whole dataset of timesteps using one of the scripts. This might take a bit of time, depending on the speed of your system.

    Execute the script that imports the PLY files for the time steps. To do this step make sure the script called 1. import ply files is shown in the text editor panel. Then press the button in the top bar to run the script.

    Tip

    By default, only the first 100 steps are loaded. You can increase the number of files to the full 300 if you like by updating the variable N in both the import script and the animation handler script.

  4. The cursor changes to a numbered black square indicating the percentage of loading that has been completed. In case you get the idea something is wrong, check the console output in the terminal where you started Blender to see if there are any error messages.

  5. After all PLY files are loaded execute the script that installs the frame change handler. This script is called 2. register anim handler. Make sure the text editor is switched to this script and press the play button. (A sketch of what such a frame change handler boils down to is shown after this exercise.)

  6. Verify that the flipbook animation works with Space and/or moving the time slider in the Timeline with Shift-RMB.

    The playback speed will not only depend on the framerate setting, but also on your system's performance

  7. Change the Frame Rate value (in the Output properties tab at the right side of the screen, icon ) to different values to see how your system handles it. Is 60 fps feasible?

Use your skills with keyframe animation to do one of the following things (or both if you feel like it ;-)):

  • Have a camera follow the moving water in some cool way
  • Place a surfer on the moving wave of water. You can import the PLY model silver_surfer_by_melic.ply to use as 3D model. You can load it in Blender with File > Import > Stanford (.ply).
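For reference, the core of such a frame change handler looks roughly like the sketch below. This is a simplification of the script in the blend file; the object and mesh names are assumptions:

import bpy

N = 100   # number of loaded time steps (the import script's default)

def flipbook_update(scene):
    # Swap the displayed mesh data based on the current frame
    step = scene.frame_current % N
    obj = bpy.data.objects["fluid"]                  # hypothetical object name
    obj.data = bpy.data.meshes["step.%04d" % step]   # hypothetical mesh naming scheme

bpy.app.handlers.frame_change_pre.append(flipbook_update)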
"},{"location":"advanced/animation/3_assignment_flipbook/#alternatives-using-modifiers","title":"Alternatives using modifiers","text":"

The above method uses a bit of a hack with Python to set up mesh changes over time. Although it's flexible (it can work with any type of file format by editing the import code), it is also a bit fragile, needs to load all meshes in memory all at once, etc.

In recent versions of Blender two modifiers were introduced that can be used for similar animation setups, although they each have their limitations. We describe them here in case they are useful for certain situations you might encounter.

"},{"location":"advanced/animation/3_assignment_flipbook/#mesh-sequence-cache-modifier","title":"Mesh Sequence Cache Modifier","text":"

The Mesh Sequence Cache Modifier takes one or more Alembic or USD files and sets up a time-varying mesh from those. The animated mesh data can either come from a single file (containing multiple time steps), or from multiple files (each containing a single time step).

The limitation of only supporting Alembic and USD file formats is somewhat unfortunate, but understandable, since those formats support storing animated meshes in a single file and they are used extensively in visual effects and animation.

If you want to use this modifier then you need to create an Alembic or USD file (or set of files) containing your animated mesh. If you then import that file the Mesh Sequence Cache modifier will be added automatically to set up the animation.

Tip

An example USD file to load can be found in data/advanced/animation/animated_plane.usdc. The file was created by exporting the example animation described below (involving gen_pc2_anim.py) from Blender to a USD file.

"},{"location":"advanced/animation/3_assignment_flipbook/#mesh-cache-modifier","title":"Mesh Cache Modifier","text":"

The Mesh Cache Modifier works somewhat differently in that it is applied to an existing mesh object and will animate the vertex positions (only) of that mesh. The modifier supports reading the animated vertex data from an MDD or PC2 file.

Fixed mesh topology

The animated vertex data in the MDD or PC2 file is assumed to use the same vertex order over all time steps. The animated mesh also cannot have a varying number of vertices or a changing topology.

This means that, for example, the animated wave dataset from the exercise above cannot be represented as a series of .pc2 files, as the mesh size in vertices and its topology changes.

The MDD file format is mostly used to exchange data with other 3D software, while PC2 is a simple, general point-cloud caching format. Blender contains add-ons for exporting MDD and PC2 files, but they are not enabled by default. When enabled, you can use them to convert a mesh sequence in a different format to one of these.

The PC2 file format is very simple, and can easily be written from, say, Python or C++. The format looks like this (based on information referenced here, and example Python code here):

  • The start of a .pc2 file is a 32-byte header containing:

    char    cacheSignature[12];   // 'POINTCACHE2' followed by a trailing null character
    int32   fileVersion;          // Currently 1
    int32   numPoints;            // Number of points (i.e. vertices) per sample
    float   startFrame;           // Frame number where animation starts
    float   sampleRate;           // Duration of each sample *in frames*
    int32   numSamples;           // Defines how many samples are stored in the file
  • Following the header, each set of point positions (collectively called a \"sample\") is stored consecutively. Each sample is stored one after the other as a flat array of x/y/z 32-bit floats for each point. So each sample uses numPoints * sizeof(float) * 3 bytes.

All in all, a .pc2 file provides a fairly compact method of storing a set of animated mesh vertices. Together with the Mesh Cache modifier they can be used to easily set up a mesh animation, for cases where only vertex positions need to be animated.
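As an illustration, here is a minimal sketch of a PC2 writer in Python. It assumes samples is a list of frames, each a list of (x, y, z) tuples of the same length; see gen_pc2_anim.py for a complete example:

import struct

def write_pc2(filename, samples, start_frame=1.0, sample_rate=1.0):
    num_points = len(samples[0])
    with open(filename, "wb") as f:
        # 32-byte little-endian header, as described above
        f.write(struct.pack("<12siiffi", b"POINTCACHE2\0", 1,
                            num_points, start_frame, sample_rate, len(samples)))
        # Each sample is a flat array of x/y/z 32-bit floats
        for sample in samples:
            for x, y, z in sample:
                f.write(struct.pack("<fff", x, y, z))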

Tips

  • Note that the topology of the animated mesh is not stored in the .pc2 file and needs to be defined by creating a mesh in Blender first. After that, apply the Mesh Cache modifier and set the .pc2 file to use.
  • You can update the .pc2 file without having to re-apply the modifier. Blender will re-read the file when the frame number changes.
  • See data/advanced/animation/gen_pc2_anim.py for a simple example of generating and using a .pc2 file.
"},{"location":"advanced/animation/introduction/","title":"Introduction","text":"

The basics of (keyframe) animation in Blender were already discussed in the Basics course, but if you need to refresh your memory you can use this video:

"},{"location":"advanced/animation/shape_keys/","title":"Shape keys","text":""},{"location":"advanced/animation/shape_keys/#overview","title":"Overview","text":"

Shape keys can be used for a very specific type of animation: to morph one mesh into another over time, or to blend multiple meshes together into one result. This can be used, for example, to show the time-evolution of some object or to highlight differences between two meshes. Although this is a fairly specific use case, shape keys aren't too difficult to understand and use, hence we include this section.

There are some limitations to using shape keys:

  • The two meshes must have the same number of vertices
  • Preferably the two meshes should have the same topology (i.e. the way in which the vertices are connected to form polygons). If the topology doesn't match then strange results during morphing can occur.

The above are fairly annoying limitations, but there is currently no easy way around them in Blender.
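Incidentally, shape keys can also be created and driven from Python. A minimal sketch, assuming the object to morph is the active object:

import bpy

obj = bpy.context.object
obj.shape_key_add(name="Basis")                        # the reference shape
key = obj.shape_key_add(name="Key 1", from_mix=False)

# Offset every vertex of the new key; normally you would
# copy the coordinates of another mesh here
for pt in key.data:
    pt.co.z += 0.1

key.value = 0.5   # influence of the shape key (0.0 - 1.0)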

"},{"location":"advanced/animation/shape_keys/#poor-bunny","title":"\ud83d\udcbb Poor Bunny","text":"
  1. Load bunny_shape_keys.blend
  2. This scene contains the Stanford Bunny and a completely flattened version of the Bunny
  3. Verify that these meshes have the same number of vertices. Do a visual comparison in wireframe mode (Z > Wireframe)

We'll now add some shape keys:

  1. Select the regular Bunny.
  2. Add a shape key under Shape Keys in the Mesh properties using the + button. The new shape key will be called Basis.
  3. Add a second shape key, it will be called Key 1 and have a default influence of 0.000.
  4. Select the Key 1 shape key and enter mesh edit mode in the 3D view with TAB and make sure you're in vertex mode by pressing 1
  5. Select parts of the Bunny mesh and transform them as you like. The changes should be clearly visible.
  6. Exit mesh edit mode with TAB. You should notice that the mesh returns to its normal shape.
  7. Change the influence Value of Key 1 to see what happens to the resulting mesh. You can either click on it and enter a number, or click and drag the value.

Let's add another shape key:

  1. Add a third shape key, it will be called Key 2.
  2. Select Key 2 and apply a second set of mesh changes in edit mode.
  3. Once again exit edit mode.
  4. Play around with the influence values of both shape keys, as well as the checkboxes next to the influence values.

Checking the difference between relative and absolute shape keys:

  1. Uncheck the Relative checkbox to switch to absolute shape keys. Notice that the influence values have now disappeared.
  2. Change the Evaluation Time value to understand how the morphing of the meshes is done now.

Using another mesh to define a shape key:

  1. Delete shape keys Key 1 and Key 2 using the - button and change back to relative shape keys by checking the Relative checkbox.
  2. Select the flattened mesh and then Shift-click the Bunny mesh to add it to the selection and make it the active object.
  3. Open the shape key menu using the downwards arrow below the + and - buttons. Select Join as Shapes.
  4. There should now be a new shape key called flattened mesh. Note that this shape key is only set on the Bunny mesh, not on the flattened mesh object.
  5. Vary the influence of the shape key called flattened mesh to see the Bunny melt.
  6. Delete the flattened mesh object in the Outliner. Does the shape key that morphs the Bunny to its melted flat shape still work?

Looking closer at the behaviour of the mesh morphing:

  1. Try to reason why the head of the Bunny is the last part to melt.
  2. Zoom in a bit to see if you can spot the twisting motion that the mesh makes as it melts.
  3. Try to transform the mesh in the melted shape key in such a way as to minimize the twist. Or toy around with other mesh transforms to see what morphs come out. Note that you need to make changes in edit mode.
"},{"location":"advanced/final_project/final_project/","title":"\ud83d\udcbb Final project: making a visualization of your own data","text":"

We would like you to spend the remainder of your time in this course on doing this little project. We have two options for you to choose from. The first and recommended one is making a visualization of your own (research) data. The second option is that you work on a visualization of data we have prepared.

Do not forget, if you are stuck, to join us on Discord or in a feedback webinar so we can help. See the Course overview for more information.

If you made a nice visualization and still have time left in the course, why not make an animation?

"},{"location":"advanced/final_project/final_project/#option-1-your-own-data","title":"Option 1: your own data","text":"

So far you have learned how to make meshes and vertex colors in Blender using Python. So, think about whether you can visualize your data using these techniques. You need to think about what you need to do to transform your data into a form that can be used to generate vertices, faces and vertex colors. And how do you want to visualize your data values? Can you visualize them through the Cartesian coordinates of the vertices and faces and maybe some colors? Do you need to use vertex coloring? Or do you need something else? Note that volumetric data will be difficult in Blender and you may need to think of some tricks.

"},{"location":"advanced/final_project/final_project/#option-2-visualize-a-computer-model-of-a-proto-planetary-disk","title":"Option 2: visualize a computer model of a proto-planetary disk","text":"

Although we highly recommend working on your own data, if you have none to use you can work on the following data. Here we give a brief introduction to the data.

"},{"location":"advanced/final_project/final_project/#what-is-a-proto-planetary-disk","title":"What is a proto-planetary disk","text":"

A proto-planetary disk is a disk-like structure around a newly born star. This disk is filled with dust (solid-state particles with a diameter in the order of 1 micrometer) and gas. In the course of time this dust and gas can coalesce into planets. In this option we will look at a computer model of the dust in such a disk. The model calculates the temperature and density of the dust in the disk, taking the radiation and gravity of the star into account.

The calculations of the software (called MCMax) are done iteratively using Monte Carlo techniques. Packages of photons are emitted by the star in random directions and their wavelength sampled from the radiation distribution of the star (by default a blackbody). Using the absorption, scattering and emission properties of the dust grains in the disk, the scattering, absorption and re-emission of the photons are calculated throughout the disk. This is used to calculate a temperature structure in the disk. This temperature is then used to adapt the starting density structure of the disk after which a new pass is done by tracking a next set of photons and adapting the density subsequently. This is repeated until convergence is reached. The code uses a two dimensional (adaptable) grid in the radial and theta direction. The disk is assumed to be cylindrically symmetric around the polar axis (z-axis, see Fig. 1). The grid cell size is lowered in regions where the density becomes high.

Figure 1: definition of coordinates

"},{"location":"advanced/final_project/final_project/#how-to-start-visualizing-such-a-proto-planetary-disk","title":"How to start visualizing such a proto-planetary disk","text":"

You could create a 3D model of the disk at constant density and display the temperature as colors on the surface of the model. You could use this to make nice renders and animations to show the temperature structure of the disk. For this we need to pre-process the data from the model to get the spatial coordinates of the disk at a constant density. These coordinates then need to be converted into Cartesian coordinates of vertices and faces before creating the geometry in Blender. You can then add the temperatures to the faces using vertex coloring and by adding the needed shaders to the model.

"},{"location":"advanced/final_project/final_project/#how-the-model-data-is-structured","title":"How the model data is structured","text":"

You can download the data here. An example output file of the modeling code MCMax is shown below.

# Format number
     5
# NR, NT, NGRAINS, NGRAINS2
   100   100     1     1
# Spherical radius grid [cm] (middle of cell)
   7479900216981.22
   7479900572789.07
[...]
# Theta grid [rad, from pole] (middle of cell)
  9.233559849414326E-003
  2.365344804038962E-002
[...]
# Density array (for ir=0,nr-1 do for it=0,nt-1 do ...)
  1.001753516582521E-050
  1.001753516582521E-050
[...]
# Temperature array (for ir=0,nr-1 do for it=0,nt-1 do ...)
   1933.54960366819
   1917.22966277529
[...]
# Composition array (for ir=0,nr-1 do for it=0,nt-1 do ...)
   1.00000000000000
   1.00000000000000
[...]
# Gas density array (for ir=0,nr-1 do for it=0,nt-1 do ...)
  1.001753516582521E-048
  1.001753516582521E-048
[...]
# Density0 array (for ir=0,nr-1 do for it=0,nt-1 do ...)
  1.001753516582521E-050
  1.001753516582521E-050
[...]
The file is structured in a way the scientist thought best at the time, using the tools at hand. For us it is important to notice NR and NT, which stand for the number of radial and theta points respectively (NGRAINS is related to the number of different types of dust grains in the disk and you can ignore this). Further, the output file lists the radius points and after that the theta points. Subsequently the density and temperature values are listed by iterating over the radius and then the theta indices. The units of all the values in the MCMax output are: R[cm], Theta[radians], Density[gr/cm^3], Temperature[K].

The data from the MCMax code is in spherical coordinates, while the system in Blender works with Cartesian coordinates. The theta in the output is defined as the angle with the z-axis (See Fig. 1).
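Converting the model's grid to Cartesian coordinates then comes down to the standard spherical-to-Cartesian formulas. A small sketch, where you sweep the azimuthal angle phi yourself, since the disk is cylindrically symmetric:

import numpy as np

def spherical_to_cartesian(r, theta, phi):
    # theta is the angle from the polar (z) axis, as in the MCMax output
    x = r * np.sin(theta) * np.cos(phi)
    y = r * np.sin(theta) * np.sin(phi)
    z = r * np.cos(theta)
    return x, y, z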

"},{"location":"advanced/final_project/final_project/#how-it-could-look","title":"How it could look","text":"

To help you get an idea of what the data of the proto-planetary disk might look like, check this video we made:

"},{"location":"advanced/mesh_editing/introduction/","title":"Introduction","text":"

Info

This chapter is an extension of the Basics course Simple mesh editing chapter, so the walkthrough of that chapter should suffice as background.

This chapter will give you an introduction to the Edit mode of the 3D viewport, where you will learn how to patch up your imported meshes/visualizations and even how to generate your own 3D shapes.

To refresh your memory on basic mesh editing you can watch the Simple mesh editing intro video of the Basics part below:

"},{"location":"advanced/mesh_editing/mesh_editing_assignment/","title":"\ud83d\udcbb Mesh Editing with the Edit mode","text":"

This assignment will be a brief introduction to the Edit mode in the 3D viewport.

Once you have opened the exercise blend file sme_assignment.blend you'll see the familiar fish iso-surface above a plane.

"},{"location":"advanced/mesh_editing/mesh_editing_assignment/#getting-familiar-with-the-edit-mode","title":"Getting familiar with the Edit mode","text":"

To edit the mesh we first need to go to Edit mode with the fish.

  1. Select the fish and enter Edit mode by pressing Tab. Depending on the speed of the system you're working on, edit mode might be entered instantly or might take half a second. In general, switching to edit mode takes longer for larger meshes.

Now you will be able to see all the vertices, edges and faces that make up the 3D model. You will now try to select and move around some vertices, edges and/or faces.

  1. Change the Mesh Select Mode to Vertex by pressing 1 (or click the left icon in the 3D view header). This might already be active by default, but it will be highlighted on the icons in the 3D view header.
  2. Before you start selecting, de-select all currently selected vertices by pressing Alt-A or by pressing A twice rapidly.
  3. Now try to select a single vertex by clicking on it with the LMB, or multiple with Shift-LMB. You might have to zoom in a bit to separate the vertices enough.
  4. Another method is to use the selection tools:
    1. Box selection by pressing B and dragging a box around the vertices you want to select. Hold Shift to de-select.
    2. Circle selection by pressing C and left-clicking and dragging with the mouse over the vertices you want to select. To increase the size of the Circle selection tool simply scroll with your mouse Wheel. With MMB and dragging you can de-select vertices. Press Enter to exit circle select mode (or with RMB ).
  5. Once you selected your vertices you can transform them the same way you can do with objects by pressing the hotkeys G for translation, R for rotation, and S for scaling, etc.
  6. Now that you have done some vertex editing, the fish probably looks a bit scrambled. One way to clean it up is, of course, using Ctrl-Z to undo. Another way is simply deleting the vertices using the Delete popup menu X > Vertices. Try to remove part of the fish skin so that it leaves a hole in the mesh, which will reveal a part of the inside of the fish.

Tip!: If your fish has been \"meshed-up\" beyond repair you can always revert it to the last saved state with: File > Revert > Confirm.

"},{"location":"advanced/mesh_editing/mesh_editing_assignment/#filling-the-holes","title":"Filling the holes","text":"

An imported mesh from a 3D visualization program can sometimes contain unwanted holes or separations in parts of the mesh; these can also be fixed in Edit mode. Conveniently, the fish in the exercise file was already poked full of holes, so you can fix these.

In between: To better inspect whether there are any holes left you can switch back and forth between Object mode and Edit mode, because in Object mode they are easier to see.

  1. First, make sure the whole mesh is selected by pressing A, then remove the small holes (the size of one triangle/quad) by pressing F3 in the 3D viewport in Edit mode, typing in fill holes, and pressing Enter or clicking on it with LMB (this might take some time). This already cleans up a lot of the holes in the geometry!
  2. Through inspection you might notice there are some bigger holes that were not filled yet, because they were too large and so were skipped by the previous step. To fill these they first need to be selected, by first de-selecting everything with Alt-A and then pressing F3, typing in non manifold, and pressing Enter or clicking on it with LMB.
  3. This selected the big holes, but also other non-manifold geometry. To select only one of the holes hold CTRL+SHIFT and drag with LMB over one of the holes. This de-selects everything except what was in the drag-box.
  4. Now this selected hole can easily be fixed by pressing f.
  5. Repeat step 2 to 4 for the other 2 holes.

Tip!: The fill with f fills the hole with an n-gon, a face with more than 4 vertices. These can sometimes create shading artifacts in your final render. Another way to fill these holes is to use grid-fill (ctrl+f), which tries to fill the hole with a grid of quad-shaped faces. This might not always work, for numerous reasons (an uneven number of vertices, closed loops, etc.), which can be fixed with additional mesh editing, but the easy route would be to fill it with an n-gon face.
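This clean-up can also be scripted with the corresponding operators, which is handy for meshes with many holes. A minimal sketch, assuming the fish object is active and in Edit mode:

import bpy

bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.fill_holes(sides=4)           # fill the small (up to 4-sided) holes

bpy.ops.mesh.select_all(action='DESELECT')
bpy.ops.mesh.select_non_manifold()         # select the remaining open boundaries
bpy.ops.mesh.fill()                        # fill them with n-gons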

"},{"location":"advanced/mesh_editing/mesh_editing_assignment/#separating-skin-from-bones","title":"Separating skin from bones","text":"

Now that you got a little familiar with mesh editing you can try to separate the skin from the bones by using mesh separation.

  1. While still in edit mode (press Tab if not), try to select all the outside skin with the select linked selection by hovering the mouse cursor over the geometry and pressing L. This will only select a connected part of the skin so continue this step until you think you selected all the outside skin. Note that it is difficult to do this perfectly, as some of the insides of the fish are sometimes also selectable. Unfortunately, this occurs frequently with this type of sensor-based 3D data.
  2. Once you think all the skin is selected you can press P and select Selection to separate the selected surfaces from the main mesh into another mesh object. This new mesh will be added to the Outliner with the name fish.001.
  3. In the Outliner double-click LMB on the mesh object fish.001 to rename it to fishskin. Do the same for the fish mesh object and rename it to fishbones.
  4. If you now select the fishskin mesh object and hide it by clicking the little eye icon in the Outliner, the insides of the fish will be revealed.

Tips!:
  • To reverse the separation of the mesh into bone and skin you can select both mesh objects in Object mode and press Ctrl-J to join them back together into a single mesh.
  • Sometimes X-ray mode, toggled with Alt-Z, can be useful when editing a complex mesh, as it makes all geometry in a mesh partly transparent.

"},{"location":"advanced/mesh_editing/mesh_editing_assignment/#bonus-make-your-own-annotation-arrow","title":"(BONUS) Make your own annotation arrow","text":"

Since the content of this course is mostly geared towards imported geometry or scripted geometry, you might not directly think about manually created geometry. This bonus exercise, however, will show you that it is relatively easy to create your own geometry in Blender. Let's start your manual mesh creation with an annotation arrow!

  1. In the 3D viewport make sure you are in Object mode and add a new cylinder with Shift-A > Mesh > Cylinder.
  2. Press / to isolate the mesh so that there are no distractions. This can be reversed again by pressing /.
  3. Press Tab to go into Edit mode.
  4. Grab the selected geometry by pressing g, press z to move it along the z-axis only, and press 1 to move it 1 unit up, so that the origin is at the bottom.
  5. De-select all the geometry with Alt-A and press 1 to set the select mode to Vertex and select all the bottom vertices (with LMB-drag over the vertices or with the b Box-select).
  6. Press s to scale them to a tiny point and press LMB to confirm.
  7. Now select the top vertices the same way you did with the bottom vertices, making sure that none of the bottom vertices are selected.
  8. Press i to inset the faces and move your mouse until you are satisfied with the width of the arrow shaft.
  9. Press e to extrude the selection and move the mouse up until you are satisfied with the length of the arrow shaft.
  10. Now press Tab and admire your newly created arrow!
  11. The arrow might now be a bit too big compared to the fish, so scale the arrow down with s, move it to a point of interest with g and rotate the arrow to your liking with r (which is relatively easy because we placed the origin at the arrow's tip)

Now that you have been introduced to Edit mode, and to switching back and forth between it and Object mode, you need to be aware of which mode you are in before adding new geometry or before using one of the transform operations (grab, scale and rotate). Otherwise you might add geometry to an already existing object instead of adding a new 3D object, or you might move, scale or rotate geometry in Edit mode and inadvertently change the origin of the object. This can be confusing sometimes, but you'll get used to it!

"},{"location":"advanced/python_scripting/1_api_basics/","title":"Blender API basics","text":""},{"location":"advanced/python_scripting/1_api_basics/#introduction","title":"Introduction","text":"

Blender embeds a Python interpreter, which is used for multiple tasks. It is a central feature of Blender: large parts of the user interface are set up and controlled from Python, and all add-ons (import/export, tools, etc.) are written in Python.

As a user you can run scripts directly in this interpreter and also access Python modules provided by Blender, like bpy and mathutils, to work with scene elements. The bpy module gives access to Blender's data, functions and classes. In this section we will focus on using the Python API for automation, custom data import and manipulating geometry, but this is of course not all that is possible with the API. The official API manual states that the following things are possible using the Python API:

  • Edit any data the user interface can (Scenes, Meshes, Particles etc.).
  • Modify user preferences, key-maps and themes.
  • Run tools with own settings.
  • Create user interface elements such as menus, headers and panels.
  • Create new tools.
  • Create interactive tools.
  • Create new rendering engines that integrate with Blender.
  • Subscribe to changes to data and its properties.
  • Define new settings in existing Blender data.
  • Draw in the 3D view using Python.

All in all, the Python API is very powerful.

More detailed Python API reference

In these chapters we provide an introduction to the Python API, using a number of examples. After finishing these chapters you can find a more extensive description of often-used Python API features in the separate API section.

"},{"location":"advanced/python_scripting/1_api_basics/#good-to-know","title":"Good to know","text":"

Before we continue, we list some bits of information and some tricks that are good to know.

  • Blender uses Python 3.x, specifically 3.10 in Blender 3.1
  • You can access the online API documentation from within Blender with Help > Python API Reference
  • Starting Blender from the console will allow you to see important output channels (warnings, exceptions, output of print() statements, etc). See the next section for how to do this.
  • The Python Console area in Blender is great for testing Python one-liners. It also has auto-completion so you can inspect the API quickly. Example code shown with >>> lines in our course notes is assumed to be running in the Python Console.

    Python Console versus terminal console

    The Python Console is different from the console we refer to below. The Python Console is an area within the Blender user interface in which you can enter and execute Python commands:

    The other type of \"console\" is a terminal window or DOS box from which you start Blender. This console will contain any output and exceptions from Python scripts that you run:

  • By enabling the Python Tooltips option in the Preferences under Interface > Display you can hover over almost any button, option or menu, and after a second a tool-tip is shown. This tool-tip shows information on how to access that element from the Python API.
  • Right-clicking on almost any button, option or menu in Blender gives you the option to 1) go directly to the API documentation with Online Manual or 2) Copy Data Path. Option 2 copies the Python API properties related to that element to your clipboard, for pasting into your script. Note, however, that sometimes only the last part of the path is copied instead of the full path.

In the upcoming sections we will first look at how to run Python scripts in Blender. Then we look at how to access Blender's data through scripts, and we follow this up with creating geometry, vertex colors and materials in the last section.

"},{"location":"advanced/python_scripting/1_api_basics/#starting-blender-from-the-command-line","title":"Starting Blender from the command line","text":"

It is important, when scripting, to start Blender from a command line interface (on macOS and Linux), as warnings, messages and print() statements are output to the console. How to start Blender from the command line depends on your operating system.

  • For macOS it would be like this:

    /Applications/Blender.app/Contents/MacOS/Blender\n
  • For Linux it would be something like:

    $ <blender installation directory>/blender\n
  • On Windows you can start Blender normally (i.e. from the Start menu) and then use Window > Toggle System Console to open the console window from within Blender.

For more information on where the Blender executable is located on your system, and where other Blender directories of interest are located, see this manual page.

"},{"location":"advanced/python_scripting/1_api_basics/#starting-blender-from-the-console","title":"\ud83d\udcbb Starting Blender from the console","text":"

Find the Blender executable on your machine. Open Blender through the console. Delete the cube in the default project of Blender, what output is shown in the console?

"},{"location":"advanced/python_scripting/1_api_basics/#running-scripts-within-the-blender-interface","title":"Running scripts within the Blender interface","text":"

When scripting inside Blender it is convenient to use the Scripting workspace (see the arrow in Fig. 1 below). For running scripts within Blender you have two main options:

  • Using the interactive Python Console (Fig. 1A)
  • Using the built-in Text Editor (Fig. 1B)

The Python Console is very useful for testing lines of Python code, and for exploring the API using auto-complete (with TAB) to see what is available. The keyboard shortcuts are a bit different from what you might be used to in other text editors. See this section in the Blender manual for an overview of menu options and shortcut keys.

Blender also has its own built-in text editor which you can use (Fig. 1B) to edit Python code and execute it by pressing the button in the top bar, or using Alt-P. Note that you can have multiple different text blocks, each with their own code.

If you want to use your own editor to edit your scripts you can do this by opening the script in both the Blender Text Editor and your own editor. To refresh the Blender Text Editor use Text > Reload or Alt R (or Option R on the Mac). You can also make a script that you open in the Blender Text Editor that executes an external script you edit in your own editor. See for example the script in Fig. 1B.

Figure 1: The Scripting workspace in Blender

"},{"location":"advanced/python_scripting/1_api_basics/#running-scripts-from-the-command-line","title":"Running scripts from the command-line","text":"

You can also run Python scripts in Blender directly from the command-line interface. An example of executing a script (-P) without opening the Blender GUI (-b, for background) would be:

blender -b -P script.py\n

You can combine running a Python script with, say, rendering the first frame (-f 1) from an example test.blend file. The output will go to the directory of the blender file (-o //...) and it will generate a PNG image file (-F PNG):

blender -b test.blend -o //render_ -F PNG -f 1\n

More information on command line arguments is here.

"},{"location":"advanced/python_scripting/1_api_basics/#custom-script-arguments","title":"Custom script arguments","text":"

You might want to pass extra arguments to your script, for example to provide a frame range, or file name. For this, Blender provides the -- marker option. Any arguments passed to Blender that follow -- will not get processed by Blender, but are passed in sys.argv:

# useargs.py\nimport sys\n\n# sys.argv.index() raises ValueError when '--' is absent (it never returns -1),\n# so test for the marker before looking up its position\nargs = []\nif '--' in sys.argv:\n    idx = sys.argv.index('--')\n    args = sys.argv[idx+1:]\n\nprint(args)\n# Do something with values in args\n
$ blender -b -P useargs.py -- -myopt 1,2,3\nBlender 3.1.2 (hash cc66d1020c3b built 2022-04-02 14:45:23)\nRead prefs: /home/melis/.config/blender/3.1/config/userpref.blend\n['-myopt', '1,2,3']\n\nBlender quit\n

You can then parse these custom arguments using a regular Python module like argparse.
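
As a minimal sketch, parsing the arguments from the example above with argparse could look like this (-myopt is the hypothetical option used in the example):

# parseargs.py\nimport argparse\nimport sys\n\n# Everything after the '--' marker is ours to parse\nargv = sys.argv[sys.argv.index('--')+1:] if '--' in sys.argv else []\n\nparser = argparse.ArgumentParser()\nparser.add_argument('-myopt')  # hypothetical option from the example above\nargs = parser.parse_args(argv)\n\nprint(args.myopt)  # e.g. '1,2,3'\n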

"},{"location":"advanced/python_scripting/1_api_basics/#using-modules-and-external-scripts","title":"Using modules and external scripts","text":"

As we've shown above, there are multiple ways to run Python code within Blender: from a text editor block, from the Python Console or from the command-line. Usually you want to use Python modules or other scripts from the code you're running. Below we describe some common situations and how to handle them.

See this manual page for more tips and tricks related to working with Python scripting in Blender.

NumPy

The official binaries of Blender from blender.org include the numpy Python module, so if you need NumPy then import numpy should work out of the box.
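
For example, here is a minimal sketch of combining NumPy with the Python API, assuming the default scene with its Cube mesh. The foreach_get() method bulk-copies all vertex coordinates into a flat NumPy array:

import bpy\nimport numpy as np\n\nmesh = bpy.data.meshes['Cube']\n\n# Bulk-copy all vertex coordinates (x, y, z per vertex) into a flat array\ncoords = np.empty(len(mesh.vertices) * 3, dtype=np.float32)\nmesh.vertices.foreach_get('co', coords)\ncoords = coords.reshape((-1, 3))\n\nprint(coords.mean(axis=0))  # centroid of the mesh, in object space\n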

"},{"location":"advanced/python_scripting/1_api_basics/#loading-modules-in-blender","title":"Loading modules in Blender","text":"

For modules you want to import you can use the normal Python method of editing sys.path (as needed) and importing the module:

# Example code run from a text block within Blender\nimport os\nimport sys\n\nimport bpy\n\n# A path somewhere on your file system\nsys.path.append(\"/some_directory/\")\n\n# Or a path relative to the current blender file\nblendfile_location = os.path.dirname(bpy.data.filepath)\nsys.path.append(blendfile_location)\n\n# Import module\nimport my_python_module\n\n# Call a function from the module\nmy_python_module.do_something()\n

However, suppose you keep Blender running and edit my_python_module.py to update do_something(). Re-executing the above code will not pick up the changes in the module you're importing. The reason for this is that the Python interpreter doesn't reload a module that is already loaded, so the import my_python_module has no effect the second time it is called.

To force a module to get reloaded you can use the importlib module:

import my_python_module\n\n# Force reload\nimport importlib\nimportlib.reload(my_python_module)\n\nmy_python_module.do_something()\n

Note that this will reload the module from disk every time you run the above piece of Python code.

"},{"location":"advanced/python_scripting/1_api_basics/#executing-external-scripts","title":"Executing external scripts","text":"

To execute a Python script you can use the following:

# Execute script_file \nexec(compile(open(script_file).read(), script_file, 'exec'))\n

You could, for example, put this snippet of code in a text block and execute it every time you need to run it (or even paste it in the Python Console). This is a fairly simple way of executing externally stored Python code, while still being able to edit the external script as needed.

"},{"location":"advanced/python_scripting/1_api_basics/#adding-startup-scripts","title":"Adding startup scripts","text":"

You might want to permanently run one or more Python scripts when Blender starts. You can add these scripts in a special configuration directory. The location to place these scripts is system-dependent (see this manual page for details). In general you want to place the scripts within the \"USER\" location of the platform you're working on:

  • Windows: %USERPROFILE%\\AppData\\Roaming\\Blender Foundation\\Blender\\3.1\\
  • Linux: $HOME/.config/blender/3.1/
  • macOS: /Users/$USER/Library/Application Support/Blender/3.1/

Inside the above directory create a scripts/startup directory. Any .py files placed there will be automatically executed when Blender starts. See this page for other special directories within the system-specific USER directory.
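
As a minimal sketch, a startup script could look like this (the file name my_startup.py is just an example):

# <USER>/scripts/startup/my_startup.py\nimport bpy\n\n# Top-level code runs when Blender imports this module at startup\nprint('Startup script loaded, Blender', bpy.app.version_string)\n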

"},{"location":"advanced/python_scripting/2_accessing_data/","title":"Accessing Blender data","text":""},{"location":"advanced/python_scripting/2_accessing_data/#using-bpydata","title":"Using bpy.data","text":"

All data in a Blender file can be accessed through bpy.data. This contains, for example, all objects (bpy.data.objects), all meshes (bpy.data.meshes), all scenes (bpy.data.scenes) and all materials (bpy.data.materials).

The data is stored in a data-type called bpy_collection, whose members (data blocks) can be accessed both by index and by string (in contrast to regular Python dictionaries). For example, bpy.data.objects[\"Camera\"] and bpy.data.objects[0] will be equivalent if Camera is the first object in the collection:

>>> bpy.data.objects\n<bpy_collection[2], BlendDataObjects>\n\n>>> len(bpy.data.objects)\n2\n\n>>> bpy.data.objects[0]\nbpy.data.objects['Camera']\n\n>>> bpy.data.objects['Camera']\nbpy.data.objects['Camera']\n

Attributes of data blocks (e.g an object, collection or material) can be accessed as regular Python attributes, for example:

>>> bpy.data.objects[0].name\n'Camera'\n

Here are two examples of changing those attributes (note that some operations only work if Blender is in the right mode):

bpy.data.objects[\"Cube\"].location.z += 1              # this works in both edit and object mode\nbpy.data.objects[\"Cube\"].data.vertices[0].co.z += 10  # this works only in object mode\n

Tips

  • Use the Python Console in Blender and the auto-complete functionality (TAB) to see what attributes bpy.data has.
  • The Info Editor in Blender shows the Python commands being executed when you perform operations manually in Blender (see Fig. 2).
  • Hovering over buttons and input boxes in Blender shows how to access the underlying values through the Python API.

Figure 2: The Info Editor is a nice way to see which Python commands are executed when you use Blender. In this figure we see that we deleted the initial cube, made a UV Sphere and translated it.

"},{"location":"advanced/python_scripting/2_accessing_data/#some-notes-on-bpycontext-and-bpyops","title":"Some notes on bpy.context and bpy.ops","text":"

In this section we want to briefly introduce how you can access something called the context, and use operators in the Blender Python API. bpy.context stores information about a user's selections and the context Blender is in. For example, if you want to check which mode is currently active in Blender you can check the value of bpy.context.mode.

Now if you want to change the mode, you can use an operator. Operators are tools that are usually accessed through the user interface with buttons and menus; you can access them from Python through bpy.ops. For example, to change the mode we can use bpy.ops.object.mode_set(mode='OBJECT').

Of course, the possibility of switching to, say, edit mode depends on which objects are selected, which can be checked with bpy.context.selected_objects. But keep in mind that many of the variables in the context are read-only; for example, altering bpy.context.selected_objects directly is not possible. Instead, you can select an object with the select_set() method of the object, e.g. bpy.data.objects['Cube'].select_set(True). A small sketch combining these pieces is shown below.
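
Here is a minimal sketch combining context checks, selection and an operator (assuming the default scene still contains the Cube object):

import bpy\n\n# Make sure we are in object mode before changing the selection\nif bpy.context.mode != 'OBJECT':\n    bpy.ops.object.mode_set(mode='OBJECT')\n\n# Select the cube and make it the active object\ncube = bpy.data.objects['Cube']\ncube.select_set(True)\nbpy.context.view_layer.objects.active = cube\n\nprint(bpy.context.selected_objects)\n\n# Switching to edit mode now operates on the cube\nbpy.ops.object.mode_set(mode='EDIT')\n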

"},{"location":"advanced/python_scripting/2_accessing_data/#running-a-script-and-rendering-from-the-console","title":"\ud83d\udcbb Running a script and rendering from the console","text":"
  1. Write an external script that removes the Cube object that is part of the default scene 1
  2. Then, from the command line and without opening the Blender GUI execute this script and render the first frame. Let it output a PNG image file in the directory of the blender file.
  3. Was the cube indeed removed from the rendered image?
  4. Extra question: is the cube removed from the blender file?
  1. Although you might have altered your startup scene to not have the cube\u00a0\u21a9

"},{"location":"advanced/python_scripting/3_geometry_colors_and_materials/","title":"Geometry, colors and materials","text":""},{"location":"advanced/python_scripting/3_geometry_colors_and_materials/#creating-an-object-with-a-mesh","title":"Creating an object with a mesh","text":"

If we want to create a new mesh we can do this by calling the new function like this:

mesh = bpy.data.meshes.new(\"newMesh\")\n
This will create the mesh but it is not linked to an object (it will not show in the Outliner). So we make a new object and link the object to the mesh:
obj = bpy.data.objects.new(\"newObject\", mesh)\n

We can actually verify this worked correctly by checking the value of obj.data:

>>> obj.data\nbpy.data.meshes['newMesh']\n

If you check the Outliner in the user interface you will see both the object newObject and the mesh newMesh linked to it.

Now we have an empty mesh, linked to an object. We will now construct a simple piece of geometry to show how this is done in Blender. Vertices are defined by their x, y and z values like this:

verts = [ (0,0,0), (0,2,0), (0,1,2) ]\n

Edges are defined as a tuple holding two indices pointing to two vertices in the verts list. So (0,1) refers to a line from vertex (0,0,0) (index 0 in verts) to (0,2,0) (index 1 in verts) in this example. We make the following edges:

edges = [ (0,1), (1,2), (2,0) ]\n

To make faces we need three or more vertices. Per face you make a tuple of three or more indices pointing to vertices in the verts list. For example, the face (0,1,2) is a face made up of the vertices (0,0,0), (0,2,0) and (0,1,2), which are at index 0, 1 and 2 in the verts list. For now let's make one face:

faces = [ (0,1,2) ]\n

We now use a function from the Python API to make a mesh from our verts, edges and faces:

mesh.from_pydata(verts, edges, faces)\n

Now the mesh and the object are created, but they do not yet show in the 3D viewport or the Outliner. This is because we still need to link the new object to an existing collection, and in so doing to a scene.

bpy.data.collections[0].objects.link(obj)\n

To summarize, here is the full code to generate this geometry:

import bpy\n\n# Create a new mesh\nob_name = \"triangle\"\nmesh = bpy.data.meshes.new(ob_name + \"_mesh\")\n\n# Create a new object with the mesh\nob = bpy.data.objects.new(ob_name, mesh)\n\n# Define some geometry\nverts = [ (0,0,0), (0,2,0), (0,1,2) ]\nedges = [ (0,1), (1,2), (2,0) ] # These are indices pointing to elements in the list verts\nfaces = [ (0,1,2) ] # These are indices pointing to elements in the list verts\n\n# Add it to the mesh\nmesh.from_pydata(verts, edges, faces)\n\n# Link the object to the first collection\nbpy.data.collections[0].objects.link(ob)\n

Tips

  • Note that in general you do not need to explicitly specify mesh edges, as these will be generated automatically based on the faces specified. It's only when you want to have edges that are not connected to faces that you need to specify them explicitly.
  • All objects in Blender (and object data of the same type, i.e. all meshes) are enforced to have unique names. When using the Python API this is no different. So if you create an object with bpy.data.objects.new(\"obj\", mesh) and there already is an object named \"obj\" the name of the new object will be automatically set to something else. This can become important if you generate many objects (say in a loop) but still want to be able to refer to them later by name.
"},{"location":"advanced/python_scripting/3_geometry_colors_and_materials/#a-filled-disk-from-scratch","title":"\ud83d\udcbb A filled disk from scratch","text":"

In the text above we created a triangle; now, as an exercise, let's create a filled disk. First create a ring of vertices, then create edges and a face.

"},{"location":"advanced/python_scripting/3_geometry_colors_and_materials/#adding-vertex-colors-to-a-mesh","title":"Adding vertex colors to a mesh","text":"

Not seeing vertex colors?

In the video below there's an essential step that's only shown near the end (around 7:00), which is setting a material on the geometry. If the correct material isn't set, the vertex colors won't show.

Vertex coloring is a way to color a mesh without using textures or uv-mapping. It works by assigning a color to a vertex for each face that the vertex is a member of. So a vertex can have a different color for each of the faces it is in. Let's say we have a mesh named \"triangle_mesh\": mesh = bpy.data.meshes['triangle_mesh']. The vertex colors for this mesh will be stored in mesh.vertex_colors. If the mesh does not have a vertex color layer yet, you can make a new one with mesh.vertex_colors.new(name='vert_colors'). Now we have a color layer to work with: color_layer = mesh.vertex_colors['vert_colors'].

"},{"location":"advanced/python_scripting/3_geometry_colors_and_materials/#making-triangles-and-a-vertex-color-layer","title":"\ud83d\udcbb Making triangles and a vertex color layer","text":"

Let's take the triangle we made above, but let's add another triangle to it, attached to the first. The code would look like this:

import bpy\n\n# Create a new mesh\nob_name = \"triangle\"\nmesh = bpy.data.meshes.new(ob_name + \"_mesh\")\n\n# Create a new object with the mesh\nob = bpy.data.objects.new(ob_name, mesh)\n\n# Define some geometry\nverts = [ \n        (0,0,0), (0,2,0), (0,1,2) ,\n        (0,3,2)\n        ]\nedges = [ \n        (0,1), (1,2), (2,0),  \n        (1,3), (3, 2)\n        ] # These are indices pointing to elements in the list verts\nfaces = [ (0,1,2), (1,3,2) ] # These are indices pointing to elements in the list verts\n\n# Add it to the mesh\nmesh.from_pydata(verts, edges, faces)\n\n# Link the object to the first collection\nbpy.data.collections[0].objects.link(ob)\n

Now make a vertex color layer for your triangles. Then inspect how many entries are in color_layer = mesh.vertex_colors['vert_colors']. Why is this number the same as, or different from, the total number of vertices in the mesh?

In the exercise above we saw that color_layer.data contains six entries, while we only have four vertices in the mesh. This is because a vertex has a color for every face it is in. Vertices (0,2,0) and (0,1,2) are each in two faces, while the other two vertices are only in one face. So the former vertices have two entries in the color layer, one for each face they are in, while the latter have only one color entry each.

The link between vertex indices in a mesh and those in the vertex color layer can be deduced from the polygons in mesh.polygons. Let's take one polygon from the triangles, say the first (poly = mesh.polygons[0]). Now, for each vertex in the polygon, poly.vertices gives you the index of the vertex in the mesh and poly.loop_indices gives you the index of the vertex in color_layer.data. See Fig. 3 and the sketch after it.

Figure 3: Sketch of the two triangles from Exercise 4. For each vertex the figure shows its coordinates (in black italic (x, x, x)), its index in the mesh (green, outside of the face) and its index in the loop_indices of the polygon (red, italic and inside the faces).
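
A minimal sketch to inspect this mapping from the Python Console, assuming the two-triangle mesh and the 'vert_colors' layer created above:

import bpy\n\nmesh = bpy.data.meshes['triangle_mesh']\ncolor_layer = mesh.vertex_colors['vert_colors']\n\nfor poly in mesh.polygons:\n    # poly.vertices indexes mesh.vertices, poly.loop_indices indexes color_layer.data\n    for vert_index, loop_index in zip(poly.vertices, poly.loop_indices):\n        print(poly.index, vert_index, mesh.vertices[vert_index].co[:], loop_index, color_layer.data[loop_index].color[:])\n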

Once you have set colors for your vertices you need to set up the shader of the object. For this, go to the Shading workspace. Create a Vertex Color node and connect it to a Principled BSDF node (connect the Color output to the Base Color input). Then make a Material Output node and connect the Principled BSDF to the Surface input of the Material Output. See Fig. 4.

Figure 4: Shader setup for vertex colors

"},{"location":"advanced/python_scripting/3_geometry_colors_and_materials/#coloring-your-triangles","title":"\ud83d\udcbb Coloring your triangles","text":"

Let's take the two connected triangles of exercise 4. We will color them in two different ways, using vertex coloring and Python scripting:

  • Make the first triangle (face (0,1,2)) green and the second (face (1,3,2)) red.
  • Now color vertex (0,0,0) and (0,3,2) red and (0,2,0) and (0,1,2) green.
"},{"location":"advanced/python_scripting/3_geometry_colors_and_materials/#adding-a-material","title":"Adding a material","text":"

You can also add materials through the Python API. As an example of how you could do this, let's add a material to the triangle from exercise 4 in the last section. Materials are stored in bpy.data.materials, and we can make a new one:

# Make material\ntriangle_material_name = \"triangle_mat\"\nmat = bpy.data.materials.new(triangle_material_name)\n
The nodes and the node tree are stored in the material (node-based materials will be further described in another chapter).

mat.use_nodes = True\nnodes = mat.node_tree.nodes\n

Before we start making nodes we remove the automatically generated nodes.

nodes.clear()\n
We will make two nodes: a Principled BSDF shader and an output node. We can make the shader by creating a new node.

shader = nodes.new(type='ShaderNodeBsdfPrincipled')\n
You can look up the type name of a node in Blender in the following way: go to the Shading workspace and open the Add menu in the Shader Editor. Now go to Shader and hover over Principled BSDF until an information pop-up appears. In the pop-up you can find the node's type name. See Fig. 5.

Figure 5: The type name of a node can be found by navigating to the Add menu and hovering over the node of your interest

If you also want to organize the nodes in the Shader Editor you can place the node like this:

shader.location = 0, 300 # Location in the node window\n
We can set the inputs of the Principled BSDF shader through their default_value.

shader.inputs['Base Color'].default_value = (1,0,0,1)\n
We can now also make an output node and place it in the Shader Editor.

node_output = nodes.new(type='ShaderNodeOutputMaterial')\nnode_output.location = 400, 300\n
Links between nodes can be made using the links in the node_tree. A new link takes the output and input sockets of the nodes you want to connect.

links = mat.node_tree.links\nlinks.new(shader.outputs[0], node_output.inputs[0])\n
Now we only need to add the material to the triangle's mesh.

mesh.materials.append( mat )\n

In summary, the total code for making the material is:

# Make material\ntriangle_material_name = \"triangle_mat\"\nmat = bpy.data.materials.new(triangle_material_name)\n\nmat.use_nodes = True\nnodes = mat.node_tree.nodes\n\n# Clear default nodes\nnodes.clear()\n\nshader = nodes.new(type='ShaderNodeBsdfPrincipled')\nshader.location = 0, 300 # Location in the node window\nshader.inputs['Base Color'].default_value = (1,0,0,1)\n\n# Create an output for the shader\nnode_output = nodes.new(type='ShaderNodeOutputMaterial')\nnode_output.location = 400, 300\n\nlinks = mat.node_tree.links\nlinks.new(shader.outputs['BSDF'], node_output.inputs['Surface'])\n\nmesh.materials.append( mat )\n
"},{"location":"advanced/python_scripting/4_volumetric_data/","title":"Visualizing volumetric data through OpenVDB","text":"

In this section we will show a simple example of how to visualize custom volumetric data with Blender and Python. The current support in Blender for volumetric data is directly tied to the OpenVDB file format; in fact, the only way to create a volume object is to load an OpenVDB file. OpenVDB is a file format and data structure that originated in the motion-picture industry, where it is often used for clouds, smoke and fire in computer graphics for movies and games. Here's an example of such a volumetric rendering:

Gasoline explosion. Free example from Embergen.

The reason OpenVDB is used for many volumetric data applications in computer graphics is that it allows sparse volumes to be stored efficiently, while also providing easy querying of the data, for example during rendering. OpenVDB is also a bit more than just a file format, as the OpenVDB library supports more advanced operations as well. From the OpenVDB website:

OpenVDB is an Academy Award-winning C++ library comprising a hierarchical data structure and a suite of tools for the efficient manipulation of sparse, time-varying, volumetric data discretized on three-dimensional grids. It is based on VDB, which was developed by Ken Museth at DreamWorks Animation, and it offers an effectively infinite 3D index space, compact storage, fast data access, and a collection of algorithms specifically optimized for the data structure for common tasks such as filtering, CSG, compositing, numerical simulation, sampling, and voxelization from other geometric representations.

For more documentation on OpenVDB see here. Some example OpenVDB files can be found here, under Sample Models.

"},{"location":"advanced/python_scripting/4_volumetric_data/#example","title":"Example","text":"

OpenVDB models are mostly generated with specialized software like Houdini and Embergen. Volumetric data in general is also used for scientific visualization, for example in ParaView, but support for OpenVDB there is still somewhat lacking. In this section we will explain how OpenVDB files can be made from scratch, for example when you have your own volumetric data in your own data format and you want to visualize or animate it in Blender. To convert your data to the OpenVDB format we will use the Python package pyopenvdb.

First we will create data in Python and write it to an OpenVDB file using the Python package pyopenvdb.

"},{"location":"advanced/python_scripting/4_volumetric_data/#installation-of-pyopenvdb","title":"Installation of pyopenvdb","text":"

Installing the Python module to access the OpenVDB functionality can be very easy or more difficult, depending on your operating system. See the installation instructions on the pyopenvdb website.

Tip

If you cannot get it to work that way, we made a simple Docker container you can use to run it; see here for the GitHub repository.

"},{"location":"advanced/python_scripting/4_volumetric_data/#making-a-vdb-file-with-pyopenvdb","title":"Making a VDB file with pyopenvdb","text":"

Let us make a simple volumetric cube using pyopenvdb. To start we first load pyopenvdb and numpy:

import numpy as np\nimport pyopenvdb as vdb\n

And we make a zero filled array of size 400x400x400:

dimension = 400\narray = np.zeros((dimension, dimension, dimension))\n

We then fill a cube sized portion of the array with the value 1:

for i in range(dimension):\n   for j in range(dimension):\n      for k in range(dimension):\n         if i < 200 and i >= 100 and \\\n          j < 200 and j >= 100 and \\\n          k < 200 and k >= 100:\n\n            array[i,j,k] = 1.0\n\n# The triple loop above is equivalent to the much faster NumPy slice:\n# array[100:200, 100:200, 100:200] = 1.0\n

Now we come to the OpenVDB part, where we first need to make a grid. In this case we make a float grid (there are more grid types besides a float grid; for example, a BoolGrid and a Vec3SGrid are also available by default).

grid = vdb.FloatGrid()\n

We now copy the values in the array into the grid:

grid.copyFromArray(array)\n

The last important thing we need to do before we save it to file is to name the grid. You will use this name later when using the grid in Blender.

grid.name = \"cube\"\n

Finally, we save the grid to file:

vdb.write('cube.vdb', grids=[grid])\n
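
To summarize, here is the full code for making the VDB file (using a NumPy slice assignment, which is equivalent to, but much faster than, the triple loop above):

import numpy as np\nimport pyopenvdb as vdb\n\n# Zero-filled 400x400x400 volume with a unit-valued cube inside\ndimension = 400\narray = np.zeros((dimension, dimension, dimension))\narray[100:200, 100:200, 100:200] = 1.0\n\n# Copy the array into a named float grid and write it to file\ngrid = vdb.FloatGrid()\ngrid.copyFromArray(array)\ngrid.name = \"cube\"\n\nvdb.write('cube.vdb', grids=[grid])\n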
"},{"location":"advanced/python_scripting/4_volumetric_data/#loading-a-vdb-file-into-blender","title":"Loading a VDB file into Blender","text":"

Open a new Blender file and, if it's there, remove the starting cube. In the 3D viewport choose menu option Add > Volume > Import OpenVDB or use the shortcut Shift-A. Locate the cube.vdb file we just made through the script. You will most likely not see anything yet, so scale the cube down using the shortcut S until you can see the outline of the cube. Now if you change the Viewport Shading in the top right of the 3D viewport to Rendered (see Fig. 1, #1), you will not see anything besides the outline, since we still need to add a shader to the model.

Figure 1: definition of coordinates

Change to the Shading workspace (see Fig. 1, #2) and in the Shader Editor click on New to make a new material (see Fig. 1, #3). You will see that Blender creates a Principled Volume and a Material Output node. To make the cube appear we need to change one thing, and for this we need to know the name of the grid in the VDB file.

From the Python script we know this is cube, but you can also figure out the grids and their names in a VDB file from within Blender: in the Properties panel go to the Object Data Properties tab (see Fig. 1, #4). Here, under Grids, you can see the names of the grids in the VDB file. Now, in the Principled Volume node, enter the name of the grid (cube) in the field next to Density Attribute (see Fig. 1, #5). This tells the node to use the values in the grid for the scattering density of the voxels.

"},{"location":"advanced/python_scripting/4_volumetric_data/#coloring-the-cube","title":"\ud83d\udcbb Coloring the cube","text":"

Now make a cube similar to the one we just made, but color it blue on one side and red on the other (See Fig. 2). First alter the Python script to include a second grid in the VDB file. In this second grid set one side of the cube to value 1 and the other to zero. Use an Attribute node (do not forget to add the grid name to the Name: field in the attribute node) to feed the second grid into a ColorRamp node (and choose the colors you want). Now feed the ColorRamp into the Color field of the Principled Volume. Do not forget to set the original grid in the Density Attribute.

Does it come out right? Maybe you need to play a bit with the settings, like setting the Density to 1. You might also need to play with the lighting: if you still have the original light in your scene, try increasing its Power and changing its location. Also have a look at how it looks in Cycles compared to Eevee.

Figure 2: Colored cube"},{"location":"api/10000_foot_view/","title":"The 10,000 foot view","text":""},{"location":"api/10000_foot_view/#introduction","title":"Introduction","text":"

The Blender Python API mostly consists of a thin layer on top of the underlying Blender C/C++ data structures and methods. The underlying C/C++ code is used to automatically generate the Python API during the build process of the Blender executable, which means the API is always up-to-date with respect to the underlying code.

The user-facing Python API isn't the only part of Blender that uses Python. Large parts of the user interface, most import/export functionality and all add-ons are written in Python. It is therefore relatively easy to extend Blender with, say, new UI dialogs or a custom importer. This is one of the strengths of the Blender Python API.

Be careful

Since the API provides access to Blender internals at a very low level you can screw up the Blender state, causing unexpected behaviour, data corruption or even crashes. In the worst case you can end up with a file that will no longer load in Blender at all, although that's rare.

So when working with Python scripting, save your session to file often, preferably in a number of incremental versions, so you can recover or go a step back when needed.

In cases where you suspect Blender's current internal state has been corrupted you can save the current state to a temporary file, start a second instance of Blender (keeping the first Blender running!) and then open the temporary file in the second instance to help ensure you can start from a known-good state. This prevents you from saving a corrupt Blender state and overwriting your last known-good file.

Some things to be aware of:

  • Blender 3.1 embeds the Python 3.10 interpreter.
  • You can access the online API documentation from within Blender with Help > Python API Reference
  • Starting Blender from the console will allow you to see important output channels (warnings, exceptions, output of print() statements, etc).

The earlier chapter on the Python API provides a hands-on introduction, including basic information on how to execute Python scripts in Blender.

"},{"location":"api/10000_foot_view/#api-modules","title":"API modules","text":"

The Blender Python API is comprised of several modules, with bpy being the main one. But there are also useful routines in mathutils, bmesh and a few others.

Accessing API reference documentation

The API documentation on these modules can be easily accessed from within Blender using Help > Python API Reference.

By default none of the API modules, not even bpy, are loaded in the environment where a script file runs, so you need to import the ones you need explicitly.

The Python Console does import quite a few things by default and also sets some useful variables, like C to access bpy.context and D to access bpy.data with less typing:

PYTHON INTERACTIVE CONSOLE 3.9.4 (default, Apr 20 2021, 15:51:38)  [GCC 10.2.0]\n\nBuiltin Modules:       bpy, bpy.data, bpy.ops, bpy.props, bpy.types, bpy.context, \nbpy.utils, bgl, blf, mathutils\nConvenience Imports:   from mathutils import *; from math import *\nConvenience Variables: C = bpy.context, D = bpy.data\n\n>>> D.objects.values()\n[bpy.data.objects['Camera'], bpy.data.objects['Cube'], bpy.data.objects['Light']]\n
"},{"location":"api/10000_foot_view/#developer-settings","title":"Developer settings","text":"

When developing Python scripts in Blender it can be useful to enable a few extra settings:

  • The Python Tooltips under Interface > Display > Python Tooltips. When enabled a tooltip will show the corresponding Python command or a path to the data for a UI element.
  • The Developer Extras under Interface > Display > Developer Extras. When enabled this provides multiple things:
    • The 3D viewport overlay for a mesh in edit mode will now have an extra setting Indices to show the low-level indices of selected vertices/edges/faces. This can be very useful when debugging Python code that works on mesh geometry.
    • The right-click menu for a UI item, such as a button or menu entry, will now also contain an entry called Online Python Reference linking to the relevant Python documentation page.
    • It will enable Operator Search, which will add entries to the F3 search menu for operators. These will be listed after the regular menu entries in the search results.
    • It adds a new menu option Help > Operator Cheat Sheet that will create a new text area called OperatorList.txt, which contains all available operators (see Operators) and their default parameters. This list can give you a quick overview of the available operators, with the API documentation providing all the details.
"},{"location":"api/10000_foot_view/#info-area","title":"Info area","text":"

As mentioned in the video in the introductory chapter the Info area can be useful if you want to inspect which Python calls Blender performs for certain operations. This certainly will not provide all the details in all cases, but can give some insight. You can either switch to the default Scripting workspace (using the tabs at the top of the window) to check the output, or use the normal UI area operations to add/change an area to an Info area. The latter is shown below:

"},{"location":"api/10000_foot_view/#sources-of-examples","title":"Sources of examples","text":"

This chapter provides small snippets of code and serves mostly as a reference. Sometimes it can be useful to get more information or examples of how specific parts of the Blender Python API are used. Some good sources for other code are:

  • The add-ons included with Blender show many uses of the Python API. They can be found in the directory <blender-version>/scripts/addons in the Blender distribution directory.
  • A number of script templates are also included, in <blender-version>/scripts/templates_py, mostly examples of defining custom operators or UI elements.
"},{"location":"api/10000_foot_view/#data-blocks","title":"Data-blocks","text":"

The different types of data in Blender are stored in data-blocks. For example, there's Mesh, Object, Texture and Shader data-blocks, but there's quite a few more. One of the clever bits in the way Blender is programmed is that data-blocks written to file contain enough information about their content (i.e. metadata) to make them readable by both older and newer versions of Blender than the one they were written with. This metadata system also makes it possible to automatically provide the Python API for accessing those data-blocks without much manual work from Blender's developers.

Data-blocks are available through Python, per type, under bpy.data. For example, there's bpy.data.objects and bpy.data.meshes. The type of a data-block is the corresponding class under bpy.types:

>>> type(bpy.data.objects['Cube'])\n<class 'bpy_types.Object'>\n\n>>> bpy.types.Object\n<class 'bpy_types.Object'>\n

Each type of data-block has its own set of attributes and methods, particular to that type. Learning the Blender Python API involves getting to know the details of the data-block types you want to work with and how they interact.

Automatic data-block garbage collection

Blender keeps track of which data-blocks are no longer being referenced to decide when a data-block does not need to be saved (so-called garbage collection). Usually you don't need to explicitly interact with this system, but it is good to be aware that it is there, see this section for more details.

"},{"location":"api/10000_foot_view/#unique-data-block-names","title":"Unique data-block names","text":"

Per type of data all the data-blocks need to have a unique name. This is enforced automatically by Blender when a data-block is created by appending a number to make the name unique. For example:

>>> bpy.data.meshes.new('my object')\nbpy.data.meshes['my object']\n\n>>> bpy.data.meshes.new('my object')\nbpy.data.meshes['my object.001']\n\n>>> bpy.data.meshes.new('my object')\nbpy.data.meshes['my object.002']\n

This usually isn't an issue, but it is something to be aware of when referencing objects by name, as the name of a data-block you created might sometimes be different from what you expect.

"},{"location":"api/10000_foot_view/#objects-and-object-data","title":"Objects and object data","text":"

When we use the word \"Object\" in these pages we mean one of the object types that can be present in a 3D scene, e.g. camera, mesh or light. Such objects are of type bpy.types.Object and all have general properties related to their presence in the 3D scene. For example, their name, 3D transformation, visibility flags, parent, etc.

But a Light object needs to specify different properties than, say, a Camera object and these per-type properties are stored as \"object data\". The object data can be accessed through the data attribute of an Object:

# Both lights and cameras are Objects\n>>> type(bpy.data.objects['Light'])\n<class 'bpy_types.Object'>\n\n>>> type(bpy.data.objects['Camera'])\n<class 'bpy_types.Object'>\n\n# But their object data are of a different type\n>>> type(bpy.data.objects['Camera'].data)\n<class 'bpy.types.Camera'>\n\n>>> type(bpy.data.objects['Light'].data)\n<class 'bpy.types.PointLight'>\n\n# And have different attributes, relevant to that type\n>>> dir(bpy.data.objects['Camera'].data)\n[..., 'angle', ..., 'clip_start', ..., 'dof', ...]\n\n>>> dir(bpy.data.objects['Light'].data)\n[..., 'color', ..., 'distance', 'energy', ..., 'falloff_type', ...]\n
"},{"location":"api/10000_foot_view/#objects-of-a-specific-type","title":"Objects of a specific type","text":"

Sometimes you want to iterate over all objects in a scene, but only perform some operation on a specific type of object. You can use the type attribute for checking an object's type:

>>> bpy.data.objects['Camera'].type\n'CAMERA'\n\n>>> bpy.data.objects['Light'].type\n'LIGHT'\n\n>>> for obj in bpy.data.objects:\n    if obj.type == 'MESH':\n        # Do something\n
"},{"location":"api/10000_foot_view/#native-blender-data-structures","title":"Native Blender data structures","text":"

When working with the Python API you will frequently use internal Blender types that appear similar to regular Python types, like lists and dictionaries. However, the Blender types are not real native Python types and behave differently in certain aspects.

For example, the different collections of scene elements (such as objects or meshes) that are available under bpy.data are of type bpy_prop_collection. This type is a combination of a Python list and a dictionary, sometimes called an ordered dictionary, as it allows indexing by both array position and key:

>>> type(bpy.data.objects)\n<class 'bpy_prop_collection'>\n\n# Some of its methods match those of native Python data types\n>>> dir(bpy.data.objects)\n['__bool__', '__contains__', '__delattr__', '__delitem__', '__doc__', '__doc__', \n'__getattribute__', '__getitem__', '__iter__', '__len__', '__module__', \n'__setattr__', '__setitem__', '__slots__', 'bl_rna', 'find', 'foreach_get', \n'foreach_set', 'get', 'items', 'keys', 'new', 'remove', 'rna_type', 'tag', \n'values']\n\n# Index by position\n>>> bpy.data.objects[0]\nbpy.data.objects['Camera']\n\n# Index by key\n>>> bpy.data.objects['Camera']\nbpy.data.objects['Camera']\n\n# (key, value) pairs\n>>> bpy.data.objects.items()\n[('Camera', bpy.data.objects['Camera']), ('Cube', bpy.data.objects['Cube']), \n('Light', bpy.data.objects['Light'])]\n

Note that the position of an item in the collection, and hence its index, can change during a Blender session.

"},{"location":"api/10000_foot_view/#inspecting-values","title":"Inspecting values","text":"

One of the more annoying aspects of working in the Blender Python Console is that, when inspecting these kinds of values, the elements in a bpy_prop_collection (or other Blender types) aren't printed by default, in contrast to a regular Python dictionary. You need to, for example, cast to a list or call its values() method:

# Regular Python dict, prints both keys and values\n>>> d = dict(a=1, b=2, c=3)\n>>> d\n{'a': 1, 'b': 2, 'c': 3}\n\n# No items printed\n>>> bpy.data.objects\n<bpy_collection[3], BlendDataObjects>\n\n# values() returns a list, so gets printed in detail\n>>> type(bpy.data.objects.values())\n<class 'list'>\n\n>>> bpy.data.objects.values()           \n[bpy.data.objects['Camera'], bpy.data.objects['Cube'], bpy.data.objects['Light']]\n\n# Difference in list() result:\n>>> list(d)\n['a', 'b', 'c']\n# Returns dict *keys*\n\n>>> list(bpy.data.objects)\n[bpy.data.objects['Camera'], bpy.data.objects['Cube'], bpy.data.objects['Light']]\n# Returns collection *values*\n

The (most likely) reason for not printing the values inside a bpy_prop_collection is that in many cases the collection will contain large numbers of objects, so printing them all would not be very useful, or might even make the UI unresponsive for a short time.

"},{"location":"api/10000_foot_view/#data-organization","title":"Data organization","text":"

Blender sometimes uses a more elaborate data structure in cases where you might expect low-level values, like lists. For example, the set of vertices that make up a mesh is only accessible as a collection of MeshVertex objects:

>>> m\nbpy.data.meshes['Cube']\n\n>>> type(m.vertices)\n<class 'bpy_prop_collection'>\n\n>>> len(m.vertices)\n8\n\n>>> m.vertices[0]\nbpy.data.meshes['Cube'].vertices[0]\n\n>>> type(m.vertices[0])\n<class 'bpy.types.MeshVertex'>\n\n>>> dir(m.vertices[0])\n['__doc__', '__module__', '__slots__', 'bevel_weight', 'bl_rna', 'co', 'groups', \n'hide', 'index', 'normal', 'rna_type', 'select', 'undeformed_co']\n\n# Vertex coordinate (object space)\n>>> m.vertices[0].co\nVector((1.0, 1.0, 1.0))\n\n# Vertex normal\n>>> m.vertices[0].normal\nVector((0.5773491859436035, 0.5773491859436035, 0.5773491859436035))\n

The reason for this is that there are several types of data associated with a single vertex, which are all centralized in a MeshVertex object. In short, Blender uses a so-called array-of-structs design. The alternative design choice would have been to have separate arrays for vertex coordinates, vertex normals, etc (a struct-of-arrays design).

"},{"location":"api/10000_foot_view/#vertices-and-matrices","title":"Vertices and matrices","text":"

The example above also shows that even a vertex coordinate is not accessed as a low-level Python data type, like a tuple, but through the Vector type (which is in the mathutils module). This has the advantage of providing many useful methods for operating on vector values:

>>> v = m.vertices[0].normal\n>>> v\nVector((0.5773491859436035, 0.5773491859436035, 0.5773491859436035))\n\n>>> v.length\n0.999998137353116\n\n# Return a new vector that's orthogonal \n>>> w = v.orthogonal()\n>>> w\nVector((0.5773491859436035, 0.5773491859436035, -1.154698371887207))\n\n# Dot product (should be zero as v and w are orthogonal)\n>>> v.dot(w)\n0.0\n\n# Note: v*w is element-wise product, not dot product!\n>>> v*w\nVector((0.3333320915699005, 0.3333320915699005, -0.666664183139801))\n\n# Cross product between two vectors\n>>> v.cross(w)\nVector((-0.9999963045120239, 0.9999963045120239, 0.0))\n\n# Swizzling (returning vector elements in a different order)\n>>> w\nVector((0.5773491859436035, 0.5773491859436035, -1.154698371887207))\n\n>>> w.zxy\nVector((-1.154698371887207, 0.5773491859436035, 0.5773491859436035))\n

The builtin mathutils module contains many useful data types and methods for working with 3D data, including vectors and matrices, but also different methods for working with transformations (like quaternions) and color spaces.

# Transformation matrix for an object with uniform scale 2 and \n# translation in Z of 3. These values will match with the Transform UI area\n>>> o\nbpy.data.objects['Cube']\n\n>>> o.matrix_world\nMatrix(((2.0, 0.0, 0.0, 0.0),\n        (0.0, 2.0, 0.0, 0.0),\n        (0.0, 0.0, 2.0, 3.0),\n        (0.0, 0.0, 0.0, 1.0)))\n\n# Create a rotation matrix\n>>> m = Matrix.Rotation(radians(90.0), 4, 'X')\n>>> m\nMatrix(((1.0, 0.0, 0.0, 0.0),\n        (0.0, 7.549790126404332e-08, -1.0, 0.0),\n        (0.0, 1.0, 7.549790126404332e-08, 0.0),\n        (0.0, 0.0, 0.0, 1.0)))\n\n>>> v = Vector((1,2,3))\n\n# Transform the vector using the matrix. Note the different outcomes \n# depending on the multiplication order.\n>>> m @ v\nVector((1.0, -2.999999761581421, 2.000000238418579))\n\n>>> v @ m\nVector((1.0, 3.000000238418579, -1.999999761581421))\n\n# Also, a 3-vector is assumed to have a fourth element equal to *one* when \n# multiplying with a matrix:\n>>> m = Matrix.Translation((4, 5, 6))\n>>> m\nMatrix(((1.0, 0.0, 0.0, 4.0),\n        (0.0, 1.0, 0.0, 5.0),\n        (0.0, 0.0, 1.0, 6.0),\n        (0.0, 0.0, 0.0, 1.0)))\n\n>>> m @ Vector((1, 2, 3))\nVector((5.0, 7.0, 9.0))\n\n>>> m @ Vector((1, 2, 3, 0))\nVector((1.0, 2.0, 3.0, 0.0))\n
"},{"location":"api/10000_foot_view/#api-quirks","title":"API quirks","text":"

Working with the Blender Python API has some peculiarities compared to your average Python scripting. These have to do with the way the API is structured, but also how it interacts with the Blender internals. The API manual contains a lengthy page on some gotchas, but here we list some of the common ones.

"},{"location":"api/10000_foot_view/#object-modes","title":"Object modes","text":"

An object is always in one of several modes. These modes are the same ones you work with in the UI: Object mode, Edit mode, etc. The current mode for an object can be retrieved through the mode property:

>>> o = bpy.data.objects['Cube']\n>>> o.mode\n'OBJECT'\n\n# <enter edit mode with TAB>\n\n>>> o.mode\n'EDIT'\n

Depending on the current mode of a mesh object certain data might not be up-to-date, or even unavailable, when accessing it through the Python API. This is especially true when an object is in Edit Mode.

This is because edit mode uses its own copy of the data for you to edit, which is synced with the underlying mesh data when going in and out of edit mode. See here for the relevant section in the Blender API docs.

An example continuing with the Cube mesh above:

>>> o.mode\n'OBJECT'\n\n>>> m = o.data\n>>> m\nbpy.data.meshes['Cube']\n\n# Check UV map data\n>>> len(m.uv_layers[0].data)\n24\n\n# <enter edit mode with TAB>\n\n>>> o.mode\n'EDIT'\n\n# UV map data now empty...\n>>> len(m.uv_layers[0].data)\n0\n

In most cases, when working on low-level data such as mesh geometry, you want the object to be in object mode (or use the bmesh module when you need the object to be in edit mode). It's usually a good idea to add a check at the top of your script to verify that the current mode is what you expect:

o = bpy.context.active_object\nif o.mode != 'OBJECT':\n    raise ValueError('Active object needs to be in object mode!')\n

There are alternatives that still allow a mesh to be in edit mode while accessing its data from a script; see the API docs for details.

"},{"location":"api/10000_foot_view/#interrupting-long-running-scripts","title":"Interrupting (long-running) scripts","text":"

During script development you might get in a situation where your code is stuck in a loop, or takes much longer than you like. Interrupting a running script can usually be done by pressing Ctrl-C in the terminal console window:

>>> while True:\n...     pass\n...     \n\n# Uh oh, execution stuck in a loop and the Blender UI will now have become unresponsive\n\n# Pressing Ctrl-C in the terminal console window interrupts script execution,\n# as it raises a KeyboardInterrupt\n\nTraceback (most recent call last):\n  File \"<blender_console>\", line 2, in <module>\nKeyboardInterrupt\n
"},{"location":"api/10000_foot_view/#interaction-with-the-undo-system","title":"Interaction with the Undo system","text":"

In some cases when you undo an operation, Blender might re-create certain data instead of going back to a stored version still in memory. This might cause existing references to the original data to become invalid, which can be especially noticeable when working interactively in the Python Console.

For example, with a cube object as active object in the 3D viewport:

# The Cube is the active object\n>>> bpy.context.active_object\nbpy.data.objects['Cube']\n\n# Save a reference to it\n>>> o = bpy.context.active_object\n\n# <Grab the object in the 3D viewport and move it somewhere else>\n\n# Object reference still valid\n>>> o\nbpy.data.objects['Cube']\n\n# <Undo the object translation in the 3D viewport>\n\n# Uh oh, object reference has now become invalid\n>>> o\n<bpy_struct, Object invalid>\n\n# Reason: object referenced under name 'Cube' has changed\n>>> bpy.data.objects['Cube'] == o\nFalse\n\n>>> id(o)\n140543077302976\n\n>>> id(bpy.data.objects['Cube'])\n140543077308608\n\n# Will need to reacquire the active object, or consistently use bpy.data.objects['Cube'] \n>>> o = bpy.context.active_object\n>>> o\nbpy.data.objects['Cube']\n
"},{"location":"api/bpy_data_and_friends/","title":"A note on bpy.data, bpy.data.objects, ...","text":"

We have been using bpy.data.objects in most examples above to access objects in the scene. This is actually not completely accurate, as bpy.data.objects holds all objects in the Blender file. Usually the distinction doesn't matter, as you only have one scene, but a Blender file can hold multiple scenes, each with their own set of objects:

# A file with two scenes, each with their own set of objects\n>>> bpy.data.scenes.values()\n[bpy.data.scenes['Scene'], bpy.data.scenes['Scene.001']]\n\n# Current scene\n>>> bpy.context.scene\nbpy.data.scenes['Scene']\n\n# And its objects\n>>> bpy.context.scene.objects.values()\n[bpy.data.objects['Bottom cube'], bpy.data.objects['Top Cube']]\n\n# <Select different scene>\n\n# Different current scene\n>>> bpy.context.scene\nbpy.data.scenes['Scene.001']\n\n# And its objects\n>>> bpy.context.scene.objects.values()\n[bpy.data.objects['Bottom cube.001'], bpy.data.objects['Top Cube.001']]\n\n# All objects in the file\n>>> bpy.data.objects.values()\n[bpy.data.objects['Bottom cube'], bpy.data.objects['Bottom cube.001'], \nbpy.data.objects['Top Cube'], bpy.data.objects['Top Cube.001']]\n

Although objects can also be shared between scenes:

# Two scenes\n>>> bpy.data.scenes.values()\n[bpy.data.scenes['Scene'], bpy.data.scenes['Scene.001']]\n\n# First scene, cubes are local to scene, torus is shared between scenes\n>>> bpy.context.scene\nbpy.data.scenes['Scene']\n\n>>> bpy.context.scene.objects.values()\n[bpy.data.objects['Torus'], bpy.data.objects['Bottom cube'], \nbpy.data.objects['Top Cube']]\n\n# Second scene, different cubes, torus is shared\n>>> bpy.context.scene\nbpy.data.scenes['Scene.001']\n\n>>> bpy.context.scene.objects.values()\n[bpy.data.objects['Bottom cube.001'], bpy.data.objects['Top Cube.001'], \nbpy.data.objects['Torus']]\n

The point here is that bpy.data.objects, and every other attribute under bpy.data, holds values of the complete Blender file. Per-scene values are available through attributes of a Scene object, e.g. bpy.context.scene.objects. For certain use cases this distinction matters.

"},{"location":"api/custom_properties/","title":"Custom properties","text":"

Sometimes it can be useful to control certain values that you use in a script from the UI. The most flexible, but also most complex, approach would be to write an add-on. This allows full control over UI elements, but can be quite a bit of work to create.

However, in quite a few cases there's a simpler alternative if all you need to control are simple Python values, like an int, float, string or list. From Python you can set custom properties on pretty much any Blender Python data block (see here for more details) and then access those values from the UI:

>>> o\nbpy.data.objects['Cube']\n\n>>> o['My prop'] = 123.4\n>>> o['My 2nd prop'] = (1, 1, 0.5)\n

This works, of course, both ways: adding or editing a value from the UI will update the value(s) available through Python. You can then use these values in a script, for example to control a number of objects to create, set a 3D coordinate, etc. See here for more details and examples.
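
As a small example of reading such values back from a script (continuing the console session above):

# Read a custom property back\n>>> o['My prop']\n123.4\n\n# Like a dict, get() allows a default for a missing property\n>>> o.get('My missing prop', 0.0)\n0.0\n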

"},{"location":"api/data_block_users_and_gc/","title":"Data-block users and garbage collection","text":"

Blender uses a system based on reference-counting to decide when data-blocks have become unused and can get purged. In the short video below we show some of the details of this scheme:

The video shows the Orphan Data outliner mode, but there are several modes that can be used to get detailed insight into the current state of Blender internals:

  • The Blender File mode gives a high-level overview of a file's contents, including some of the more implicit data block types, such as Workspaces.
  • The Data API mode provides an even more detailed view. It is actually a great way to inspect all the gory details of Blender's internal data structures. It will show all data-blocks by type and their attributes. Some attributes can even be edited in this outliner mode.
  • The Orphan Data mode shows data blocks that do not have any users and which will not be saved (unless they are marked to have a fake user). Some of the data-blocks you see here might not have been created by you, but are used by Blender internally, for example the Brushes.

Although the video only focused on materials, the way data-block lifetime is managed using the user counts is general to all types of data-blocks in Blender. But there are subtle differences in whether a data-block is really deleted or just has a link to it removed:

  • Whenever the term \"unlink\" is used it means that a link to that data-block is removed and its user count decreased, but the data-block itself will still be in memory. An example of this is clicking the X next to a mesh's material in the Material Properties.
  • If the UI uses the term \"delete\" it means the data-block is deleted immediately from memory. Any data-blocks linked from the deleted data-block will have their user count decreased. An example of this is deleting a Camera object in the 3D view: the Camera object's data-block is deleted from memory, but the Camera object data data-block (containing the actual camera settings) is still in memory, which you can check in the Orphan Data mode of the outliner.

The usage count of data-blocks can also be queried from Python:

# Two cube meshes using the same material\n>>> bpy.context.scene.objects.values()\n[bpy.data.objects['Cube'], bpy.data.objects['Cube.001']]\n\n>>> bpy.data.materials['Material'].users\n2\n\n# Add a new material, set one of the cubes to use it\n>>> bpy.data.materials['Material'].users\n1\n\n>>> bpy.data.materials['Material.001'].users\n1\n\n# <Delete Cube.001 object in the UI>\n\n# Hmmm, still has a user?\n>>> bpy.data.materials['Material.001'].users\n1\n\n# The reason is we deleted the Cube.001 *object*, but\n# the Cube.001 *mesh* is still alive (as its usage count\n# was merely decremented) and it still references the material\n>>> bpy.data.objects['Cube.001']\nTraceback (most recent call last):\n  File \"<blender_console>\", line 1, in <module>\nKeyError: 'bpy_prop_collection[key]: key \"Cube.001\" not found'\n\n>>> bpy.data.meshes['Cube.001']\nbpy.data.meshes['Cube.001']\n\n>>> bpy.data.meshes['Cube.001'].users\n0\n\n>>> bpy.data.meshes['Cube.001'].materials.values()\n[bpy.data.materials['Material']]\n

The use_fake_user attribute of a data block controls whether a Fake user is set, similar to the checkbox in the UI.
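
For example, a minimal sketch of protecting a material from being purged (assuming a material Material.001 that has no real users left):

# Give the material a fake user, so it survives saving and reloading\n>>> bpy.data.materials['Material.001'].use_fake_user = True\n\n# The fake user counts towards the user count\n>>> bpy.data.materials['Material.001'].users\n1\n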

Warning

In most cases you probably don't want to manually delete data-blocks from a file, but instead use the normal UI operations for that. It is possible, though, for cases that need it. Truly purging a data block from Python can be done with the relevant remove() method, e.g.

>>> bpy.context.scene.objects.values()\n[bpy.data.objects['Cube']]\n\n>>> o = bpy.context.active_object\n>>> o\nbpy.data.objects['Cube']\n\n>>> m = o.data\n>>> m\nbpy.data.meshes['Cube']\n\n# Remove the Mesh data-block from the file\n>>> bpy.data.meshes.remove(m)\n>>> bpy.data.meshes.values()\n[]\n\n>>> bpy.data.objects.values()\n[]\n

Note that in the case of deleting object data (in this case a Mesh) any Objects referencing that object data also get removed!

A second thing to note is the above code does not actually update the current Blender file on disk. That only happens on an explicit save action (e.g. through the File menu or using the relevant operator from Python).

"},{"location":"api/materials/","title":"Materials","text":"

As shown in one of the introductory exercises for the Python API it is possible to use Python to create a node-based shader. In most cases using the node-based editor in the UI is the preferred option due to its interactivity, but for certain cases it can be interesting to use Python.

The general workflow for this is to create the necessary shader nodes, connect them through links as needed and then set the material on the relevant mesh.

# Create a new material\nmat = bpy.data.materials.new(\"my material\")\n\n# Enable shader nodes on the material\nmat.use_nodes = True\n\n# Remove the default nodes\nnodes = mat.node_tree.nodes\nnodes.clear()\n\n# Add a Principled BSDF shader node and set its base color\nshader = nodes.new(type='ShaderNodeBsdfPrincipled')\nshader.location = 0, 300\nshader.inputs['Base Color'].default_value = (1,0,0,1)\n\n# Add a Material Output node\nnode_output = nodes.new(type='ShaderNodeOutputMaterial')\nnode_output.location = 400, 300\n\n# Add a link between the nodes (links live on the node tree,\n# not on the node collection)\nlinks = mat.node_tree.links\nlinks.new(shader.outputs['BSDF'], node_output.inputs['Surface'])\n\n# Add the material to the material slots of 'mesh' (an existing Mesh)\nmesh.materials.append(mat)\n

A node's inputs and outputs can be referenced by name. This can then be used to set values on inputs, or connect outputs to inputs, as shown. For example, for the Principled BSDF node above:

>>> shader.inputs.keys()\n['Base Color', 'Subsurface', 'Subsurface Radius', 'Subsurface Color', 'Metallic', \n'Specular', 'Specular Tint', 'Roughness', 'Anisotropic', 'Anisotropic Rotation', \n'Sheen', 'Sheen Tint', 'Clearcoat', 'Clearcoat Roughness', 'IOR', 'Transmission', \n'Transmission Roughness', 'Emission', 'Emission Strength', 'Alpha', 'Normal', \n'Clearcoat Normal', 'Tangent']\n\n>>> shader.outputs.keys()\n['BSDF']\n

The location attributes set above are not strictly needed if you're not going to work on the shader network in the Shader Editor in the UI. But they help to make the node network layout somewhat visually pleasing.

"},{"location":"api/materials/#material-slots","title":"Material slots","text":"

The last line in the Python code above adds the created material to the mesh's material slots. An object can have multiple materials assigned to it and each assigned material uses a so-called material slot. Each polygon in a mesh can only use a single material, by specifying the material index (i.e. slot) to use for that polygon. This allows different parts of a mesh to use different shaders.

By default all faces in a mesh will reference material slot 0. But here's an example of a cube mesh that uses 3 different materials:

Inspecting the underlying material data:

# Get the mesh, as the material is linked to the mesh by default\n>>> o = bpy.data.objects['Cube']\n>>> m = o.data\n\n# The material slots used\n>>> list(m.materials)\n[bpy.data.materials['red'], bpy.data.materials['black-white checkered'], \nbpy.data.materials['voronoi']]\n\n# Polygon -> slot index\n>>> m.polygons[0].material_index\n2\n>>> m.polygons[1].material_index\n0\n>>> m.polygons[2].material_index\n0\n>>> m.polygons[3].material_index\n0\n>>> m.polygons[4].material_index\n1\n>>> m.polygons[5].material_index\n0\n

Material indices can be set per polygon, or set as an array in one go:

# Material slot index for a single polygon \nm.polygons[0].material_index = 0\n\n# Set all polygon material indices\nface_materials = [0, 1, 2, 2, 1, 0]\nm.polygons.foreach_set('material_index', face_materials)\n# Force an update of the mesh, needed in this case\nm.update()\n
"},{"location":"api/meshes/","title":"Meshes","text":"

One of the more common scene data types to work with from Python are 3D meshes. Meshes in Blender can contain polygons of an arbitrary number of vertices (so-called N-gons), can contain wire edges and support extra layers of data, such as vertex colors and UV coordinates.

We go into a fair amount of detail on how to create and access mesh data, in several ways. As usual, the Blender API docs on the Mesh type contain many more details, but we feel the discussion below is a good summary to get you started for many use cases.

"},{"location":"api/meshes/#creating-a-mesh-high-level","title":"Creating a mesh (high-level)","text":"

As shown earlier, the Mesh.from_pydata(vertices, edges, faces) method offers a simple and high-level way of creating a mesh. This method doesn't provide full control over the created mesh and isn't very fast for large meshes, but it can be good enough in a lot of cases.

It takes three lists of values, or actually, any Python iterable that matches the expected form:

  • vertices: a sequence of float triples, e.g. [(1.0, 2.0, 3.0), (4, 5, 6), ...]
  • edges: a sequence of integer pairs (vertex indices) that define edges. If [] is passed then edges are inferred from the polygons
  • faces: a sequence of one or more polygons, each defined as a sequence of 3 or more vertex indices. E.g. [(0, 1, 2), (1, 2, 3, 4), ...]

Info

The choice of how the mesh data is passed might incur an overhead in memory usage and processing time, especially when regular Python data structures, like lists, are used. An alternative would be to pass NumPy arrays.
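
As a minimal sketch of the NumPy route: from_pydata() iterates over the sequences it is given, so 2D arrays of the right shape work as well.

import bpy\nimport numpy\n\n# An (N, 3) array of vertex positions and an (M, 3) array of\n# triangle vertex indices; from_pydata() iterates over the rows\nvertices = numpy.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=numpy.float32)\ntriangles = numpy.array([[0, 1, 2], [0, 2, 3]], dtype=numpy.int32)\n\nm = bpy.data.meshes.new(name='numpy mesh')\nm.from_pydata(vertices, [], triangles)\n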

For the examples below we assume that no explicit list of edges is passed. Edges will then be created implicitly based on the polygons specified, which is usually what is preferred. We discuss explicitly specifying edges below.

An example of creating a simple mesh:

# Create a mesh consisting of 3 polygons using 6 vertices\n\nvertices = [\n    (0, 0, 0),      (2,  0,  0),    (2,  2,  0.2),    \n    (0,  2,  0.2),  (1, 3, 1),      (1, -1, -1),    \n]\n\npolygons = [\n    (0, 1, 2, 3),   # Quad\n    (4, 3, 2),      # Triangle\n    (0, 5, 1)       # Triangle\n]\n\nm = bpy.data.meshes.new(name='my mesh')\nm.from_pydata(vertices, [], polygons)\n

At this point we have created a new Mesh object, which corresponds to Object Data of type Mesh. Object Data cannot be directly added to a scene, but needs to be referenced by a 3D Object:

# Create an Object referencing the Mesh data\no = bpy.data.objects.new(name='my mesh', object_data=m)\n\n# Add the Object to the scene\nbpy.context.scene.collection.objects.link(o)\n

The resulting mesh and outliner entry looks like this:

"},{"location":"api/meshes/#careful-invalid-data","title":"Careful: invalid data","text":"

Note that it is possible to set up a mesh with invalid/inconsistent data when setting the underlying arrays manually, as is the case here. This can cause weird behaviour or even crashes.

For example:

# 3 vertices\nvertices = [ (0, 0, 0), (1,  1, 1), (-1, 2, -1) ]\n\n# Invalid vertex index 3 used!\npolygons = [ (0, 1, 2, 3) ]   \n\nm = bpy.data.meshes.new(name='my invalid mesh')\nm.from_pydata(vertices, [], polygons)\n\no = bpy.data.objects.new(name='my invalid mesh', object_data=m)\nbpy.context.scene.collection.objects.link(o)\n

When executing the above code a new mesh is added to the scene, but it will show as a triangle in the 3D viewport, instead of a quad. And even though that doesn't appear to be unreasonable behaviour in this case, Blender will crash if we subsequently enter edit mode on the mesh!

So the lesson here is to be careful when specifying geometry using these low-level API calls. This actually applies to all parts of the Blender Python API in general.

In this case, to make sure a created mesh has valid data we can use the validate() method on a Mesh. This will check the mesh data and remove any invalid values, e.g. by deleting the polygon using non-existent vertex index 3 above. This might not result in a mesh that matches what you want based on the data, but at least you can detect this situation and handle it without Blender crashing.

The validate() method has two issues to be aware of:

  • The method returns True in case the mesh does not validate, i.e. when it has issues. More specifically, it returns True when changes were made to the mesh data to remove invalid values.
  • It will only report on the specific issues found when called with validate(verbose=True) and then will only output to the console.

But it is still a good idea to always validate a mesh when creating it manually:

...\nm = bpy.data.meshes.new(name='my invalid mesh')\nm.from_pydata(vertices, [], polygons)\n\nif m.validate(verbose=True):\n    print('Mesh had issues and has been altered! See console output for details')\n

In the example of the invalid mesh data above this results in these messages being printed in the console output:

ERROR (bke.mesh): ../source/blender/blenkernel/intern/mesh_validate.c:351 BKE_mesh_validate_arrays:     Edge 0: v2 index out of range, 3\nERROR (bke.mesh): ../source/blender/blenkernel/intern/mesh_validate.c:351 BKE_mesh_validate_arrays:     Edge 3: v2 index out of range, 3\nERROR (bke.mesh): ../source/blender/blenkernel/intern/mesh_validate.c:605 BKE_mesh_validate_arrays:     Loop 3 has invalid vert reference (3)\nERROR (bke.mesh): ../source/blender/blenkernel/intern/mesh_validate.c:782 BKE_mesh_validate_arrays:     Loop 0 is unused.\nERROR (bke.mesh): ../source/blender/blenkernel/intern/mesh_validate.c:782 BKE_mesh_validate_arrays:     Loop 1 is unused.\nERROR (bke.mesh): ../source/blender/blenkernel/intern/mesh_validate.c:782 BKE_mesh_validate_arrays:     Loop 2 is unused.\nERROR (bke.mesh): ../source/blender/blenkernel/intern/mesh_validate.c:782 BKE_mesh_validate_arrays:     Loop 3 is unused.\n

After validate() returns we can see in this case that invalid data was indeed removed:

>>> vertices = [ (0, 0, 0), (1,  1, 1), (-1, 2, -1) ]\n>>> polygons = [ (0, 1, 2, 3) ]   \n>>> m = bpy.data.meshes.new(name='my invalid mesh')\n>>> m.from_pydata(vertices, [], polygons)\n\n>>> len(m.polygons)\n1\n>>> len(m.edges)\n4\n>>> len(m.vertices)\n3\n\n>>> m.validate()\nTrue\n\n>>> len(m.polygons)\n0\n>>> len(m.edges)\n2\n>>> len(m.vertices)\n3\n

"},{"location":"api/meshes/#creating-a-mesh-low-level","title":"Creating a mesh (low-level)","text":"

A second, and more flexible, way of creating a mesh is using low-level calls for setting the necessary data arrays directly on a Mesh object. This is especially useful in combination with NumPy arrays, as this allows the creation of large meshes with relatively high performance and low memory overhead.

Meshes in Blender are stored using 4 arrays, as attributes of the bpy.types.Mesh type:

  • vertices: vertex locations, each specified by 3 floats
  • loops: contains the vertex indices used for defining polygons of a mesh, each polygon as a sequence of indices in the vertices array
  • polygons: defines the start index of each polygon as an index in loops, plus the length of each polygon in number of vertices
  • edges: defines the edges of the mesh, using two vertex indices per edge

So to create a mesh at this level we need to set up the necessary values for these arrays. Here, we create the same mesh as in the previous section, using NumPy arrays for storing the data.

import bpy\nimport numpy\n\n# Vertices (8): x1 y1 z1 x2 y2 z2 ...\nvertices = numpy.array([\n    0, 0, 0,    2,  0,  0,    2,  2,  0.2,    0,  2,  0.2,\n    1, 3, 1,    1, -1, -1,    0, -2, -1,      2, -2, -1\n], dtype=numpy.float32)\n\n#\n# Polygons, defined in loops\n#\n\n# List of vertex indices of all loops combined\nvertex_index = numpy.array([\n    0, 1, 2, 3,                             # Quad\n    4, 3, 2,                                # Triangle\n    0, 5, 1                                 # Triangle\n], dtype=numpy.int32)\n\n# For each polygon the start of its indices in vertex_index\nloop_start = numpy.array([\n    0, 4, 7\n], dtype=numpy.int32)\n\n# Length of each polygon in number of vertices\nloop_total = numpy.array([\n    4, 3, 3\n], dtype=numpy.int32)\n

We additionally specify texture coordinates and vertex colors. This is something that is not possible with the high-level from_pydata() API shown above. Note that we need to specify these values per vertex per polygon loop.

# Texture coordinates per vertex per polygon loop\nuv_coordinates = numpy.array([\n    0,   0,    1, 0,      1, 1,    0, 1,    # Quad   \n    0.5, 1,    0, 0,      1, 0,             # Triangle\n    0,   1,    0.5, 0,    1, 1              # Triangle\n], dtype=numpy.float32)\n\n# Vertex color (RGBA) per vertex per polygon loop\nvertex_colors = numpy.array([\n    1, 0, 0, 1,   1, 0, 0, 1,   1, 0, 0, 1,   1, 0, 0, 1,\n    0, 1, 0, 1,   0, 1, 0, 1,   0, 1, 0, 1,\n    1, 0, 0, 1,   0, 1, 0, 1,   0, 0, 1, 1,\n], dtype=numpy.float32)\n

Next, we create a new mesh using the above arrays:

num_vertices = vertices.shape[0] // 3\nnum_vertex_indices = vertex_index.shape[0]\nnum_loops = loop_start.shape[0]\n\nm = bpy.data.meshes.new(name='my detailed mesh')\n\n# Vertices\nm.vertices.add(num_vertices)\nm.vertices.foreach_set('co', vertices)\n\n# Polygons\nm.loops.add(num_vertex_indices)\nm.loops.foreach_set('vertex_index', vertex_index)\n\nm.polygons.add(num_loops)\nm.polygons.foreach_set('loop_start', loop_start)\nm.polygons.foreach_set('loop_total', loop_total)\n\n# Create UV coordinate layer and set values\nuv_layer = m.uv_layers.new(name='default')\nuv_layer.data.foreach_set('uv', uv_coordinates)\n\n# Create vertex color layer and set values (note the 'FLOAT_COLOR' type)\nvcol_layer = m.color_attributes.new(name='vcol', type='FLOAT_COLOR', domain='CORNER')\nvcol_layer.data.foreach_set('color', vertex_colors)\n\n# Done, update mesh object\nm.update()\n\n# Validate mesh\nif m.validate(verbose=True):\n    print('Mesh data did not validate!')\n\n# Create an object referencing the mesh data\no = bpy.data.objects.new(name='my detailed mesh', object_data=m)\n\n# Add the object to the scene\nbpy.context.scene.collection.objects.link(o)    \n

Info

Passing a multi-dimensional NumPy array directly to foreach_set() will not work:

>>> vertices = numpy.array([\n...     (0, 0, 0),    (2,  0,  0),    (2,  2,  0.2),    (0,  2,  0.2),\n...     (1, 3, 1),    (1, -1, -1),    (0, -2, -1),      (2, -2, -1)\n... ], 'float32')\n>>> vertices.shape\n(8, 3)\n\n>>> m = bpy.data.meshes.new(name='my detailed mesh')\n>>> m.vertices.foreach_set('co', vertices)\nTraceback (most recent call last):\n  File \"<blender_console>\", line 1, in <module>\nRuntimeError: internal error setting the array\n

However, passing a flattened array does work:

>>> m.vertices.add(8)   # Allocate the vertices first\n>>> m.vertices.foreach_set('co', vertices.flatten())\n>>> [v.co for v in m.vertices]\n[Vector((0.0, 0.0, 0.0)), Vector((2.0, 0.0, 0.0)), Vector((2.0, 2.0, 0.20000000298023224)), Vector((0.0, 2.0, 0.20000000298023224)), Vector((1.0, 3.0, 1.0)), Vector((1.0, -1.0, -1.0)), Vector((0.0, -2.0, -1.0)), Vector((2.0, -2.0, -1.0))]\n
"},{"location":"api/meshes/#specifying-edges-when-creating-a-mesh","title":"Specifying edges when creating a mesh","text":"

In most cases we want to create a mesh consisting of only polygons and in that case don't need to specify edges. For certain mesh objects it can be of interest to also be able to specify edges explicitly, or even to create a mesh that consists only of vertices and edges between them. Edges can be used to add line segments that are not part of polygons.

We build upon the example mesh we created above by adding a set of 3 edges:

# Create a mesh consisting of 3 polygons using 8 vertices, with 3 extra edges\n# that are not part of the polygons\n\nvertices = [\n    (0, 0, 0),    (2,  0,  0),    (2,  2,  0.2),    (0,  2,  0.2),\n    (1, 3, 1),    (1, -1, -1),    (0, -2, -1),      (2, -2, -1)\n]\n\nedges = [\n    (5, 6), (6, 7), (5, 7)\n]\n\npolygons = [\n    (0, 1, 2, 3),   # Quad\n    (4, 3, 2),      # Triangle\n    (0, 5, 1)       # Triangle\n]\n\nm = bpy.data.meshes.new(name='my mesh with edges')\nm.from_pydata(vertices, edges, polygons)\n\no = bpy.data.objects.new(name='my mesh with edges', object_data=m)\nbpy.context.scene.collection.objects.link(o)\n

The resulting mesh and outliner entry looks like this:

Note that even though we specified only 3 edges explicitly the polygons in the mesh implicitly define 8 more. These are the edges making up those polygons, with shared edges being present only once. In total this results in 11 edges in the mesh:

>>> len(m.edges)\n11\n

For the second, low-level, method of mesh creation edges are handled slightly differently. Edges can be set explicitly by using Mesh.edges:

# Vertices (8): x1 y1 z1 x2 y2 z2 ...\nvertices = numpy.array([\n    0, 0, 0,    2,  0,  0,    2,  2,  0.2,    0,  2,  0.2,\n    1, 3, 1,    1, -1, -1,    0, -2, -1,      2, -2, -1\n], dtype=numpy.float32)\n\n# Extra edges (3) not defined implicitly by polygons\nedges = numpy.array([\n    5, 6,    6, 7,    5, 7\n], dtype=numpy.int32)\n\n#\n# Polygons, defined in loops\n#\n\n# List of vertex indices of all loops combined\nvertex_index = numpy.array([\n    0, 1, 2, 3,                             # Quad\n    4, 3, 2,                                # Triangle\n    0, 5, 1                                 # Triangle\n], dtype=numpy.int32)\n\n# For each polygon the start of its indices in vertex_index\nloop_start = numpy.array([\n    0, 4, 7\n], dtype=numpy.int32)\n\n# Length of each polygon in number of vertices\nloop_total = numpy.array([\n    4, 3, 3\n], dtype=numpy.int32)\n\nnum_vertices = vertices.shape[0] // 3\nnum_edges = edges.shape[0] // 2\nnum_vertex_indices = vertex_index.shape[0]\nnum_loops = loop_start.shape[0]\n\nm = bpy.data.meshes.new(name='detailed mesh with edges')\n\n# Vertices\nm.vertices.add(num_vertices)\nm.vertices.foreach_set('co', vertices)\n\n# Edges\nm.edges.add(num_edges)\nm.edges.foreach_set('vertices', edges)\n\n# Polygons\nm.loops.add(num_vertex_indices)\nm.loops.foreach_set('vertex_index', vertex_index)\n\nm.polygons.add(num_loops)\nm.polygons.foreach_set('loop_start', loop_start)\nm.polygons.foreach_set('loop_total', loop_total)\n\n# Done, update mesh object\nm.update()\n\n# Validate mesh\nif m.validate(verbose=True):\n    print('Mesh data did not validate!')\n

Here, we only specify the extra edges and not the polygon edges. But when we try to validate the mesh errors will be reported:

ERROR (bke.mesh): ../source/blender/blenkernel/intern/mesh_validate.c:628 BKE_mesh_validate_arrays:     Poly 0 needs missing edge (0, 1)\nERROR (bke.mesh): ../source/blender/blenkernel/intern/mesh_validate.c:628 BKE_mesh_validate_arrays:     Poly 0 needs missing edge (1, 2)\nERROR (bke.mesh): ../source/blender/blenkernel/intern/mesh_validate.c:628 BKE_mesh_validate_arrays:     Poly 0 needs missing edge (2, 3)\nERROR (bke.mesh): ../source/blender/blenkernel/intern/mesh_validate.c:628 BKE_mesh_validate_arrays:     Poly 0 needs missing edge (3, 0)\nERROR (bke.mesh): ../source/blender/blenkernel/intern/mesh_validate.c:628 BKE_mesh_validate_arrays:     Poly 1 needs missing edge (4, 3)\nERROR (bke.mesh): ../source/blender/blenkernel/intern/mesh_validate.c:628 BKE_mesh_validate_arrays:     Poly 1 needs missing edge (3, 2)\nERROR (bke.mesh): ../source/blender/blenkernel/intern/mesh_validate.c:628 BKE_mesh_validate_arrays:     Poly 1 needs missing edge (2, 4)\nERROR (bke.mesh): ../source/blender/blenkernel/intern/mesh_validate.c:628 BKE_mesh_validate_arrays:     Poly 2 needs missing edge (0, 5)\nERROR (bke.mesh): ../source/blender/blenkernel/intern/mesh_validate.c:628 BKE_mesh_validate_arrays:     Poly 2 needs missing edge (5, 1)\nERROR (bke.mesh): ../source/blender/blenkernel/intern/mesh_validate.c:628 BKE_mesh_validate_arrays:     Poly 2 needs missing edge (1, 0)\n

So the polygon edges, which we did not specify, are being reported. In this case the validate() method will correct this and add the missing edges. But having errors reported for regular polygon edges makes it harder to detect any other issues with the mesh data. So the Mesh.update() method provides the option calc_edges. By default this option is False, but when set to True all edges in the mesh will be recalculated to be consistent with the available vertex indices, polygons and extra edges set.

...\n\n# Done, update mesh object and recalculate edges\nm.update(calc_edges=True)\n

Validation now succeeds:

>>> m.validate(verbose=True)\nFalse\n
"},{"location":"api/meshes/#accessing-mesh-data-object-mode","title":"Accessing mesh data (object mode)","text":"

Inspecting or using mesh data is straightforward. Here we use one of the meshes created with the low-level methods above and retrieve some of its data. Note that Blender provides a few values derived from the original arrays, such as loop_indices and vertices per polygon, which can be useful for certain operations.

m = bpy.data.meshes['my detailed mesh']\n\nlen(m.vertices)            => 8                            \nlen(m.polygons)            => 3\n# 2 triangles + 1 quad = 2*3 + 1*4 = 10\nlen(m.loops)               => 10\n# 8 implicit edges (for 2 triangles and 1 quad), shared edges only listed once\nlen(m.edges)               => 8                \n\nm.vertices[7].co           => Vector((2.0, -2.0, -1.0))         # Coordinate\nm.vertices[7].normal       => Vector((0.6.., -0.6.., -0.3..))   # Normal\nm.vertices[7].select       => True              # Selected (edit mode)\n\nm.polygons[2].index        => 2                 # Useful in 'for p in m.polygons'\nm.polygons[2].loop_start   => 7                 # First index in loops array\nm.polygons[2].loop_total   => 3                 # Number of vertices in loop\nm.polygons[2].loop_indices => [7, 8, 9]         # Indices in m.loops\nm.loops[7].vertex_index    => 0\nm.loops[8].vertex_index    => 5\nm.loops[9].vertex_index    => 1\nm.polygons[2].vertices     => [0, 5, 1]         # Actual vertex indices\nm.polygons[2].select       => True              # Selected (edit mode)\nm.polygons[2].use_smooth   => False             # Smooth shading enabled\n\n# These are automatically computed\nm.polygons[2].area         => 1.4142135381698608\nm.polygons[2].normal       => Vector((0.0, -0.707..., 0.707...))   \nm.polygons[2].center       => Vector((1.0, -0.333..., -0.333...))  \n\nm.edges[0].vertices        => [2, 3]            # (bpy_prop_array)\n

Starting with Blender 3.1 there are new attributes vertex_normals and polygon_normals on Mesh objects, to access normals directly from the underlying arrays they're stored in:

# Access per vertex, as above\n>>> m.vertices[0].normal\nVector((-0.5773503184318542, -0.5773503184318542, -0.5773503184318542))\n\n# Access from array of vertex normals\n>>> m.vertex_normals[0].vector\nVector((-0.5773503184318542, -0.5773503184318542, -0.5773503184318542))\n\n# Access per polygon, as above\n>>> m.polygons[0].normal\nVector((-1.0, -0.0, 0.0))\n\n# Access from array of polygon normals\n>>> m.polygon_normals[0].vector\nVector((-1.0, 0.0, 0.0))\n

The array-based normal access is more efficient than accessing the normal value of a MeshVertex. Note that vertex_normals and polygon_normals only provide read-only access.

"},{"location":"api/meshes/#vertex-colors","title":"Vertex colors","text":"

A mesh can have multiple sets of vertex colors. Each set has a name and for each vertex the associated color (but see below). By default meshes created in Blender do not have a vertex color layer, so it needs to be created explicitly.

>>> m\nbpy.data.meshes['Cube']\n\n>>> type(m.vertex_colors)\n<class 'bpy_prop_collection'>\n\n# Create a new vertex color layer\n>>> vcol_layer = m.vertex_colors.new(name='My vertex colors')\n>>> vcol_layer\nbpy.data.meshes['Cube'].vertex_colors[\"My vertex colors\"]\n\n>>> len(m.vertex_colors)\n1\n\n# Name shown under Object Data -> Vertex Colors \n>>> vcol_layer.name\n'My vertex colors'\n

The vertex colors themselves are accessed through the data member:

>>> type(vcol_layer.data)\n<class 'bpy_prop_collection'>\n\n>>> len(vcol_layer.data)\n24\n\n>>> type(vcol_layer.data[0].color)\n<class 'bpy_prop_array'>\n\n>>> list(vcol_layer.data[0].color)\n[1.0, 1.0, 1.0, 1.0]\n\n>>> len(m.polygons)\n6\n\n>>> len(m.vertices)\n8\n\n>>> len(m.loops)\n24\n

One thing to notice here is that the vertex color array has 24 entries. But the Cube object only has 8 vertices and 6 polygons. The reason for the higher number of vertex colors is that Blender stores separate vertex colors per polygon. So the Cube has 6 polygons, each defined using 4 vertices, hence 6*4=24 vertex colors in total (which is the same number as the length of the loops array).

This is more flexible than what most 3D file formats allow, which usually only store one color per vertex. During import Blender will duplicate those colors to set the same color for a vertex in all polygons in which it is used. An example of how to take advantage of the added flexibility is that we can set a random color per cube face by setting each of the 4 vertex colors of a face to the same color:

from random import random\n\nfor i in range(6):\n    r = random()\n    g = random()\n    b = random()\n    for j in range(4):\n        vcol_layer.data[4*i+j].color = (r, g, b, 1)\n

A slightly more Blender-like (and robust) way to write the above code would be to take advantage of the polygon loop indices:

for p in m.polygons:\n    r = random()\n    g = random()\n    b = random()    \n    for i in p.loop_indices:\n        vcol_layer.data[i].color = (r, g, b, 1)\n

Vertex color space changed in 3.2+

In Blender 3.2 the interpretation of vertex color values was changed. Previously, vertex color RGB values were assumed to be in sRGB color space. But from 3.2 onwards they are assumed to be in scene linear color space. Specifically, the vcol_attr.data[i].color attribute assumes linear values are passed, while vcol_attr.data[i].color_srgb can be used to set sRGB values (the latter will use automatic conversion where needed).

When passing the wrong values, i.e. sRGB instead of linear, the difference in color can be subtle, but noticeable. Below is the same set of values, but one passed as sRGB (left), the other as linear (right):

To manually convert a color value between the two color spaces use the functions from mathutils.Color, specifically from_scene_linear_to_srgb() and from_srgb_to_scene_linear().
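
A minimal example of such a conversion (the methods return a new Color and leave the original unchanged):

from mathutils import Color\n\nc_srgb = Color((0.5, 0.2, 0.8))\n\n# Convert sRGB -> scene linear\nc_linear = c_srgb.from_srgb_to_scene_linear()\n\n# And back again\nc_back = c_linear.from_scene_linear_to_srgb()\n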

"},{"location":"api/meshes/#active-set","title":"Active set","text":"

As noted above a mesh can have more than one layer of vertex colors. Among the sets present on a mesh there can be only one that is active. The active vertex color layer set controls, for example, which vertex colors are visible in the 3D viewport and are edited in Vertex Paint mode.

When adding a vertex color layer (and similar for UV maps described below) through the UI the active layer is changed to the newly added layer. Also, clicking in the Vertex Color layer UI changes the active layer. Below, a list of 2 vertex color layers on a mesh is shown, of which Col is the active one used in vertex paint mode.

The camera icon right of the vertex color names controls which layer is used during rendering by default (and which is set independently of the active status), although this toggle appears not to work in Blender 3.6. But in most cases the shader used on an object will explicitly choose a vertex color layer using an Attribute node and so override the setting in the UI list.

Controlling the active vertex color (or UV map) layer can be done using the active property:

>>> m.vertex_colors.active_index\n1\n\n>>> m.vertex_colors.active\nbpy.data.meshes['Cube'].vertex_colors[\"Another layer\"]\n\n>>> m.vertex_colors.active = m.vertex_colors[0]\n>>> m.vertex_colors.active\nbpy.data.meshes['Cube'].vertex_colors[\"Col\"]\n
"},{"location":"api/meshes/#uv-coordinates","title":"UV coordinates","text":"

UV coordinates follow the same setup as vertex colors, but store a 2-tuple of floats instead of a color. Note that, just like vertex colors, UV coordinates are specified per vertex per polygon.

Meshes created in Blender will already have a UV map called UVMap:

>>> m\nbpy.data.meshes['Cube']\n\n>>> len(m.uv_layers)\n1\n\n>>> m.uv_layers[0].name\n'UVMap'\n

The actual UV values are once again stored under the data member:

>>> uv_map = m.uv_layers[0]\n>>> uv_map\nbpy.data.meshes['Cube'].uv_layers[\"UVMap\"]\n\n>>> type(uv_map.data)\n<class 'bpy_prop_collection'>\n\n>>> len(uv_map.data)\n24\n\n>>> type(uv_map.data[0])\n<class 'bpy.types.MeshUVLoop'>\n\n>>> uv_map.data[0].uv\nVector((0.375, 0.0))\n

In general, UV maps are either set through importing or edited within Blender using the UV Editor, although there can be valid reasons for wanting to control them through the Python API.
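
For example, a small sketch that sets a (naive) planar UV mapping from Python, simply using the vertex XY positions as UV coordinates:

m = bpy.data.meshes['Cube']\nuv_map = m.uv_layers[0]\n\n# UV coordinates are stored per vertex per polygon (i.e. per loop)\nfor p in m.polygons:\n    for i in p.loop_indices:\n        co = m.vertices[m.loops[i].vertex_index].co\n        uv_map.data[i].uv = (co.x, co.y)\n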

"},{"location":"api/meshes/#bmesh","title":"BMesh","text":"

There is another method in Blender for creating meshes and accessing their data: the so-called BMesh, which is implemented by the bmesh module and its BMesh class. BMesh is especially interesting when you want to perform more complex geometric operations on an existing mesh, or build up a mesh polygon-by-polygon instead of providing the full mesh in one go as a set of arrays as shown above. Also, a large set of high- and low-level geometric operations on BMeshes is available, such as merging vertices within a given distance, face splitting, edge collapsing or generating a convex hull. These are provided in the bmesh.ops and bmesh.utils modules. These operations would be tedious and error prone to script manually.

In this section we only give a brief overview of BMesh and refer to the API docs for all the details.

The differences of working with a BMesh compared to the native mesh data structure we showed above:

  • A BMesh holds extra data on mesh connectivity, like the neighbours of a vertex, which can be easily queried for geometric editing. The trade-off is that a BMesh will use more memory to store all this extra data, but that is usually only a limiting factor for very large meshes.
  • It is somewhat slower to create a (large) mesh using a BMesh, as each mesh element (vertex, edge, polygon) takes a Python call to create, plus needs extra calls and Python values to set up.
  • A BMesh cannot be used directly in a scene, it first needs to be converted (or copied back) to a Mesh. So mesh data is present twice in memory at some point in time, in the two different forms.

Here's a (verbose) example of creating a BMesh from scratch that holds a single triangle and edge:

import bpy, bmesh \n\nbm = bmesh.new()\n\n# Create 4 vertices\nv1 = bm.verts.new((0, 0, 0))\nv2 = bm.verts.new((1, 0, 1))\nv3 = bm.verts.new((0, 1, 1))\nv4 = bm.verts.new((1, 1, 1))\n\n# Add a triangle\nbm.faces.new((v1, v2, v3))\n\n# Add a line edge\nbm.edges.new((v3, v4))\n\n# Done setting up the BMesh, now copy geometry to a regular Mesh\nm = bpy.data.meshes.new('mesh')\nbm.to_mesh(m)\n\n# Release BMesh data, bm will no longer be usable\nbm.free()\n\n# Add regular Mesh as object\no = bpy.data.objects.new('mesh', m) \nbpy.context.scene.collection.objects.link(o)\n

A BMesh can also be created from an existing Mesh, edited and then copied back to the Mesh:

o = bpy.context.active_object\nm = o.data\n\n# Create a new BMesh and copy geometry from the Mesh\nbm = bmesh.new()\nbm.from_mesh(m)\n\n# Edit some geometry\nbm.verts.ensure_lookup_table()\nbm.verts[4].co.x += 3.14\n\nbm.faces.ensure_lookup_table()\nbm.faces.remove(bm.faces[0])\n\n# Copy back to Mesh\nbm.to_mesh(m)\nbm.free()\n

If a Mesh is currently in edit mode you can still create a BMesh from it, edit that and then copy the changes back, while keeping the Mesh in edit mode:

import bpy, bmesh\n\no = bpy.context.active_object\nm = o.data\nassert o.mode == 'EDIT'\n\n# Note the different call: the module-level bmesh.from_edit_mesh(),\n# instead of BMesh.from_mesh()\nbm = bmesh.from_edit_mesh(m)\n\n# <edit BMesh>\n\n# Write the changes back to the edit-mesh (again, a different call)\nbmesh.update_edit_mesh(m)\n\n# No bm.free() here: this BMesh wraps the edit-mesh and is owned by Blender\n

This can be useful when you're working in edit mode on a mesh and also want to run a script on it that uses BMesh, but don't want to switch in and out of edit-mode to run the script.

Warning

There are some things to watch out for when synchronizing BMesh state to a Mesh, see here.

Some examples of the geometric queries that you can do on a BMesh (see docs for more):

bm.verts[i]                 # Sequence of mesh vertices (read-only)\nbm.edges[i]                 # Sequence of mesh edges (read-only)\nbm.faces[i]                 # Sequence of mesh faces (read-only)\n\nbm.verts[i].co              # Vertex coordinate as a mathutils.Vector\nbm.verts[i].normal          # Vertex normal\nbm.verts[i].is_boundary     # True if vertex is at the mesh boundary\nbm.verts[i].is_wire         # True if vertex is not connected to any faces\nbm.verts[i].link_edges      # Sequence of edges connected to this vertex\nbm.verts[i].link_faces      # Sequence of faces connected to this vertex\nbm.verts[i].index           # Index in bm.verts\n\nbm.edges[i].calc_length()   # Length of the edge\nbm.edges[i].is_boundary     # True if edge is boundary of a face\nbm.edges[i].is_wire         # True if edge is not connected to any faces\nbm.edges[i].is_manifold     # True if edge is manifold (used in at most 2 faces)\nv = bm.edges[i].verts[0]    # Get one vertex of this edge\nbm.edges[i].other_vert(v)   # Get the other vertex\nbm.edges[i].link_faces      # Sequence of faces connected to this edge\nbm.edges[i].index           # Index in bm.edges\n\nbm.faces[i].calc_area()     # Face area\nbm.faces[i].calc_center_median()    # Median center\nbm.faces[i].edges           # Sequence of edges defining this face\nbm.faces[i].verts           # Sequence of vertices defining this face\nbm.faces[i].normal          # Face normal\nbm.faces[i].index           # Index in bm.faces\n

Indices

The use of indices above, both to index the sequences of vertices/edges/faces as well as retrieving .index values, requires up-to-date indices. During operations on a BMesh the indices (and sequences) might become incorrect and need an update first.

To ensure the .index values of vertices, edges and faces are correct call the respective index_update() method on their sequence:

bm.verts.index_update()\nbm.edges.index_update()\nbm.faces.index_update()\n

To ensure you can correctly index bm.verts, bm.edges and bm.faces call the respective ensure_lookup_table() method:

bm.verts.ensure_lookup_table()\nbm.edges.ensure_lookup_table()\nbm.faces.ensure_lookup_table()\n

A Blender mesh can contain polygons with an arbitrary number of vertices. Sometimes it can be desirable to work on triangles only. You can convert all non-triangle faces in a BMesh to triangles with a call to bmesh.ops.triangulate():

bm = bmesh.new()\n\nv1 = bm.verts.new((0, 0, 0))\nv2 = bm.verts.new((1, 0, 1))\nv3 = bm.verts.new((0, 1, 1))\nv4 = bm.verts.new((1, 1, 1))\n\n# Add a quad\nbm.faces.new((v1, v2, v3, v4))\n\n# Ensure the vertex indices printed below are correct\nbm.verts.index_update()\n\nfor f in bm.faces:\n    print([v.index for v in f.verts])\n\n# Force triangulation. The list of faces can optionally be a subset of the faces in the mesh.\nbmesh.ops.triangulate(bm, faces=bm.faces[:])\n\nprint('After triangulation:')\nfor f in bm.faces:\n    print([v.index for v in f.verts])\n\n# Output:\n#\n# [0, 1, 2, 3]\n# After triangulation:\n# [0, 2, 3]\n# [0, 1, 2]\n
"},{"location":"api/object_transformations/","title":"Transforms and coordinates","text":""},{"location":"api/object_transformations/#object-to-world-transform","title":"Object-to-world transform","text":"

The matrix_world attribute of an Object contains the object-to-world transform that places the object in the 3D scene:

>>> o = bpy.context.active_object\n>>> o\nbpy.data.objects['Cube']\n\n>>> o.matrix_world\nMatrix(((1.3376139402389526, 0.0, 0.0, 0.3065159320831299),\n        (0.0, 1.3376139402389526, 0.0, 2.2441697120666504),\n        (0.0, 0.0, 1.3376139402389526, 1.2577730417251587),\n        (0.0, 0.0, 0.0, 1.0)))\n

Comparing this matrix with the values set in the Transform panel, you can see the Location value is stored in the right-most column of the matrix and the scaling along the diagonal. If there was a rotation set on this object some of these values would not be as recognizable anymore.

The location, rotation (in radians) and scale values can also be inspected and set separately:

>>> o.location\nVector((0.3065159320831299, 2.2441697120666504, 1.2577730417251587))\n\n>>> o.rotation_euler\nEuler((0.0, 0.0, 0.0), 'XYZ')\n\n>>> o.scale\nVector((1.3376139402389526, 1.3376139402389526, 1.3376139402389526))\n\n>>> o.location = (1, 2, 3)\n# Rotations are set in radians\n>>> from math import radians\n>>> o.rotation_euler.x = radians(45)\n>>> o.scale = (2, 1, 1)\n>>> o.matrix_world\nMatrix(((2.0, 0.0, 0.0, 1.0),\n        (0.0, 0.7071067690849304, -0.7071067690849304, 2.0),\n        (0.0, 0.7071067690849304, 0.7071067690849304, 3.0),\n        (0.0, 0.0, 0.0, 1.0)))\n

See the section on parenting for some subtle effects on transformations in cases object parenting is used.

"},{"location":"api/object_transformations/#geometry-coordinates","title":"Geometry coordinates","text":"

Mesh geometry in Blender stores vertex coordinates (and other geometric information) in object-space coordinates. But a mesh (or object in general) will usually get transformed to a specific position, scaling and orientation in the scene. As described above the net transform from object-space to world-space coordinates, also called the object-to-world transform, is available through matrix_world. In cases where you need to have access to geometric data in world-space, say vertex coordinates, you need to apply the matrix_world transform manually.

For example, given the cube transformed as shown above, with vertex 7 selected (visible bottom-left in the image below):

>>> o\nbpy.data.objects['Cube']\n\n>>> m = o.data\n>>> o.matrix_world\nMatrix(((1.3376139402389526, 0.0, 0.0, 0.3065159320831299),\n        (0.0, 1.3376139402389526, 0.0, 2.2441697120666504),\n        (0.0, 0.0, 1.3376139402389526, 1.2577730417251587),\n        (0.0, 0.0, 0.0, 1.0)))\n\n# The object-space coordinate of this vertex\n>>> m.vertices[7].co\nVector((-1.0, -1.0, -1.0))\n\n# The world-space coordinate of this vertex, which matches\n# what the Transform UI shows. Note the Global display mode\n# selected in the UI; if we select Local it will show (-1, -1, -1).\n>>> o.matrix_world @ m.vertices[7].co\nVector((-1.0310980081558228, 0.9065557718276978, -0.07984089851379395))\n
"},{"location":"api/often_used_values_and_operations/","title":"Often used values and operations","text":"

Here, we list some frequently used parts of the API, for varying types of data.

"},{"location":"api/often_used_values_and_operations/#scene","title":"Scene","text":"
  • Current scene: bpy.context.scene (read-only)
"},{"location":"api/often_used_values_and_operations/#objects","title":"Objects","text":"
  • Active object: bpy.context.active_object (read-only)
  • Selected objects: bpy.context.selected_objects (read-only)
  • Delete selected objects: bpy.ops.object.delete()
"},{"location":"api/often_used_values_and_operations/#camera","title":"Camera","text":"
  • Active camera object: Scene.camera (this is the camera object, not camera object data)
  • Type: Camera.type (\"PERSP\", \"ORTHO\", ...)
  • Focal length: Camera.lens (in mm)
  • Clipping distances: Camera.clip_start, Camera.clip_end
"},{"location":"api/often_used_values_and_operations/#rendering","title":"Rendering","text":"
  • Image resolution:
    • Width: Scene.render.resolution_x
    • Height: Scene.render.resolution_y
    • Percentage: Scene.render.resolution_percentage
  • Output file: Scene.render.filepath
  • Image output type: Scene.render.image_settings.file_format (\"PNG\", \"JPEG\", ...)
  • Number of samples per pixel (Cycles): Scene.cycles.samples
  • Render current scene: bpy.ops.render.render(). See its parameters for how to control the specific type of render (still image versus animation) and whether to save the output; a short example is shown below
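
As a short example combining several of the settings above (the output path and resolution values are just placeholders):

import bpy\n\nscene = bpy.context.scene\n\n# Render the current frame at 1920x1080 and save it as a PNG\nscene.render.resolution_x = 1920\nscene.render.resolution_y = 1080\nscene.render.resolution_percentage = 100\nscene.render.image_settings.file_format = 'PNG'\nscene.render.filepath = '/tmp/render.png'\n\n# write_still=True saves the rendered image to the file path set above\nbpy.ops.render.render(write_still=True)\n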
"},{"location":"api/often_used_values_and_operations/#animation","title":"Animation","text":"
  • Current frame: Scene.frame_current
  • Frame range: Scene.frame_start, Scene.frame_end
  • Frame rate: Scene.render.fps
"},{"location":"api/often_used_values_and_operations/#file-io","title":"File I/O","text":"
  • Save the current session to a specific file: bpy.ops.wm.save_as_mainfile()
  • Open a Blend file: bpy.ops.wm.open_mainfile()
  • Import a file (call depends on file type): bpy.ops.import_scene.obj() (OBJ scene), bpy.ops.import_scene.gltf() (glTF scene), bpy.ops.import_mesh.ply() (PLY mesh), etc. See here and here for more details.
  • Exporting a file (call depends on file type) follows the same call names, see here and here
"},{"location":"api/operators/","title":"Operators","text":"

A special class of important API routines are the so-called operators. These are usually higher-level operations, such as adding a new cube mesh, deleting the current set of selected objects or running a file importer. As noted above, many parts of the Blender UI are set up with Python scripts, and in a lot of cases the operations you perform in the UI through menu actions or shortcut keys will simply call the relevant operator from Python to do the actual work.

The Info area will show most operators as they get executed, but you can also check what API call is made for a certain UI element (this requires Python Tooltips to be enabled, see developer settings). For example, adding a plane mesh through the Add menu will call the operator bpy.ops.mesh.primitive_plane_add(), as the tooltip shows:

You can simply call the operator directly from Python to add a plane in exactly the same way as with the menu option:

>>> bpy.data.objects.values()\n[]\n\n>>> bpy.ops.mesh.primitive_plane_add()\n{'FINISHED'}\n\n# A plane mesh is now added to the scene\n>>> bpy.data.objects.values()\n[bpy.data.objects['Plane']]\n

Many of the operators take parameters, to influence the results. For example, with bpy.ops.mesh.primitive_plane_add() you can set the initial size and location of the plane (see the API docs for all the parameters):

>>> bpy.ops.mesh.primitive_plane_add(size=3, location=(1,2,3))\n{'FINISHED'}\n

Info

Note that operator parameters can only be passed using keyword arguments.

"},{"location":"api/operators/#operator-context","title":"Operator context","text":"

This is all very nice and powerful, but operators have a few inherent properties that can make them tricky to work with.

An operator's execution crucially depends on the context in which it is called, where it gets most of the data it needs. As shown above simple parameter values can usually be passed, but values like the object(s) to operate on are retrieved implicitly. For example, to join a set of mesh objects into a single mesh you can call the operator bpy.ops.object.join(). But the current context needs to be correctly set for the operator to work:

# We have no objects selected\n>>> bpy.context.selected_objects\n[]\n\n>>> bpy.ops.object.join()\nWarning: Active object is not a selected mesh\n{'CANCELLED'}\n\n# With 3 objects selected\n>>> bpy.context.selected_objects\n[bpy.data.objects['Cube'], bpy.data.objects['Cube.001'], \nbpy.data.objects['Cube.002']]\n\n# Now it works\n>>> bpy.ops.object.join()\n{'FINISHED'}\n

As can be seen above an operator only returns a value indicating the execution status. When calling the operator in the Python Console as above some extra info is printed. But when calling operators from scripts the status return value is all you have to go on, as the extra message isn't printed when the script is executed. And in some cases the reason an operator fails can be quite unclear:

>>> bpy.context.selected_objects\n[bpy.data.objects['Cube'], bpy.data.objects['Camera']]\n\n>>> bpy.ops.mesh.intersect_boolean()\nTraceback (most recent call last):\n  File \"<blender_console>\", line 1, in <module>\n  File \"/usr/share/blender/3.6/scripts/modules/bpy/ops.py\", line 113, in __call__\n    ret = _op_call(self.idname_py(), None, kw)\n          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nRuntimeError: Operator bpy.ops.mesh.intersect_boolean.poll() failed, context is incorrect\n

This shows that the so-called poll function failed, but what does that mean? The poll function is used by operators to determine if they can execute in the current context. They do this by checking certain preconditions on things like the selected object(s), the type of data or an object mode. In this case the bpy.ops.mesh.intersect_boolean() operator can't perform a boolean intersection on multiple meshes, but only on the faces of a single object in edit mode. But this is not something you can tell from the error message (nor does the documentation make that clear).

To actually perform a boolean intersection on two objects from a Python script requires us to do what we would do in the UI: add a Boolean modifier on one of the objects and set its parameters. We could take advantage of the Python Tooltips to see which operator we need:

This would suggest that using bpy.ops.object.modifier_add(type='BOOLEAN') would be what we need, but then setting the required parameters on the modifier (i.e. the object to subtract) would become tricky.

So for a boolean operation, and setting object modifiers in general, there's an easier way:

>>> o = bpy.data.objects['Cube']\n# Add a modifier on the object and set its parameters\n>>> mod = o.modifiers.new(name='boolmod', type='BOOLEAN')\n>>> mod.object = bpy.data.objects['Cube.001']\n>>> mod.operation = 'DIFFERENCE'\n\n# At this point the modifier is all set up. We hide\n# the object we subtract to make the boolean result visible.\n>>> bpy.data.objects['Cube.001'].hide_viewport = True\n

Unfortunately, certain operations can only be performed by calling operators. So there's a good chance that you will need to use them at some point when doing Python scripting. Hopefully this section gives some clues as to how to work with them. See this section for more details on all the above subtleties and issues relating to working with operators.

The bpy.ops documentation also contains useful information on operators, including how to override an operator's implicit context with values you set yourself.
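
For example, a sketch of running the join() operator with an explicit context override via temp_override() (available since Blender 3.2; the exact context keys an operator polls for can vary per operator):

import bpy\n\n# Assumption: these two mesh objects exist in the scene\ncubes = [bpy.data.objects['Cube'], bpy.data.objects['Cube.001']]\n\n# Run the operator as if these objects were the active/selected ones\nwith bpy.context.temp_override(active_object=cubes[0],\n                               selected_editable_objects=cubes):\n    bpy.ops.object.join()\n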

"},{"location":"api/parenting/","title":"Parenting","text":"

An object's parent can be queried or set simply through its parent attribute, which needs to reference another Object (or None).

But when parenting is involved the use of transformation matrices becomes somewhat more complex. Suppose we have two cubes above each other, the top cube transformed to Z=5 and the bottom cube to Z=2:

Using the 3D viewport we'll now parent the bottom cube to the top cube (LMB click bottom cube, Shift-LMB click top cube, Ctrl-P, select Object) and inspect the values in Python:

>>> bpy.data.objects['Bottom cube'].parent\nbpy.data.objects['Top cube']\n\n# The bottom cube is still located in the scene at Z=2, \n# even after parenting, as is expected\n>>> bpy.data.objects['Bottom cube'].matrix_world\nMatrix(((1.0, 0.0, 0.0, 0.0),\n        (0.0, 1.0, 0.0, 0.0),\n        (0.0, 0.0, 1.0, 2.0),\n        (0.0, 0.0, 0.0, 1.0)))\n

If an object has a parent its matrix_local attribute will contain the transformation relative to its parent, while matrix_world will contain the resulting net object-to-world transformation. If no parent is set then matrix_local is equal to matrix_world.

Let's check the bottom cube's local matrix value:

# Correct, it is indeed -3 in Z relative to its parent\n>>> bpy.data.objects['Bottom cube'].matrix_local\nMatrix(((1.0, 0.0, 0.0, 0.0),\n        (0.0, 1.0, 0.0, 0.0),\n        (0.0, 0.0, 1.0, -3.0),\n        (0.0, 0.0, 0.0, 1.0)))\n

As already shown above the parent attribute can be used to inspect and control the parenting relationship:

>>> bpy.data.objects['Top cube'].parent\n# None\n>>> bpy.data.objects['Bottom cube'].parent\nbpy.data.objects['Top cube']\n\n# Remove parent\n>>> bpy.data.objects['Bottom cube'].parent = None\n

At this point the two cubes are no longer parented and are at Z=2 (\"Bottom cube\") and Z=5 (\"Top cube\") in the scene. But when we restore the parenting relationship from Python something funny happens 1:

# Set parent back to what it was\n>>> bpy.data.objects['Bottom cube'].parent = bpy.data.objects['Top cube']\n

The reason for the different position of the cube called \"Bottom cube\" (which is now on top) is that when using the UI to set up a parenting relationship it does more than just setting the parent attribute of the child object. There's also something called the parent-inverse matrix. Let's inspect it and the other matrix transforms we've already seen for the current (unexpected) scene:

# Identity matrix, i.e. no transform\n>>> bpy.data.objects['Bottom cube'].matrix_parent_inverse\nMatrix(((1.0, 0.0, 0.0, 0.0),\n        (0.0, 1.0, 0.0, 0.0),\n        (0.0, 0.0, 1.0, 0.0),\n        (0.0, 0.0, 0.0, 1.0)))\n\n# Hmmm, this places the \"Bottom cube\" 2 in Z *above* its parent at Z=5...\n>>> bpy.data.objects['Bottom cube'].matrix_local\nMatrix(((1.0, 0.0, 0.0, 0.0),\n        (0.0, 1.0, 0.0, 0.0),\n        (0.0, 0.0, 1.0, 2.0),\n        (0.0, 0.0, 0.0, 1.0)))\n\n# ... so it indeed ends up at Z=7 as we saw (above \"Top cube\")\n>>> bpy.data.objects['Bottom cube'].matrix_world\nMatrix(((1.0, 0.0, 0.0, 0.0),\n        (0.0, 1.0, 0.0, 0.0),\n        (0.0, 0.0, 1.0, 7.0),\n        (0.0, 0.0, 0.0, 1.0)))\n

So what happened here? Apparently the matrix_local matrix changed from its value of Z=-3 as we saw earlier. The answer is that when you set up a parenting relationship using the UI the parent-inverse matrix is set to the inverse of the current parent transformation (as the name suggests) while matrix_local is updated to inverse(parent.matrix_world) @ to_become_child.matrix_world.

If we clear the parent value from Python and redo the parenting in the UI we can see this in the resulting transform matrices:

>>> bpy.data.objects['Bottom cube'].parent = None\n\n# <parent \"Bottom cube\" to \"Top cube\" in the UI>\n\n# Was identity, is now indeed the inverse of transforming +5 in Z\n>>> bpy.data.objects['Bottom cube'].matrix_parent_inverse\nMatrix(((1.0, -0.0, 0.0, -0.0),\n        (-0.0, 1.0, -0.0, 0.0),\n        (0.0, -0.0, 1.0, -5.0),\n        (-0.0, 0.0, -0.0, 1.0)))\n\n# Was Z=2, is now 2-5\n>>> bpy.data.objects['Bottom cube'].matrix_local\nMatrix(((1.0, 0.0, 0.0, 0.0),\n        (0.0, 1.0, 0.0, 0.0),\n        (0.0, 0.0, 1.0, -3.0),\n        (0.0, 0.0, 0.0, 1.0)))\n\n# Was Z=7\n>>> bpy.data.objects['Bottom cube'].matrix_world\nMatrix(((1.0, 0.0, 0.0, 0.0),\n        (0.0, 1.0, 0.0, 0.0),\n        (0.0, 0.0, 1.0, 2.0),\n        (0.0, 0.0, 0.0, 1.0)))\n

The reason for this behaviour is that when doing parenting in the 3D viewport you usually do not want the object that you are setting as the child to move. So the parenting matrices are adjusted accordingly when the parenting relationship is set up. But when we simply set parent from Python, the matrix_local value is used as is, causing our bottom cube to suddenly move up, as it is used as the transform relative to its parent, while it actually would need a different value to stay in place.
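
If you want to mimic the UI behaviour from Python, i.e. parent an object without having it move, a common idiom is to also set the parent-inverse matrix yourself (a sketch, using the two cubes from above):

child = bpy.data.objects['Bottom cube']\nparent = bpy.data.objects['Top cube']\n\nchild.parent = parent\n# Compensate for the parent's current transform, as the UI does,\n# so the child keeps its world-space position\nchild.matrix_parent_inverse = parent.matrix_world.inverted()\n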

There's actually quite a bit more going on with all the different parenting options available from the UI. See this page for more details.

"},{"location":"api/parenting/#children","title":"Children","text":"

Retrieving an object's children (i.e. the objects it is the parent of) can be done through its children property. This only returns the direct children of that object, not the children of its children, etc. Getting the set of all children of an object (direct and indirect) was made slightly easier in Blender 3.1 with the addition of the children_recursive attribute.

For example, given a Cube, Suzanne and Torus object, where Suzanne is parented to Cube, and the Torus is parented to Suzanne:

>>> list(bpy.data.objects)\n[bpy.data.objects['Cube'], bpy.data.objects['Suzanne'], bpy.data.objects['Torus']]\n\n>>> bpy.data.objects['Suzanne'].parent\nbpy.data.objects['Cube']\n\n>>> bpy.data.objects['Torus'].parent\nbpy.data.objects['Suzanne']\n\n>>> bpy.data.objects['Cube'].children\n(bpy.data.objects['Suzanne'],)\n\n>>> bpy.data.objects['Suzanne'].children\n(bpy.data.objects['Torus'],)\n\n>>> bpy.data.objects['Cube'].children_recursive\n[bpy.data.objects['Suzanne'], bpy.data.objects['Torus']]\n

These attributes are also available for collections.
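
As a quick sketch for a collection (the collection names here are made up for illustration):

>>> list(bpy.data.collections['Collection'].children)\n[bpy.data.collections['Inner']]\n\n>>> bpy.data.collections['Collection'].children_recursive\n[bpy.data.collections['Inner'], bpy.data.collections['Innermost']]\n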

  1. The same thing happens when setting the parent in the UI using Object Properties > Relations > Parent \u21a9

"},{"location":"api/selections/","title":"Selections","text":"

In a lot of cases you want to operate on a set of selected objects. You can access (read only) the current selection with bpy.context.selected_objects:

>>> bpy.context.selected_objects\n[bpy.data.objects['Cube'], bpy.data.objects['Plane']]\n

Changing the current selection can be done in several ways. Selection state per object can be controlled with the select_get() and select_set() methods:

>>> bpy.context.selected_objects\n[]\n\n>>> bpy.data.objects['Camera'].select_get()\nFalse\n\n>>> bpy.data.objects['Camera'].select_set(True)\n>>> bpy.context.selected_objects\n[bpy.data.objects['Camera']]\n

The full selection set can also be changed:

# Select all visible objects\n>>> bpy.ops.object.select_all(action='SELECT')\n\n# Deselect all objects\n>>> bpy.ops.object.select_all(action='DESELECT')\n\n# Toggle the selection state for each object\n>>> bpy.ops.object.select_all(action='TOGGLE')\n

Note that the default mode for bpy.ops.object.select_all() when not specified is TOGGLE.

Also note that the selection methods above operate only on objects that are currently visible in the scene (in terms of the outliner eye icon), just like the selection hotkeys (such as A) in the 3D viewport.
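
When you need more control than these operators give you, you can combine select_set() with a loop over the objects in the view layer. A minimal sketch that selects only mesh objects:

# Deselect everything, then select all mesh objects\n>>> bpy.ops.object.select_all(action='DESELECT')\n>>> for obj in bpy.context.view_layer.objects:\n...     if obj.type == 'MESH':\n...         obj.select_set(True)\n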

"},{"location":"basics/animation/everything/","title":"Animating everything","text":"

Here, we'll show how generic and powerful the Blender animation system is.

"},{"location":"basics/animation/example_flipbook_animation/","title":"\ud83d\udcbb Flipbook animation","text":"

Here are the steps needed to import a set of animated meshes and make them play as an animation within Blender. The approach we use here is to have a single mesh object on which we change the associated mesh data each frame. So even though all timesteps are loaded only one of them is visible at a time.

Here we take advantage of the Blender scene organization, where each object (a mesh object in this case) refers to object data (one of the meshes in the animation). We use a small Python script, called a frame handler, to respond to a change of the current frame time.
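
In essence such a frame handler is just a function registered with bpy.app.handlers that gets called on every frame change. A minimal sketch of the idea (the mesh naming scheme and the value of N are assumptions for illustration; the actual scripts used are included in the animated_ply_imports.blend file described below):

import bpy\n\nN = 100   # Number of loaded time steps (assumed)\n\ndef update_mesh(scene):\n    # Swap the mesh data shown by the single object to the\n    # time step matching the current frame\n    index = scene.frame_current % N\n    bpy.data.objects['Fluid sim'].data = bpy.data.meshes['mesh%04d' % index]\n\nbpy.app.handlers.frame_change_pre.append(update_mesh)\n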

Warning

The method below will import all meshes in the animation into the current scene. This uses quite a bit of memory (around 1GB in our tests).

Info

The data for this example is part of our advanced course and you can find the data on https://edu.nl/hrvbe under data/animation.

The animated_ply_imports.blend scene file contains two Python scripts in the Text Editor, called 1. import ply files and 2. register anim handler.

The dambreak.tar.gz file contains a set of animated meshes in binary PLY format and so is quite large when extracted.

  1. Extract dambreak.tar.gz in the same directory as animated_ply_imports.blend. This will create a directory dambreak which contains the PLY files.
  2. Load animated_ply_imports.blend

As noted above, this blend file not only contains a 3D scene, but also two Python scripts we use to set up the flipbook animation.

The first step is to load all the timesteps in the dataset using one of the scripts. This might take a bit of time, depending on the speed of your system. By default, only the first 100 steps are loaded. You can increase the number of files to the full 300 if you like by updating the variable N in both the import script and the animation handler script.

  1. Execute the script that imports the time step meshes from the PLY files. To do this, make sure the script called 1. import ply files is selected in the text editor panel. Then press the play button to the right of it, which will execute the script (an alternative is to press Alt-P in the editor).

  2. The cursor will change to an animated circle, indicating the import is running. If you suspect something is wrong, check the console output in the terminal where you started Blender.

  3. After all PLY files are loaded execute the script that installs the frame change handler. This script is called 2. register anim handler. Make sure the text editor is switched to this script, then press the play button.

  4. Verify that the flipbook animation works with Space and/or moving the time slider in the Timeline with Shift-RMB. You should see the fluid simulation evolve with each frame. You can also check the object data associated with the Fluid sim object in the Outliner to see that it changes.

The playback speed will depend on your system's performance, but also on the framerate setting chosen.

  1. Change the Frame Rate value (in the Output properties tab at the right side of the screen) to different values to see how your system handles it. Is 60 fps feasible?

  2. The Fluid sim object is still transformable as any normal object. Experiment with this, to see how it influences the flipbook animation.

  3. If you like, you can add a Camera to the scene and make it follow the wave of fluid in a nice way, and then render this into an animation.

"},{"location":"basics/animation/example_flipbook_animation/#mesh-sequence-cache","title":"Mesh (Sequence) Cache","text":"

Specifically for the Alembic and USD file formats, Blender supports animating a set of meshes stored in a single file. When importing such an animation cache file a Mesh Sequence Cache modifier is automatically added and the animated mesh will work as expected in the scene.
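
As an aside, importing such a cache file can also be done from a script; for Alembic this is a one-liner (the file path here is made up):

>>> bpy.ops.wm.alembic_import(filepath='/path/to/dambreak.abc')\n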

Although it is possible to convert your animated meshes to such a single-file animation cache, there are a few downsides:

  • Storing a large number of mesh animation steps in a single file will potentially lead to a very large file, plus it doubles the disk storage needed if you keep the original mesh files around.
  • Both Alembic and USD are complex binary formats, and you need some form of library support in order to easily write them.

There is also the Mesh Cache modifier, which has a similar function. Yet, this modifier only supports MDD and PC2 files.

"},{"location":"basics/animation/exercise_manual_camera_orbit/","title":"\ud83d\udcbb Orbiting an object manually","text":"

Info

The steps in this exercise were partly shown in the presentation as well, but that was mostly to illustrate keyframe animation. Here, you can redo those steps in detail and experiment with them.

To orbit an object the camera needs a circular path around the object's location.

  1. Load orbit.blend

The scene contains a single monkey (centered at the origin) and a camera. Note that the animation has a length of 100 frames, starting at frame 0.

As a first way of doing an orbit we're going to add keyframes for the camera position, as it rotates around the monkey, using the 3D cursor pivot mode.

  1. Set the Pivot Point mode to 3D cursor (bring up the Pivot Point pie menu with period ., select 3D Cursor).
  2. Make sure the 3D cursor is located in the origin by resetting its position with Shift-C. This will also change the view to fit the scene extents. In general, you can check the current position of the 3D cursor in the sidebar (N to toggle) on the View tab under 3D Cursor
  3. Select the camera. Verify that as you rotate it around the Z axis the camera indeed orbits the 3D cursor, and therefore also orbits around the monkey head.
  4. Add 4 keyframes at intervals of 25 frames and 90 degrees rotation around Z to complete a 360 degree rotation of the camera around the object over the full animation of 100 frames
  5. Play the animation with Spacebar. Is the camera orbit usable? Why not? Also check the camera view during playback.
  6. Check the graphs in the Graph Editor. See if you can improve the camera orbit, either by changing the graphs, inserting more keyframes, or both. One way to influence the shape of the curves is to edit the handles attached to each control point, or to change the keyframe interpolation for a control point with T.

Tip

If you have only a single object in front of the camera around which you want to orbit, an alternative approach is to simply rotate the object itself while keeping the camera in a fixed position. However, this might not always be feasible or preferable.

"},{"location":"basics/animation/exercise_parented_camera_orbit/","title":"\ud83d\udcbb Camera orbiting using parenting","text":"

We will try another way of doing a camera orbit. This method involves parenting the camera to an empty. Parenting creates a hierarchical relation between two objects. An empty is a special 3D object with no geometry, but which can be placed and oriented in the scene as usual. It is shown as a 3D cross-hair in the 3D view and is often used when doing parenting.

  1. Load orbit.blend.
  2. If you happened to have saved the file in the previous assignment with some keyframes set on the camera you can delete these by selecting the Camera. Then go into the Timeline editor at the bottom and select all keyframes (diamond markers) with A, press X, choose Delete Keyframes.
  3. Reset the 3D cursor to the origin with Shift-C
  4. Add an Empty to the scene: Shift-A > Empty > Arrows
  5. Select only the camera, then add the Empty to the selection by clicking Shift-LMB with the cursor over the empty (or using Ctrl-LMB in the outliner). The camera should now have a dark orange selection outline, while the empty should have a light orange outline, as the latter is the active object.
  6. Press Ctrl-P and pick Object to add a parent-child relationship

A black dotted line from the camera to the empty should now be visible in the scene. This means the camera is now parented to the empty. Any transformation you apply to the empty will get applied to the camera as well.

Bad Parenting

If you made a mistake in the parenting of step 6 then you can clear an object's parent by selecting that object, pressing Alt-P and picking Clear Parent.

  1. Verify in the outliner that the Camera object is now indeed a child of the Empty (you might have to use the little white triangles to open the necessary tree entries)

  2. Make the empty the single selected object. Enter Z rotation mode by pressing R followed by Z. Note that as you move the mouse both the empty and camera are transformed. Exit the rotation mode with Esc, leaving the Z rotation of the empty set to zero.

  3. Add key frames at the beginning and end of the animation to have the empty rotate 360 degrees around Z over the animation period

  4. Check the camera orbit, including how it looks in the camera view. Is this orbit better?

You might have noticed that, even though we now have a nice circular rotation of the camera around the object, the rotation speed actually isn't constant. If you select the empty and look at the Graph Editor you can see that the graph line representing the Z rotation value isn't straight, but looks like an S. This is due to the default interpolation mode that is used between key frames.

  1. To make the rotation speed constant make sure the empty is selected. Then in the Graph Editor select all curve points with A and press V to set the handle type, pick Vector. The curves should now have become straight lines. Check the animation to see the rotation speed has become constant.

  2. Depending on how exactly you set up the animation you might notice a hiccup at the moment the animation wraps around from frame 99 to frame 0. This happens in case you set the same visible rotation of the empty for frame 0 and 99 (e.g. 0 degrees for frame 0 and 360 degrees for frame 99). You can fix this by changing the animation length to 99 frames by setting End to 98 in the Output properties panel (the value is directly below Frame Start). Now, the animation should wrap around smoothly.
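
For reference, the keyframing and constant-speed steps above can also be done from a script. A rough sketch (assuming the empty is named \"Empty\"; here linear interpolation is used instead of vector handles, which has the same effect):

import math\nimport bpy\n\nempty = bpy.data.objects['Empty']\n\n# Key a full 360 degree Z rotation over the animation\nempty.rotation_euler = (0, 0, 0)\nempty.keyframe_insert('rotation_euler', index=2, frame=0)\nempty.rotation_euler = (0, 0, math.radians(360))\nempty.keyframe_insert('rotation_euler', index=2, frame=99)\n\n# Make the rotation speed constant\nfor fc in empty.animation_data.action.fcurves:\n    for kp in fc.keyframe_points:\n        kp.interpolation = 'LINEAR'\n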

"},{"location":"basics/animation/exercise_track_to/","title":"\ud83d\udcbb Track To constraint","text":"
  1. Load track_to.blend

This scene contains two moving cubes and a single camera.

We would like to keep the camera pointed at one of the cubes as it moves across the scene. We could animate the camera orientation ourselves, but there is an easier way using a constraint. A constraint operates on an object and can influence things like its orientation or scale based on another object's properties.

We will be using a Track To constraint here, which keeps one object pointing at another object.

  1. Select the camera
  2. Switch the Properties panel to the Object Constraints tab using the icon
  3. In the Add Object Constraint menu pick Track To under Tracking

The Track To constraint will keep the object, in this case our camera, oriented at another object all the time. The other object is called the Target object (in this case one of the cubes).

  1. In the constraint settings under Target (the top one!) pick Cube

If you had the 3D View set to view through the active camera (the view will be named Camera Perspective) one of the cubes should now be nicely centered in the view.

  1. Check that when playing the animation the cube indeed stays centered in the camera view.
  2. Orient the 3D view so you can see the camera's orientation in relation to the scene, specifically the targeted cube.

There is a blue dotted line indicating the constraint between the camera and the cube. To understand how the Track To constraint works in this case we need to understand the basic orientation of a Blender camera.

  1. Add a new Camera (Shift-A > Camera)
  2. Select it and clear its rotation with Alt-R.
  3. Zoom in on the new camera so you can see along which axis it is looking. Also note which axis is the Up direction of the camera (i.e. pointing towards the top of the view as seen by this camera).
  4. Select the original camera we wanted to animate and which has the Track To constraint.
  5. Change the 3D view so you can see the whole scene, including the selected camera. Change the Track Axis value of the Track To constraint to different values. Also experiment with different values for the Up setting. Compare these settings against what you concluded earlier about the new camera's viewing and up axes.
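
For completeness, adding the same constraint from Python could look like this sketch (the object names from the exercise scene are assumed):

# Add a Track To constraint keeping the camera aimed at the cube\n>>> cam = bpy.data.objects['Camera']\n>>> con = cam.constraints.new(type='TRACK_TO')\n>>> con.target = bpy.data.objects['Cube']\n>>> con.track_axis = 'TRACK_NEGATIVE_Z'   # A camera looks along its -Z axis\n>>> con.up_axis = 'UP_Y'\n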
"},{"location":"basics/animation/introduction/","title":"Introduction","text":"

Animation is a very broad topic and we will only cover a very small part of what is possible in Blender. We'll begin with an introduction into animation and then focus on basic keyframe animation.

"},{"location":"basics/animation/introduction/#summary-of-basic-ui-interaction-and-shortcut-keys","title":"Summary of basic UI interaction and shortcut keys","text":""},{"location":"basics/animation/introduction/#all-3d-view-timeline-graph-editor","title":"All (3D View, Timeline, Graph Editor)","text":"
  • Shift-Left for moving time to the first frame in the animation, Shift-Right for the last frame
  • Left key for 1 frame back, Right for 1 forward
  • Up key for 1 keyframe forward, Down for 1 back
  • Spacebar for toggling animation playback
"},{"location":"basics/animation/introduction/#3d-view","title":"3D view","text":"
  • I in the 3D view for inserting/updating a keyframe for the current frame (pick the type)
  • Alt-I in the 3D view for deleting the keyframe data for the current frame
"},{"location":"basics/animation/introduction/#timeline","title":"Timeline","text":"
  • Changing current frame (either click or drag):
    • LMB on the row of frame numbers at the top
    • OR Shift-RMB within the full area
  • Change zoom with mouse Wheel, zoom extent with Home
  • LMB click or LMB + drag for selecting keyframes (the yellow diamonds)
  • The usual shortcuts for editing keyframes, e.g. A for selecting all keyframes, X for deleting all selected keyframes, G for grabbing and moving, etc
"},{"location":"basics/animation/introduction/#graph-editor","title":"Graph editor","text":"
  • Change current frame with Shift-RMB
  • Change zoom with Ctrl-MMB drag, or mouse Wheel
  • Translate with Shift-MMB (same as in 3D view)
  • Zoom graph extent with Home (same as in 3D view)
  • The usual shortcuts for editing curve control points, e.g. A for selecting all, X for deleting all selected points, G for grabbing and moving, etc

Tip

If one or more curves in the graph editor don't seem to be editable (and they show as dotted lines) then you might have accidentally disabled editing. To fix: with the mouse over the graph editor select all curves with A and press TAB to toggle editability.

"},{"location":"basics/animation/introduction/#further-reading","title":"Further reading","text":"
  • This section in the Blender manual contains many more details on keyframing, particularly with respect to the curves in the Graph Editor.
  • The proper definitions of the colors of keyframed values are described here
"},{"location":"basics/animation/tradeoffs_settings_output/","title":"Trade-offs, settings and output","text":"

Here, we look into trade-offs that you can make in terms of chosen frame rate, animation length, quality, etc.

Secondly, we will look in detail into the different settings available for an animation, including the type of output (images or video file) and strategies to handle long render times. We also describe how to do command-line rendering.

"},{"location":"basics/animation/tradeoffs_settings_output/#easy-command-line-rendering","title":"Easy command-line rendering","text":"

If you have set up the animation and its settings (e.g. frame rate, start/end frame, output name, etc) as you like in the Blender file then rendering from the command-line usually doesn't involve anything more than running this command:

blender -b file.blend -a

The -b option makes sure Blender renders in the background without opening a window. You only need to add extra options if you want to override values set in the Blender file.
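
For example, a frame range and output path set in the file could be overridden like this (note that Blender processes its arguments in order, so any overrides need to come before the final -a):

blender -b file.blend -s 1 -e 100 -o //frames/frame_#### -a

Here -s and -e override the start and end frame, and -o sets the output path (// means relative to the .blend file, and #### is replaced by the zero-padded frame number).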

"},{"location":"basics/blender_fundamentals/1_assignment_interaction_selections/","title":"\ud83d\udcbb Interaction, selections, outliner","text":"

Here it's time for a first exercise! Follow the steps given below, which will let you work with Blender yourself and get to know the different methods of 3D scene interaction.

Tip

Summary of 3D view navigation:

  • MMB = rotate view
  • Scrollwheel or Ctrl+MMB = zoom view
  • Shift+MMB = translate view
  • Home = zoom out to show all objects

See the cheat sheet to refresh your memory w.r.t. other view interaction and shortcut keys and mouse actions.

"},{"location":"basics/blender_fundamentals/1_assignment_interaction_selections/#viewpoints","title":"Viewpoints","text":"
  1. Load motorbike.blend

    Hint

    This file will be in the data share under data/basics/blender_basics

  2. In one of the two 3D views (your choice) manipulate the view to the following viewpoints:

    • Alongside the motorbike, amongst the streamlines, looking in the direction of travel.
    • From the rider's point of view, just in front of the helmet, looking ahead.
    • An up-close point of view clearly showing the two streamlines that cross near the rider's helmet on his/her right side, one going under the arm, the other going over it.
  3. There is a single streamline that goes between the two rods of the steering column. Does that streamline terminate on the bike or does it continue past the bike? Try to get really close with the view so you can see where the streamline goes.

"},{"location":"basics/blender_fundamentals/1_assignment_interaction_selections/#individual-selection","title":"Individual selection","text":"
  1. Select all objects using the A key. As you've seen earlier this will introduce orange outlines surrounding selected objects.
  2. Check the outliner, specifically the color of the object names, to see how the current selection is represented.
  3. In the 3D view deselect only the motorbike using Shift-LMB with the mouse cursor at the appropriate position
  4. Again check the outliner status, do you notice a difference in the name for the motorbike object?
  5. Add the motorbike back to the selection by using Shift-LMB over the bike in the 3D view.
  6. Check the orange outline color of the motorbike (or the corresponding entry in the outliner) to verify that it is now the active object. It should be the only object with a light orange color.
  7. Use Shift-LMB with the mouse over the \"floor and walls\" object. What changed in the selection? Specifically, what is now the active object?
  8. Once more use Shift-LMB on the \"floor and walls\" object. What changed this time in the selection status of the object?
"},{"location":"basics/blender_fundamentals/1_assignment_interaction_selections/#box-selection","title":"Box selection","text":"
  1. Clear the selection with Alt-A (or double click the A key).
  2. Use box select (LMB drag) to select all objects in the scene.
  3. Clear the selection with the Alt-A key.
  4. Now try to select ONLY the motorbike using box select. Check the outliner to make sure you're selecting just one object. You can also check the status line at the bottom of the Blender window, specifically the part that reads Objects: #/#, meaning selected / total.
"},{"location":"basics/blender_fundamentals/1_assignment_interaction_selections/#outliner-selection","title":"Outliner selection","text":"
  1. Make sure no objects are currently selected.
  2. Test with following actions in the outliner to get a good idea of what actions it supports and how this influences the visual state of the items in the outliner tree:

    • Left-clicking on an item (possibly holding the Shift or Ctrl key)
    • Using the keys A and Alt-A (note how these are similar in functionality to what they do in the 3D view, but in the context of the outliner items)
    • Right-clicking on an item and choosing Select or Deselect
  3. How does the blue highlight of a line in the outliner relate to the selection status of an object in the 3D view?

"},{"location":"basics/blender_fundamentals/2_assignment_transformations/","title":"\ud83d\udcbb Transformations","text":"

Hint

  • You can clear an object's translation to all zero with Alt-G
  • You can clear an object's rotation to all zero with Alt-R
  • You can clear an object's scale back to 1 with Alt-S
  • You can undo a transformation with Ctrl-Z (or reload the file to reset completely)
  • See section Object Actions of the cheat sheet for more shortcut keys
"},{"location":"basics/blender_fundamentals/2_assignment_transformations/#basic-transformations","title":"Basic transformations","text":"
  1. Load axes.blend
  2. The Axes object in the scene is a 3D object just like any other. Note that its geometry shows the object's local axes.

  3. Try translating, rotating and scaling the axes object with the different methods shown:

    • The transform widgets (accessible from the toolbox on the upper-left)
    • Using the G, R or S keys
    • Entering values in the properties region in the upper-right of the view, under Transform
  4. Activate one of the transform modes (e.g. G for translation) and experiment with limiting a transformation to a single axis with the X, Y or Z keys.

  5. Activate one of the transform modes (e.g. G for translation) and experiment with limiting a transformation to a plane with Shift-X, Shift-Y or Shift-Z.

  6. Reload the axes.blend file to get back the original scene.

  7. Rotate the axes 30 degrees around (global) X.
  8. Now rotate the axes 45 degrees around the local Z axis.
"},{"location":"basics/blender_fundamentals/2_assignment_transformations/#pivot-point-modes","title":"Pivot point modes","text":"
  1. Load transformations.blend
  2. Select the cone, monkey, torus and sphere

  3. Set pivot mode to Median Point (using the Pivot Point pie menu, which opens with the . key, i.e. period), if it isn't already.

  4. Press S to start scaling, then move the mouse to scale the objects apart

  5. Notice that as you scale up the objects increase in size and move apart, but only the torus' center point (the orange dot) moves below the plane. Why?

  6. Cancel the scale operation with Esc or a RMB click

  7. Enable the Only Locations option in the Pivot Point pie menu. When this is enabled it will cause any transformation to be applied to the locations of the objects (shown as orange circles), instead of to the objects themselves.

  8. Repeat the scaling of the four objects. Do you notice how the objects now transform differently?

  9. Change the pivot mode to Individual Origins and disable the Only Locations option. Do the scaling again, notice the difference.

  10. Enable the Only Locations setting. When you try to rotate the objects around Z nothing happens. Why not?

  11. Change the pivot mode to Median Point, leave Only Locations enabled.

  12. Rotate the objects around the Z axis.

  13. Now disable the Only Locations option and rotate the objects once again around the Z axis. Do you notice the subtle difference in transformation?

  14. Experiment some more with different selections of objects and the different Pivot Point modes, until you feel you get the hang of it.

"},{"location":"basics/blender_fundamentals/2_assignment_transformations/#rubiks-cube","title":"Rubik's cube","text":"

Hint

  • You can add a cube object with Shift-A > Mesh > Cube
  • You can duplicate selected objects with Shift-D. This will also activate grab mode after the duplication.
  1. Start with an empty scene (File > New > General)

  2. Model a Rubik's cube: 3x3x3 Cube objects (minus the center cube) on a rectangular grid. Try to get the spacing between the Cube objects the same in all directions.

  3. Now select one face of the Rubik's cube (i.e. 3x3 cubes) and rotate it 30 degrees just like the real thing.

"},{"location":"basics/blender_fundamentals/2_assignment_transformations/#bonus-2001-a-space-odyssey","title":"Bonus: 2001 - A Space Odyssey","text":"
  1. Start with an empty scene (File > New > General)

  2. Remember the scene from 2001: A Space Odyssey, with our primate ancestors looking up at the monolith? Recreate that scene :)

  • Add 4 or more monkey heads, surrounding a thin narrow box for the monolith
  • Make the monkeys look up at the monolith
  • If you want to go crazy add bodies to the monkeys using some scaled spheres
  • Add a sun object + corresponding light somewhere in the sky.
"},{"location":"basics/blender_fundamentals/3_assignment_camera_and_views/","title":"\ud83d\udcbb Cameras and views","text":"
  1. Open cameras.blend

    This scene contains a bunny object, a sun light and two cameras: \"Close-up\" near the bunny's head and \"Overview\" further away.

  2. Select the Overview camera object, by either left-clicking on it in the 3D view or in the Outliner.

  3. Make this camera the active camera with either the outliner (click on the green camera icon right of the name), View > Cameras > Set Active Object as Camera or use Ctrl-Numpad0. Notice that the 3D view changes to the camera's viewpoint.
  4. Rotate the 3D view with MMB to exit the camera view. You are now back in the normal 3D view interaction.
  5. Select the Close-up camera
  6. Switch to camera view by bringing up the View pie menu with ` (backtick, usually below the ~), then pick View Camera.
  7. What camera view are you now seeing, Close-up or Overview?
  8. So one thing to remember is that selecting a camera does not make it the \"active camera\" (even though it can be the active object, confusingly).
  9. Change the active camera to Close-up
  10. Rotate away from the camera view to the normal 3D view
  11. For switching back to the active camera view there's two more methods apart from the pie menu, try them:
    • Using the View menu at the top of the 3D view area: View > Cameras > Active Camera
    • Press Numpad0
  12. Experiment with the different camera controls until you find the ones you're comfortable with
  13. Rotate away from the camera view to a 3D view that shows both cameras.
  14. In the Scene properties tab on the right-hand side of the window (and not the similar icon in the top bar left of Scene) there's a drop-down menu Camera which lists the active camera. Change the active camera using that selection box. Apart from the name listed under the Scene properties do you notice how you can identify the active camera in the 3D view? Hint: it's subtle and unrelated to the yellow/orange color used for highlighting selected objects.
"},{"location":"basics/blender_fundamentals/3_assignment_camera_and_views/#camera-transformation","title":"Camera transformation","text":"
  1. Make sure the Overview camera object is the only selected object
  2. Make the Overview camera the active camera and then switch to its view
  3. In the camera view use regular object transformations to point the camera at the rabbit's tail. To refresh, in camera view with only the camera selected:
    • Press G to translate, then move the mouse to change the view
    • While still in move mode press MMB (or Z twice) to enter \"truck\" mode: this moves the camera forward/backward along the view axis. Pressing X twice will allow moving the camera sideways.
    • Press R to rotate around the view axis
    • In rotate mode press MMB to \"look around\"
    • LMB to confirm, Esc to cancel
  4. Another useful feature is when you like the current viewpoint in the 3D view and want to match the active camera to this viewpoint. For this you can use Ctrl-Alt-Numpad0 (or with View > Align View > Align Active Camera To View in the header of the 3D view)
"},{"location":"basics/blender_fundamentals/3_assignment_camera_and_views/#quad-view","title":"Quad view","text":"
  1. Switch the 3D View to the so-called Quad View with Ctrl-Alt-Q. You now have orthogonal 2D views along the three axes (Top, Front and Right Orthographic), plus a 3D view (Camera Perspective). Note: the three axis views can only be translated and zoomed, not rotated
  2. Change the upper-right quad to a camera view, if it isn't already
  3. Press N to show the sidebar on the right
  4. On the View tab, under View Lock there's a Lock option called Camera to View. Enable that option. You should now see a dotted red outline around the orange camera rectangle in the Camera Perspective view.
  5. Hide the sidebar again (N), leaving the Lock option enabled
  6. Change the view in the Camera Perspective view using the regular 3D view mouse interaction (MMB to rotate, Shift-MMB to translate, Ctrl-MMB to move forward/backward). Observe what happens to the active camera in the other quadrants when you alter the view.
  7. Use the sidebar again to disable the Lock Camera to View option
"},{"location":"basics/blender_fundamentals/3_assignment_camera_and_views/#fly-mode","title":"Fly mode","text":"
  1. Add a camera to the scene (Shift-A > Camera). It will be placed at the position of the 3D cursor (the small red-white striped circle and axes).
  2. Change the upper-right view to this camera
  3. Activate fly mode with Shift-` (backtick). Use the ASDWXQE keys to move this camera close to the two bunny ears and look between them. You can change the fly speed with the mouse Wheel. In fly mode you can confirm the current view with LMB or press Enter. Press Esc to cancel and go back to the original view.
"},{"location":"basics/blender_fundamentals/avoiding_data_loss/","title":"\u26a0\ufe0f Avoiding data loss","text":"

There are some things to be aware of when working with Blender that behave a little differently from other programs, or from general expectations, and that can potentially cause you to lose work.

"},{"location":"basics/blender_fundamentals/avoiding_data_loss/#the-file-overwrite-prompt-is-very-subtle","title":"The file overwrite prompt is very subtle","text":"

Suppose we have saved our work to a file scene.blend. We then make some more changes in Blender to create a second version of our scene and save this as scene2.blend. Finally, we make a third version and intend to save this as scene3.blend, but we forget to change the file name in the save dialog and it stays at the current scene2.blend. The Blender way of warning you that you are about to overwrite an existing file is really subtle:

Notice the red color behind the file name? That's the signal that the file name you entered is the same as an existing file in the current directory. If we change the file name to something that doesn't exist yet the color becomes gray again:

The File > Save As workflow (and similar for related file dialogs) is a somewhat double-edged sword:

  • If you're aware of the above signal and intend to quickly overwrite the current file you can simply press Enter once in the dialog, and the file will be saved without any \"Are you sure you want to overwrite?\" prompt being shown. So in this respect the UI stays out of your way and avoids an extra confirmation dialog.
  • But if you miss the red prompt or are unaware of its meaning then it's easy to accidentally overwrite existing work.
"},{"location":"basics/blender_fundamentals/avoiding_data_loss/#easy-file-versions","title":"Easy file versions","text":"

A nice way to save successive versions of a file is to use the + button right of the file name, as shown in the pictures above. Using the + (and -) you can easily change the version number at the end of a file name, e.g. scene2.blend to scene3.blend. The red overwrite indicator will also update depending on the existence of the chosen file name.

Warning

Using the + button merely increments the number in the file name. It does not guarantee that the file does not exist yet (i.e. no check is made with what's on disk).

"},{"location":"basics/blender_fundamentals/avoiding_data_loss/#unused-data-blocks-in-the-scene-are-not-saved","title":"Unused data-blocks in the scene are not saved","text":"

Suppose you have a 3D scene and have created a material A that you use on some object. You then create a material B and assign it to the same object, causing material A to now be unused in the scene. If you save your scene to file at this point material A will not get saved to the file, as it is not referenced by anything in the scene. This automatic \"garbage collection\" feature of Blender is somewhat controversial, and it is definitely good to be aware of this behaviour.

For most scene elements used in the Basics part of this course garbage-collection-on-save does not really cause concern, except for the case of Materials (as described in the example above). For materials, and other scene elements, you can see if they are unused by checking for a 0 next to their name when they appear in a list:

The quick fix in case you have a material that is currently not used in the scene, but that you definitely want to have saved to file, is to use the \"Fake User\" option by clicking the shield icon (be sure to enable this option for the right material!):

You can verify the material now has a fake user as intended by checking for an F next to its name:

Note that you can use the same Fake User option for some other types of scene elements as well.

We have a more detailed discussion of the garbage collection system in a section of the Python scripting reference. The behaviour described relates to the data-block system that Blender uses internally; for normal use the description above should be sufficient, but the system can also be influenced from Python.

"},{"location":"basics/blender_fundamentals/avoiding_data_loss/#recovering-lost-work","title":"Recovering lost work","text":"

Murphy's Law usually strikes when you least expect it. Fortunately, there are several layers of defense in case something goes unexpectedly wrong when saving files, or in case Blender crashes. It depends on the situation you're trying to recover from which one of the options below provides the best results, if applicable.

Please check what each of these features does, to make sure you don't accidentally make things worse by using one of the recover options within Blender in the wrong way.

"},{"location":"basics/blender_fundamentals/avoiding_data_loss/#those-blend1-files","title":"Those .blend1 files?","text":"

You might notice that when you overwrite an existing file, say file.blend, another file called file.blend1 will now have appeared next to it in the same directory. This is Blender's method for automatically keeping around the previous version of the file you overwrote: it first moves the existing file.blend to file.blend1, and only then saves the new file.blend.

So if you accidentally overwrite a file you can still get to the previous version (the .blend1 file), as long as you haven't overwritten more than once.

More than 1 previous version

You can actually have multiple previous versions kept around if you like. The preference setting for this is Save & Load > Save Versions, which defaults to 1. If you would increase it then files with extensions .blend2, .blend3 and so on would be kept around.

"},{"location":"basics/blender_fundamentals/avoiding_data_loss/#auto-save","title":"Auto save","text":"

By default, Blender will automatically save your current scene to a file in a temporary directory every few minutes (2 minutes by default). The settings that control this are Save & Load > Auto Save and Save & Load > Auto Save > Timer (Minutes).

This auto-save file is stored in your system's temporary directory, and uses the process ID of Blender in the file name, as well as the string _autosave. Here is an example from a Linux system, where /tmp is used and Blender's process ID is 66597:

melis@juggle 22:13:/tmp$ ps aux | grep blender\nmelis      66597  1.2  5.7 1838680 463920 ?      Sl   21:54   0:14 blender\n\nmelis@juggle 22:13:/tmp$ ls 66597*\n66597_autosave.blend\n

See this section of the Blender manual on recovering a session from an auto-save file (you can also copy or load the file manually, of course; there is nothing special about it).

Edit mode data not saved

If you happen to be in edit (or sculpt) mode at the time Blender does an auto-save, then the current updated state of the mesh will not get saved. This is a limitation of the auto-save feature.

"},{"location":"basics/blender_fundamentals/avoiding_data_loss/#last-session-accidental-quit-without-saving","title":"Last session (accidental quit without saving)","text":"

Whenever Blender quits normally (i.e. not a crash) it will save the current session state to a file called quit.blend in your system's temporary directory. You can easily load this file with the File > Recover > Last Session option (or copy it to a different location and load it as any Blender file).

One of the cases where this feature might come in handy is if you quit Blender with unsaved changes, but accidentally click the Don't Save button in the Save changes before closing? dialog. The quit.blend file in this case will contain those unsaved changes. But be sure to make a copy of it before quitting Blender again, as quitting will overwrite it.

Info

Note that there currently is no option to disable this Save-on-Quit feature. So for large scenes this will incur a (usually short) delay when exiting.

"},{"location":"basics/blender_fundamentals/avoiding_data_loss/#blender-crash","title":"Blender crash","text":"

In case Blender crashes it usually does not manage to save the current scene to a recovery file. So in this case you will hopefully be able to recover any lost work using the data saved by the features described above.

"},{"location":"basics/blender_fundamentals/cameras_and_views/","title":"Cameras and views","text":"

This section shows cameras and how to work with them. In the exercise after this section you get to try a lot of the operations shown, so following along with the video isn't strictly needed. But if you do want to, the file used is data/blender_basics/cameras.blend.

"},{"location":"basics/blender_fundamentals/first_steps/","title":"First steps in the user interface","text":"

Hint

A lot of new concepts and UI elements will be introduced in the upcoming videos. It probably works best to watch video(s) limited to a certain topic, try the operations shown and corresponding exercise(s) in Blender yourself, before moving on to the next topic.

"},{"location":"basics/blender_fundamentals/first_steps/#starting-blender","title":"Starting Blender","text":"

In general, starting Blender is no different from starting any other GUI application.

However, warning and error messages will be printed to the console window. How the console window is available depends on the operating system you're working on:

  • (All operating systems) If you start Blender from a terminal window, e.g. xterm or Windows Command window, then Blender output will be printed in that window
  • (Windows only) If Blender was started from the Start menu, or using a desktop icon, then you can toggle the associated console window using the Window > Toggle System Console option

See this section in the Blender manual for more details on starting Blender from the command line and details specific for each operating system.

"},{"location":"basics/blender_fundamentals/first_steps/#user-interface-fundamentals","title":"User interface fundamentals","text":"

We will go over fundamentals of the user interface in terms of interaction and areas, specifically the 3D view and Outliner. We also touch on a number of often-performed operations, such as rendering an image and changing the set of selected objects. We also look a bit closer into keyboard shortcuts and menus.

It's probably best to follow along in Blender on your own system while viewing the video. The files used in the video can be found under data/blender_basics.

Slow 3D interaction

If the interaction in the 3D view isn't smooth (as seen in the video) on your PC/laptop something might be wrong in the setup of your system. Please contact us if this appears to be the case.

Accidental 'Edit mode'

If the 3D view (or some of the other areas) suddenly appears to behave strangely, or you now see your mesh with all kinds of dots or lines, then you might have accidentally entered the so-called \"Edit Mode\" or any of the other modes available (Tab and Ctrl-Tab are used for this). Check the menu in the upper left of the 3D view, which should read Object Mode:

In this course we will use only Object Mode (and briefly use Vertex Paint mode in one of the exercises). You can use the drop-down menu shown above (or the Ctrl-Tab menu in the 3D view) and pick Object Mode to get back to the correct mode.

Accidental workspace switch

Another thing that might happen is that you accidentally click one of the tabs at the top of the screen, which then completely changes the layout of your user interface. These tabs are used to switch between workspaces, where each workspace allows a different layout to focus on a certain task (e.g. 3D modeling, versus shader editing, versus animation). The default workspace is Layout and you might have to switch back to that one:

"},{"location":"basics/blender_fundamentals/first_steps/#some-user-interface-tips","title":"Some user interface tips","text":"
  • To bring up the relevant section of the official Blender manual for (almost) any user interface element, e.g. button, setting or menu, right-click on that element and click Online Manual. This will start a web browser showing the relevant manual page.
  • You can hover with the mouse over pretty much any UI element to get a tooltip with a short description, including shortcut key(s) if available.
  • The keyboard and mouse shortcuts for object selection, editing, view interaction, etc work mostly the same in all Blender editors. So G to grab, X to delete, LMB to select, Shift-MMB to translate, Wheel to zoom, etc.
  • The mouse controls the current area in focus and any keyboard actions are applied in the active area first.
  • You can maximize a user interface area by pressing Ctrl+Spacebar with the mouse in the area you want to maximize. This can sometimes be useful to temporarily get a larger area to work with. You can use the same shortcut to toggle the area back to its original size, or use the Back to Previous button at the top of the screen.
"},{"location":"basics/blender_fundamentals/first_steps/#changes-to-default-preference-settings","title":"Changes to default preference settings","text":"

Here we suggest some preferences settings to change from their default value.

Optional

It's not required to change these defaults, but we find they help us in working with Blender, and so might be useful for you as well

Under Edit > Preferences, in the Interface tab:

  • Under Display disable Splash Screen. This will save you a click to get rid of the splash screen each time you start Blender. If you ever want to look at the splash again you can click the Blender logo icon at the top left of the window and pick Splash Screen.
  • Under Editors > Status Bar enable Scene Statistics, System Memory and Video Memory. This will show extra scene statistics in the status bar. Another way to do this is to right-click on the status bar and enable the same options.
  • Under Editors > Temporary Editors set Render In to Image Editor. This will cause the rendered image to be displayed as a replacement of the 3D view, instead of in a separate window. After rendering press Escape to get back the 3D view that was replaced by the rendered output.
  • In case you find that Blender's user interface elements, such as buttons or menu text, are too small you can scale up the UI with a single setting under Display > Resolution Scale. If you change the value you can see the changes in the UI immediately.

Orbit around selection

Another option which you might consider enabling is Orbit Around Selection. By default this is turned off and in that mode any rotation of the 3D viewport will be around the center of the view, which might cause selected objects to go out of view. When the option is turned on viewport rotation will be around the selected object(s), always keeping them in view. You can find this option on the Navigation tab under Orbit & Pan.

"},{"location":"basics/blender_fundamentals/introduction/","title":"Introduction","text":"

This first part of the course is meant to introduce you to Blender, its user interface and basic features. We'll start with a brief look into some of the background of Blender and challenges in learning it.

"},{"location":"basics/blender_fundamentals/objects_3d_cursor_undo/","title":"Objects, 3D cursor, Undo","text":"

A short section on how to add, duplicate or delete objects. What the 3D cursor is and what role it plays, plus the undo system.

"},{"location":"basics/blender_fundamentals/scene_hierarchy/","title":"Scene hierarchy","text":"

We briefly look at the way a scene is organized and how this interacts with the properties panel.

The above actually isn't the full story, as we only briefly mention collections. In the official Blender manual you can find more detail on collections here, in case you want to know more.

"},{"location":"basics/blender_fundamentals/transformations/","title":"Transformations","text":"

This might be a bit more of a technical subject and deals with the way 3D objects can be transformed in a scene. The transformations exercise will allow you to try most of these operations yourself. But if you want to follow along with the video, the file used is data/blender_basics/three_objects.blend.

"},{"location":"basics/blender_fundamentals/transformations/#summary-of-shortcut-keys","title":"Summary of shortcut keys","text":"
  • G to enter translation mode (\"grab\")
  • S to enter scale mode
  • R to enter rotation mode
  • LMB or Enter to confirm the current transformation, Escape to cancel while still one of the transformation modes
  • While in a transformation mode press X, Y or Z to constrain the transformation to the X, Y or Z axis, respectively.
  • While in a transformation mode press Shift+X, Shift+Y or Shift+Z to constrain the transformation to the plane perpendicular to the X, Y or Z axis, respectively.
"},{"location":"basics/blender_fundamentals/ui/","title":"User interface configuration","text":"

A short section on how the Blender user interface system works and how to configure it to your liking. This is useful to know as the current UI layout is saved in a Blender file, so files you get from some other source might look very different.

"},{"location":"basics/importing_data/exercise_vertex_colors/","title":"\ud83d\udcbb Vertex colors","text":"

This exercise uses a file exported from the ParaView scientific visualization package, and uses some of the workflow needed to get it into Blender.

X3D Importer

Check if you have a menu option to import X3D format. For this, go to File -> Import and check if there is an entry X3D Extensible 3D (.x3d/.wrl).

If you do NOT have the X3D import option then perform the following steps to enable the X3D add-on (otherwise continue to step 3):

  • Open the preferences window with Edit -> Preferences
  • Switch to the Add-ons tab
  • In the search field (with the little spyglass) enter \"X3D\", the list should get reduced to just one entry
  • Enable the checkbox left of \"Import-Export: Web3D X3D/VRML2 format\"
  • Close the preferences window (it saves the settings automatically)
  • Under File -> Import there should now be a new entry X3D Extensible 3D (.x3d/.wrl)
  1. Importing data always adds to the current scene. So start with an empty scene, i.e. delete all objects.

  2. Make sure Blender is set to use Cycles as the renderer. For this, switch to the Render tab in the properties area. Check the Render Engine drop-down, it should be set to Cycles.

  3. Import file glyphs.x3d using File > Import > X3D Extensible 3D. In the importer settings (on the right side of the window when selecting the file to import) use Forward: Y Forward, Up: Z Up.

  4. This X3D file holds a scene exported from ParaView. Check out the objects in the scene to get some idea of what it contains.

  5. Delete all the lights in the scene to clear everything up a bit. Add a single sun light in their place.

"},{"location":"basics/importing_data/exercise_vertex_colors/#inspecting-the-vertex-colors","title":"Inspecting the vertex colors","text":"

This 3D model has so-called \"vertex colors\". This means that each vertex of the geometry has an associated RGB color, which is a common way to show data values in a (scientific) visualization.

There are a few ways to inspect if, and what, vertex colors a model has. First, there is the so-called Vertex Paint mode. In this mode vertex colors are shown when they are available and can even be edited (\"painted\").

To enable Vertex Paint mode:

  1. Select the 3D arrows in the scene (as the only single selected object)
  2. Open the Mode pie menu with Ctrl-TAB and switch to Vertex Paint. An alternative is to use the menu showing Object Mode in the upper-left of the 3D view header and select Vertex Paint there.
  3. The 3D View should now show the arrow geometry colored by its vertex colors. The colors shown are velocity values from a computational flow simulation, using the well-known rainbow color scale (low-to-high value range: blue \u2192 green \u2192 yellow \u2192 orange \u2192 red)
"},{"location":"basics/importing_data/exercise_vertex_colors/#altering-vertex-colors","title":"Altering vertex colors","text":"

You might have noticed two things have changed in the interface: 1) the cursor is now a circle, and 2) the tool shelf on the left now shows color operations (paint brush = Draw, drop = Blur, ...)

As this is Vertex Paint mode you can actually alter the vertex colors. This works quite similar to a normal paint program, like Photoshop or the GIMP, but in 3D. Although it may not make much sense to change colors that are based on simulation output (like these CFD results) it can still be interesting to clean up or highlight vertex-colored geometry in certain situations.

  1. Experiment with vertex painting: move the cursor over part of the arrow geometry, press and hold LMB and move the mouse. See what happens.
  2. Switch to the Active Tool and Workspace settings tab in the properties area on the right-hand side of the window
  3. You can change the color you're painting with using the colored box directly to the right of Draw in the bar at the top of the 3D view area. Click the color to bring up the color chooser. You can also change the radius and strength settings to influence the vertex painting.
  4. Change back to Object Mode using the Ctrl-TAB mode menu when you're done playing around. Note that the arrows no longer show the vertex colors.
"},{"location":"basics/importing_data/exercise_vertex_colors/#rendering","title":"Rendering","text":"

The second way to use vertex colors is to apply them during rendering.

  1. If you've screwed up the vertex colors really badly in the previous steps you might want to reimport the model...
  2. Make the 3D arrows in the scene the single selected object
  3. Switch to the Object Data tab in the properties
  4. Check that there is an entry \"Col\" in the list under Color Attributes. A model can have multiple sets of vertex colors, but this file has only one set called \"Col\", which has domain Face Corner and type Byte Color.

Now we will set up a material using the vertex colors stored in the \"Col\" layer.

  1. Go to the Material tab
  2. Select the material called \"Material\" in the drop-down list left of the New button. This sets the (grey) material \"Material\" on the arrow geometry.
  3. Press F12 (or use interactive render) to get a rendered view of the current scene.

You'll notice that all the geometry is grey/white, i.e. no vertex colors are used. We'll now alter the material to use vertex colors.

  1. In the settings of the material there is a field called \"Base Color\" with a white area right of it. This setting controls the color of the geometry.
  2. Click the button left of the color area (it has a small yellow circle in it)
  3. Pick Attribute from the left-most column labeled Input. This specifies that the material color should be based on an attribute value.
  4. Base Color is now set to Attribute | Color. Directly below the entry there is a Name field. Enter \"Col\" here, leave Type set to Geometry. This specifies that the attribute to use is called \"Col\" and comes from the mesh geometry (i.e. our vertex colors).
  5. Now render the scene again
  6. The rendered image should now be showing the arrow geometry colored by vertex colors
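
In case you ever need to script this material setup, a rough sketch (assuming a material that still contains its default Principled BSDF node):

import bpy\n\nmat = bpy.data.materials['Material']\nnodes = mat.node_tree.nodes\nlinks = mat.node_tree.links\n\n# Add an Attribute node reading the 'Col' vertex colors and\n# connect its color output to the Principled BSDF's Base Color input\nattr = nodes.new('ShaderNodeAttribute')\nattr.attribute_name = 'Col'\nlinks.new(attr.outputs['Color'], nodes['Principled BSDF'].inputs['Base Color'])\n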

"},{"location":"basics/importing_data/exercise_your_data/","title":"\ud83d\udcbb Your own data","text":"

Info

If you do not have data that you want to import in Blender then you can skip this part.

  1. Think about your own data

    • What is the goal for importing the data?
    • What visual representation(s) of the data do you aim for?
    • What scene object types do you need for this?
    • What approach would you use to get it into Blender?
    • Challenges?
    • Problems?
  2. Try to import your own data, or a representative subset, using your chosen approach.

"},{"location":"basics/importing_data/introduction/","title":"Introduction","text":"

This chapter will present a lot of information on getting data into Blender through importing. It will describe the overall approach, available file formats and their relative strengths/weaknesses and look closer into handling specific types of data, specifically point data and volumetric data.

Most of this chapter consists of the video presentation below, which covers quite a few subjects. After you are done viewing the video there is a first exercise on vertex colors, which uses data we provide, while the second exercise is more of a guideline for when you want to import your own data.

As mentioned in the presentation the PDF slides for this chapter contain some more reference material on getting data from ParaView, VisIt and VTK.

Point cloud primitive (3.1+)

As shown in the video, one way to render point data is to use instancing for placing a simple primitive like a sphere at each point location. Working with such instanced geometry is somewhat limited, as it introduces a hit on performance and memory usage, both for interactive work in the user interface, as well as rendering in Cycles.

Starting with Blender 3.1, Cycles has dedicated support for rendering large numbers (millions) of points as spheres directly. However, as of 3.1 there is no way to directly create a point cloud primitive by importing a file; the only alternative is using Geometry Nodes to generate a point cloud primitive from a vertex-only mesh. But Geometry Nodes are not a topic in this Basics part of the course.
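
As an aside, such a vertex-only mesh can be created from raw point data with the Python API (also an Advanced course topic). A minimal sketch, with placeholder point coordinates standing in for your real data:

```python
# Minimal sketch: turn raw point data into a vertex-only mesh object.
import bpy

points = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]  # placeholder data

mesh = bpy.data.meshes.new("points")
mesh.from_pydata(points, [], [])     # vertices only, no edges or faces
mesh.update()

obj = bpy.data.objects.new("points", mesh)
bpy.context.scene.collection.objects.link(obj)
```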

Availability of importers/exporters (Linux distributions)

When using the official Blender binaries from https://www.blender.org all supported importers and exporters will be included.

But especially when using a Linux distribution's Blender package some features might not be available, usually due to libraries not being enabled when the package was built. For example, currently (May 2022) on Arch Linux the USD import/export support is not available in the Arch Blender package.

If you run into such issues, please download and use the official binaries instead.

"},{"location":"basics/rendering_lighting_materials/composition/","title":"Composition","text":"

Below you'll find a supplementary video on image composition. It is supplementary in the sense that you won't need it to do the exercises, but it might help you with your future Blender renders. This video will give you some practical guidelines that could give your final renders the extra edge they need to stand out:

"},{"location":"basics/rendering_lighting_materials/gpu_rendering/","title":"GPU-based rendering","text":"

In general using Cycles with GPU-based rendering is a lot faster than rendering on a multi-core CPU. For example, here are render times on one of our workstations for the scene with the 3 monkey heads used in the video from the last chapter (showing camera settings and depth-of-field):

Type | Device | Render time*
---- | ------ | ------------
CPU | Intel Core i5 @ 3.20 GHz | 50.16 s
GPU | NVIDIA GTX970 | 6.59 s

*960x540 pixels, 128 samples per pixel

On this particular scene, with these settings and on this hardware using GPU rendering is roughly 7.6x faster! However, only by making a comparison on your particular system can you really find out if GPU rendering is a good option for you (for example, you might not have a very powerful GPU in your laptop or workstation).

Apart from performance there are some other aspects to consider with GPU rendering:

  • When doing a GPU render your desktop environment might become less responsive, although this has become less of a problem with recent Blender versions
  • A GPU usually has less memory available, which might cause problems with really large scenes

In case you want to try enabling GPU rendering go to the Preferences window (Edit > Preferences) and then the System tab. The settings available under Cycles Render Devices are somewhat dependent on the hardware in your system but should look a little like this:

GPU rendering in Blender has slightly different support depending on whether you're on Windows, Linux or macOS. Below, we summarize the options you can encounter. The most up-to-date official reference for this information is this page from the Blender manual.

"},{"location":"basics/rendering_lighting_materials/gpu_rendering/#windows-linux","title":"Windows, Linux","text":"

By default None will be active, meaning no GPU acceleration is used for rendering and it all happens on the CPU.

In general, on a PC/Laptop with an NVIDIA GPU the CUDA option is available and to be preferred, although OptiX might work well as an alternative (but will only be available on more recent NVIDIA GPUs).

On Windows systems with an AMD GPU the option HIP might be available and is then definitely worth a try.
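
For reference, the same choice can be made from Python, which is handy when rendering on a machine without a display. A hedged sketch assuming an NVIDIA GPU with the CUDA backend ('OPTIX', 'HIP' or 'METAL' are the other possible values, depending on platform and hardware):

```python
# Minimal sketch: enable Cycles GPU rendering without the Preferences window.
import bpy

prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = 'CUDA'   # assumption: NVIDIA GPU using CUDA
prefs.get_devices()                  # refresh the detected device list
for device in prefs.devices:
    device.use = True                # enable every detected device

bpy.context.scene.cycles.device = 'GPU'   # per-scene render device setting
```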

"},{"location":"basics/rendering_lighting_materials/gpu_rendering/#macos","title":"macOS","text":"

macOS GPU rendering is still under development

The Blender 3.1 release notes warn that the GPU rendering implementation on macOS is in an early state. Performance optimizations and support for Intel GPUs are under development.

For macOS systems only the Metal option will be available, apart from the default None.

In Blender 3.1 the GPU rendering support in Cycles is based on the Metal API, which is not supported on all macOS systems (also depending on the system version). Only for the following two configurations is GPU rendering support currently available:

  • Apple M1 computers running macOS 12.2 or newer
  • Apple computers with AMD graphics cards running macOS 12.3 or newer

GPU rendering versus acceleration

This section is about GPU rendering in Cycles, which is different from GPU acceleration for the Blender user interface and EEVEE (see below) rendering. So even though your macOS system might not provide GPU rendering in Cycles, it might still work fine for Blender usage with a GPU-accelerated 3D viewport, while using CPU-based rendering.

"},{"location":"basics/rendering_lighting_materials/gpu_rendering/#a-thing-called-eevee","title":"A thing called EEVEE?","text":"

When consulting other Blender materials, specifically on rendering, you may see references to EEVEE. This is another render engine available in Blender, which is different from the Cycles engine we will be using in this course.

Even though EEVEE is meant for fast and highly interactive rendering work, even more so than the Cycles preview render we showed so far, we do not use EEVEE in this course. The reasons for this are:

  • We personally find Cycles to be more intuitive to work with and explain, as it is built around the path tracing algorithm, which is easy to understand while providing a very versatile set of rendering and lighting features. EEVEE's rendering setup is somewhat more complex, as it uses a combination of different techniques that need more separate controls.
  • Cycles can render both on CPU and GPU, whereas EEVEE can only render on a GPU (more specifically, it needs OpenGL)
  • EEVEE doesn't support headless rendering, i.e. when starting a Blender render from the command-line without showing the user interface. This is especially relevant when rendering long animations on an HPC system, or other cluster environment without a GPU-accelerated display environment.
  • Cycles is more feature-complete, whereas EEVEE has some limitations compared to Cycles, although that situation improves with each Blender release
  • Although Cycles and EEVEE are getting closer in features with every Blender release they are still not fully equivalent. They also use separate controls in the UI for certain features. This would mean having to dedicate extra course material to these differences

If you would like more information on EEVEE then please check this section in the Blender manual.

"},{"location":"basics/rendering_lighting_materials/introduction/","title":"Introduction","text":"

This part of the course is all about the aesthetics, the last part of the pipeline. You now know the basics and are able to import some scientific data into Blender; the final thing that is left is how the final image will look. What will the surface of your 3D model look like, what texture and colors does it have, how will it be illuminated and, finally, how will the image be composed? All these things go hand-in-hand and need to be in balance to create an aesthetically pleasing image.

Before you start with the exercises, the following video will give you the theoretical and practical background needed for these exercises. The video contains some Blender walk-throughs; if you want to follow along you can use the walk-through files in the walkthroughs/basics/06-rendering-lighting-and-materials directory.

Cycles X

Due to the 3.0 update of Blender and the introduction of Cycles X some details have changed when it comes to rendering with Blender. The video and exercises have been updated to accommodate this, but some of these changes might have been missed; please inform us if you find such a discrepancy.

"},{"location":"basics/rendering_lighting_materials/rlm_assignment/","title":"\ud83d\udcbb Rendering, lighting and materials","text":"

Open the rlm_assignment.blend file and you'll see several objects in the scene: a ground plane, a plateau, Suzanne (the monkey head) and 3 knots.

The goal of this assignment is to place some lights, set the camera parameters to your liking, add materials to the objects and render the final image. We'll do this in steps.

Tip

To view your result with realistic lighting and materials use the Shading pie menu, which opens with the Z key:

  • Option Rendered shows realistic lighting and materials, with slower interaction
  • Option Solid shows simple colors and lighting, with faster interaction
"},{"location":"basics/rendering_lighting_materials/rlm_assignment/#lighting-creating-light-sources","title":"Lighting - Creating light sources","text":"

To see what we are doing in Rendered shading (Z-Rendered) we first need to add the lighting.

  1. Add one or two sun lights by either using the 3D view menu in the header (Add > Light > Sun) or use Shift-A > Light > Sun in the 3D view
  2. Try to position and rotate the lights so that they light the objects under a bit of an angle (G and R keys).
  3. Before we change the appearance of the lights we need to switch to Rendered using the Shading pie menu (Z > Rendered)
  4. Now adjust the Color and Strength settings under the Object Data properties tab in the properties panel. Perhaps try to give one of the lights a warm yellowish sun-like color and the other a weaker, cooler blueish color.
  5. In the same properties panel tab, try to adjust the Angle (or Radius or Size for the other light types) of the sun light and see how it affects the shadows. Small angles (or radii or sizes) create hard shadows, which are ideal for seeing minor details, while large angles (or radii or sizes) create soft shadows, which are better suited to reducing the overall contrast and making the image less straining on the eye.
  6. Now in the same properties editor tab, try out some different lamp types (Point, Sun, ...) to experiment with the different lighting effects they produce.

Bonus: If setting up the lamps is too cumbersome, you can go to the World tab in the properties editor, click the little globe drop-down menu button at the top and select HDRIWorldLighting. This will enable predefined environment lighting using a 360 image of somebody's living room. Do make sure that you de-activate (in the Outliner) or remove the lamps to see the full effect.
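
The lighting steps above can also be scripted. A minimal sketch with placeholder position, rotation and color values (Python scripting is a topic of the Advanced course):

```python
# Minimal sketch: add and aim a sun light, then set its appearance.
import bpy
from math import radians

bpy.ops.object.light_add(type='SUN', location=(0, 0, 5))
sun = bpy.context.object
sun.rotation_euler = (radians(30), 0, radians(45))  # light the scene at an angle

sun.data.color = (1.0, 0.9, 0.7)   # warm yellowish color
sun.data.energy = 3.0              # the Strength setting
sun.data.angle = radians(2)        # small Angle = hard shadows
```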

"},{"location":"basics/rendering_lighting_materials/rlm_assignment/#camera-setting-the-starting-point-of-the-light-paths-or-rather-camera-paths","title":"Camera - Setting the starting point of the light paths (or rather camera paths)","text":"

With the lighting setup, we can now see what each of the camera settings does. Or from the light ray paths perspective: configure the starting point of the light rays.

  1. First you need to be in the camera view to be able to see the effect of changing the camera settings. Select the View Camera option in the View pie menu (`-button) or use the 3D view menu in the header (View > Viewpoint > Camera). The former is a toggle, so if you are already in the camera view it will toggle the view off.
  2. Try changing the camera's focal length. For this, select the camera Camera and go to the Lens settings in the Object Data properties tab in the properties panel. There you can find the Focal Length setting; try for example the values 18, 50 and 100 and see what effect this has. Notice that when you set the Focal Length to a lower value you might see clipping (the scene is cut off from a certain distance). This can be changed by setting the Clip Start in the same Lens section to a lower value, e.g. 0.01. Finally set the focal length to the desired value.
  3. Next we are going to bring the focus to a chosen object in the scene with the depth of field settings. For this, select the camera, scroll down in the Object Data properties tab in the properties panel to the Depth of Field settings. Check the check-box before Depth of Field to activate the depth of field. Now try to set the Focus on Object value to the Suzanne object and test different values for the Aperture > F-Stop setting.
  4. When you are done, disable depth of field again. This makes the material editing easier.
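
For reference, the same camera settings expressed in Python. This is a hedged sketch that assumes the default object names Camera and Suzanne from the exercise file:

```python
# Minimal sketch: focal length, clipping and depth of field from Python.
import bpy

cam = bpy.data.objects["Camera"].data
cam.lens = 50              # Focal Length (in mm)
cam.clip_start = 0.01      # Clip Start

cam.dof.use_dof = True     # enable Depth of Field
cam.dof.focus_object = bpy.data.objects["Suzanne"]
cam.dof.aperture_fstop = 1.8   # Aperture > F-Stop
```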

If the lighting gives the desired effect looking through the configured camera then you can give the objects the look you want with materials in the next section.

"},{"location":"basics/rendering_lighting_materials/rlm_assignment/#materials-how-will-the-light-paths-bounce","title":"Materials - How will the light paths bounce?","text":"

To design how the light is reflected or refracted off the objects you are going to give each object a different material.

  1. For each object (including ground plane and plateau):
    • Select the object and go to the Material tab in the properties editor.
    • In the Material tab click the New button.
    • Then under the Surface section set the Surface parameter to either Diffuse BSDF, Glossy BSDF or Principled BSDF.
  2. Try to play with the material settings Roughness and Color (the latter is called Base Color for the Principled BSDF)

Bonus: When you feel that the roughness and the color alone don't give you the look that you want with the Principled BSDF, then also have a look at the other settings mentioned in the slides: Metallic, Transmission, IOR and Subsurface.
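
If you would rather assign materials in bulk, this can also be scripted. A minimal sketch (not part of the exercise) that gives every mesh object a new Principled material with placeholder color and roughness values:

```python
# Minimal sketch: create and assign a Principled material per mesh object.
import bpy

for obj in bpy.context.scene.objects:
    if obj.type != 'MESH':
        continue
    mat = bpy.data.materials.new(name=obj.name + "-mat")
    mat.use_nodes = True
    bsdf = mat.node_tree.nodes["Principled BSDF"]
    bsdf.inputs["Base Color"].default_value = (0.8, 0.2, 0.2, 1.0)  # RGBA
    bsdf.inputs["Roughness"].default_value = 0.4
    obj.data.materials.append(mat)
```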

"},{"location":"basics/rendering_lighting_materials/rlm_assignment/#rendering-creating-your-final-image","title":"Rendering - Creating your final image","text":"

Lights, camera, (materials,) set aaaaaaand action!... Now you will set the desired render settings to generate the final image!

  1. Go to the properties editor and set the following settings:
    • Render properties tab
      • Set Device to GPU Compute. If your device doesn't have a (powerful) GPU set it to CPU.
      • Sampling section: set Render > Samples to 128
      • Light Paths section: set Clamping > Indirect Light to 1.0
    • Output tab
      • Format section: set Resolution to 1920x1080, 100%.
  2. If everything is set, press F12.

Now the Image editor will replace the 3D view and your image will slowly be rendered in parts called \"tiles\".

  1. Finally when the image looks the way you want don't forget to save it! In the Image editor go to the Image menu and click on Save As... and choose a location and save the image.
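
The render setup above can also be driven from Python, which is handy for headless rendering later on. A minimal sketch; the output path is a placeholder:

```python
# Minimal sketch: set the render settings, render and save in one go.
import bpy

scene = bpy.context.scene
scene.cycles.device = 'GPU'                # or 'CPU'
scene.cycles.samples = 128                 # Render > Samples
scene.cycles.sample_clamp_indirect = 1.0   # Clamping > Indirect Light
scene.render.resolution_x = 1920
scene.render.resolution_y = 1080
scene.render.resolution_percentage = 100

scene.render.filepath = "//render.png"     # '//' = relative to the .blend file
bpy.ops.render.render(write_still=True)    # F12 plus Save As... combined
```
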
"},{"location":"basics/rendering_lighting_materials/rlm_assignment/#performance-speed-up-those-renders","title":"Performance - Speed up those renders","text":"

Now that we know how to improve the look of the scene and save the final render we will improve the speed of the render.

  1. Write down the render time shown in the upper left corner of the Image editor (for example: Frame:1 | Time:00:09:84 | Mem:6.09M, Peak: 164.29M).
  2. Close the Image editor if it is still open.
  3. Change the following settings in the Render properties tab:
    • Sampling section: set Render > Samples to 32
    • Sampling section: turn on the denoiser with Render > Denoise.
  4. Now press F12 again to render another image.

As you can see when comparing the render time of the previous render with this one, this one is significantly faster.

Render quality when using denoise features

One thing to keep in mind is that when you are using the denoise feature you will lose a little detail.

Noise Threshold

Blender 3.0 introduced another feature to reduce render times, called Noise Threshold. Turning it on and giving it a value between 0.1 and 0.001 will terminate the sampling of a pixel early once it reaches the given noise threshold, and by doing so reduces the render time.
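
The corresponding Python settings, as a minimal sketch:

```python
# Minimal sketch: the performance settings from this section.
import bpy

scene = bpy.context.scene
scene.cycles.samples = 32                  # Render > Samples
scene.cycles.use_denoising = True          # Render > Denoise
scene.cycles.use_adaptive_sampling = True  # enables the Noise Threshold feature
scene.cycles.adaptive_threshold = 0.01     # the Noise Threshold value
```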

"},{"location":"basics/simple_mesh_editing/introduction/","title":"Introduction","text":"

This chapter will introduce the basic mesh editing tools available within Blender. The basic mesh editing will be performed with so-called modifiers, which make it relatively easy to do large mesh editing operations that can greatly impact the visual representation of your 3D models. Below you'll find a video that will give you a theoretical introduction followed by a practical walk-through in Blender. If you want to follow along with the walk-through you can find the Blend files in the walk-through directory walkthroughs/basics/04-simple-mesh-editing.

After you have watched the video about simple mesh editing you are ready for the exercises!

"},{"location":"basics/simple_mesh_editing/sme_assignment/","title":"\ud83d\udcbb Simple mesh editing","text":"

In this exercise you will use some mesh modifiers on an iso-surface of a CT scan of a fish and try to see if you can reveal the insides.

Once you have opened the exercise blend file sme_assignment.blend you'll see the fish iso-surface above a plane.

Info

This exercise uses a somewhat large 3D model, at around 155,000 triangles. On most modern PCs and laptops this should not pose a problem, so it is a good test to see if your system is able to handle this (which might indicate some limitation).

"},{"location":"basics/simple_mesh_editing/sme_assignment/#decimate-reducing-the-triangles","title":"Decimate - Reducing the triangles","text":"

The fish 3D model has, for your convenience, been divided into two parts: the fishskin and the fishbones. Combined, this model has a large number of triangles (155k for the fishskin and 573k for the fishbones). On lower-end devices this can slow everything down to a crawl. In order to be able to add modifiers or edit the meshes with reasonable interactivity you first need to decimate the meshes. Decimation iteratively reduces the number of triangles by merging adjacent triangles into one.

  1. Select the fishskin by clicking on the fishskin with LMB.
  2. Once selected go the Modifiers tab in the properties editor.
  3. Click Add Modifier and add the Decimate modifier (it's in the Generate column).
  4. Keep the decimation type set to Collapse, set the Ratio to 0.5 and press Enter. The mesh processing will take a couple of seconds but will immediately reduce the number of triangles to ~77k, which is visible in the modifier under Face Count. You can even reduce it to a lower number but it might affect the appearance and shape of the model negatively by creating hard edges on the surface.
  5. Once you are satisfied with the results press Apply, under the drop-down menu arrow to the right of Decimate, or press Ctrl-A while focused on the Decimate modifier, to make the changes permanent. Again, this can take a few seconds.
  6. Now that the fishskin triangles have been reduced, select it and press H to hide it, or click the icon in the Outliner. This simultaneously hides the fishskin and reveals the fishbones.
  7. Perform the same steps for the fishbones and try to reduce the triangle count significantly without affecting the appearance of the model.
  8. Now unhide the fishskin again for the next assignment by clicking the icon.
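
As an aside, the modifier steps above map directly onto the Python API. A minimal sketch, assuming the object names from the exercise file:

```python
# Minimal sketch: add and apply a Decimate modifier from Python.
import bpy

obj = bpy.data.objects["fishskin"]
mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
mod.ratio = 0.5    # keep roughly half of the faces

bpy.context.view_layer.objects.active = obj
bpy.ops.object.modifier_apply(modifier=mod.name)   # the Apply step
```
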
"},{"location":"basics/simple_mesh_editing/sme_assignment/#smooth-ironing-the-creases","title":"Smooth - Ironing the creases","text":"

The geometry of the fishskin and the fishbones both look a bit rough because of the iso-surface extraction algorithm. If that is not desired, the rough edges can be smoothed out with the Smooth Modifier.

  1. Select the fishskin model by clicking on the fishskin with LMB.
  2. Go to the Modifiers tab in the properties editor.
  3. Click Add Modifier and add the Smooth modifier (it's in the Deform column).
  4. Keep the Factor at 0.5 but increase the Repeat to 5. Be careful with the slider: every change re-triggers the modifier, and if you accidentally slide to a high number it will take a while to calculate.

Unfortunately you will notice that the Smooth modifier creates tears along the skin model. This conveniently reveals that the underlying mesh triangles are not fully connected, but are present as separate connected patches. These patches stem from the way this model was created: the geometry was calculated by multiple processes and each patch was created by a separate process. This can be fixed in Edit Mode, but that will be covered in the advanced course.

  1. The Factor is good as it is, but changing the value shows what kind of drastic effect it has.
  2. Once you are satisfied with the smoothness of the fishskin press Apply and try to do the same with the fishbones.
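
The equivalent Python calls, as a minimal sketch using the same object name as above:

```python
# Minimal sketch: the Smooth modifier with the settings from this section.
import bpy

obj = bpy.data.objects["fishskin"]
mod = obj.modifiers.new(name="Smooth", type='SMOOTH')
mod.factor = 0.5      # the Factor setting
mod.iterations = 5    # the Repeat setting
```
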
"},{"location":"basics/simple_mesh_editing/sme_assignment/#boolean-slicing-the-geometry","title":"Boolean - Slicing the geometry","text":"

If you want to show the inside of the fish together with the context of the outside, you can slice through the fishskin model and reveal the insides of the fish by using a Boolean modifier.

  1. Before you add the Boolean Modifier you first need to reveal the fishskin mesh object again by clicking the icon in the Outliner.
  2. Select the fishskin mesh object and go to the Modifiers tab in the properties editor to add a Boolean modifier (it's in the Generate column).

Now that the Boolean modifier is added we still need another 3D mesh object to perform the Boolean operation with. You are now going to prepare that other mesh object.

  1. Move the mouse into the 3D view and add a new UV sphere with Shift-A > Mesh > UV Sphere
  2. Scale and translate (S and G keys) the UV sphere so that it overlaps a part of the fish which you want to clip away.
  3. The UV sphere is now shown as a solid surface, which is not desirable when you want to use it for clipping because you want to see through it. You can change the representation of an object in the 3D view using the Object properties under Viewport Display: set Display As to Wire.
  4. Also when you want to look at the results in Rendered mode you need to make the sphere invisible using the Ray Visibility settings under Visibility: disable all check-boxes (Camera, Diffuse, Glossy, Transmission, Volume Scatter and Shadow)

Now that you have prepared the mesh object to perform the Boolean operation with, you can continue setting up the Boolean modifier.

  1. Select the fishskin mesh object and go to the Modifiers tab in the properties editor to reveal the already added Boolean modifier.
  2. Now under Object, select the Sphere mesh object.
  3. Before we start moving the clipping Sphere around we want to change the Solver to Fast. This is a simpler and better-performing solver and, in our case with the underlying broken patched mesh, also the better option, since this solver is able to handle this type of geometry.
  4. Now if you select the Sphere object and translate and scale it over the fishskin mesh object you can clip away any desired part as the Boolean modifier updates in real time.

As you might have noticed, this Boolean modifier does have some problems with this particular mesh, and placement of the clipping sphere must be precise. This of course is not always the case, but it should be kept in mind when working with the Boolean modifier.

Finally you can view your results with Cycles with Rendered shading (Z > Rendered) for better lighting and materials. Or you can give the camera a better position and make a nice final render.
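
The whole Boolean setup can also be expressed in Python. A hedged sketch, assuming the object names used in this exercise and the Blender 3.x per-object visibility properties:

```python
# Minimal sketch: sphere display/visibility setup plus the Boolean modifier.
import bpy

skin = bpy.data.objects["fishskin"]
sphere = bpy.data.objects["Sphere"]

sphere.display_type = 'WIRE'     # Viewport Display > Display As: Wire
sphere.visible_camera = False    # Ray Visibility check-boxes
sphere.visible_shadow = False

mod = skin.modifiers.new(name="Boolean", type='BOOLEAN')
mod.object = sphere
mod.operation = 'DIFFERENCE'     # clip the sphere volume away
mod.solver = 'FAST'
```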

"},{"location":"news/","title":"News","text":""},{"location":"news/2023/09/19/new-courses-being-planned-for-q4-2023/","title":"New courses being planned for Q4 2023","text":"

We are in the process of finalizing dates for a new set of Basics and Advanced Blender courses at the end of 2023. These will be held online. Watch this news section, or the schedule.

"},{"location":"news/2023/11/20/basics-course-starting-4-december-2023/","title":"Basics course starting 4 December 2023","text":"

A new Basics course will be held starting 4 December 2023. The course is self-paced using our online materials, supported by a kick-off meeting followed by weekly check-in moments. The course runs over a 3-week period and will be held online.

See the schedule for precise dates and times. You can register for the course through this page.

"},{"location":"overview/about/","title":"About us","text":"

We are members of the High-Performance Computing & Visualization (HPCV) group at SURF, and are based in Amsterdam. SURF is a cooperative association of Dutch educational and research institutions in which the members combine their strengths to acquire or develop digital services, and to encourage knowledge sharing through continuous innovation.

Within the HPCV group we support users of the Dutch National compute infrastructure with visualization expertise and software development, on topics such as data visualization, remote visualization, 3D modeling and rendering and use of eXtended Reality (XR) for research and education.

Part of our jobs is to provide courses on topics related to visualization in HPC. This Blender course was created for the PRACE Training Center and first provided (in-person) in 2018, and has since been repeated at least once a year.

"},{"location":"overview/about/#paul-melis","title":"Paul Melis","text":"

Paul Melis has an MSc in Computer Science from the University of Twente in The Netherlands and worked on topics in scientific visualization and VR at the University of Groningen and University of Amsterdam before joining SURFsara in 2009 (which has since become part of SURF).

At SURF he is involved in several activities related to visualization, including realizing visualization projects for end-users, teaching courses and providing user support for visualization tasks on our HPC systems. As part of the SURF innovation portfolio he is involved in the use of extended reality (XR) for research and education. He likes to use Blender for all things 3D, but also works with ParaView, and sometimes develops a bit of code in Python, C++ or Julia.

"},{"location":"overview/about/#casper-van-leeuwen","title":"Casper van Leeuwen","text":"

Casper has an MSc in Computer Science from Delft University of Technology, where he graduated on the topic of medical visualization. He has been at SURFsara since 2014.

He mainly works on web-based 2D/3D visualization, including Jupyter Notebooks, and loves to work on Blender projects when the goal is to make something look aesthetic! Besides that he also knows his way around Unity and Unreal Engine.

"},{"location":"overview/about/#ben-de-vries","title":"Ben de Vries","text":"

Ben de Vries has a PhD in Astrophysics from KU Leuven. He joined SURF in 2019. He focuses on 2D/3D visualization projects using Blender, Unity and general 3D programming.

"},{"location":"overview/conventions/","title":"Text conventions","text":"

The conventions on these pages follow those used in the official Blender documentation as much as possible:

  • Keyboard and mouse actions, menu names, literal text to enter, etc are shown in monospaced bold, e.g. X or Shift-MMB
  • LMB = left mouse button, MMB = middle mouse button, RMB = right mouse button, Wheel = scrolling the mouse wheel
  • Menu actions are shown as View > Cameras > Set Active Object as Camera, for View menu, Cameras submenu, \"Set Active Object as Camera\" option.
"},{"location":"overview/conventions/#exercises","title":"Exercises","text":"

We highlight exercise sections by prefixing their titles with a \ud83d\udcbb symbol.

"},{"location":"overview/introduction/","title":"Introduction","text":"

This Blender course consists of two parts, each taught separately online over the course of a number of weeks:

  • In the Basics part we assume no prior knowledge of Blender. We will introduce Blender from the ground up, starting with the user interface and basic functionality. We cover the 3D scene, cameras, lights and materials and some basic mesh editing and animation.

    It helps to have some familiarity with basic 3D graphics concepts, such as 3D geometry, transformations and rendering. But if not, you will probably pick those up quite quickly during the course.

  • In the Advanced part of the course, we assume participants already have basic knowledge of Blender, preferably by following our basics course. We assume participants are familiar with the Blender user interface, basic functionality and concepts like the 3D scene, cameras, lights, materials and some basic mesh editing and animation.

    The advanced part goes into detail on the Python API for scripting, node-based materials, mesh editing and animation. The main goal of the Advanced course is for you to realize your own project with Blender, based on data you choose.

"},{"location":"overview/introduction/#context","title":"Context","text":"

This course is aimed at scientists and researchers of all levels. We don't make many assumptions on use cases for Blender, but do assume the context to be an academic setting. So we won't go into creating visual effects for putting a massive CGI tornado in your backyard that scoops up your neighbours. But if you happen to write a tornado simulation for your research we will be more than happy to see how we can use Blender to make attractive visuals of the data.

This doesn't mean that we only assume to apply Blender to existing scientific data. Sometimes certain concepts are best explained by creating a 3D scene, say to produce a nice looking cover image for your PhD thesis, or to illustrate or visualize a certain concept.

From previous editions of the course we know many participants bring their own data and want to apply Blender to it. We encourage you to do that as well, as it will also help in providing some focus to your use of Blender.

"},{"location":"overview/introduction/#blender-version","title":"Blender version","text":"

Update in progress

We are currently (Q4 2023) in the process of updating all the course material to Blender 3.6

We currently use Blender 3.1 for this course and the materials provided.

Blender as a software package is a fast moving target, usually with lots of shiny new features and bug fixes in each release (and multiple releases per year). This is great, of course, but with each release usually also a lot of small tweaks and improvements are made, especially in the user interface and workflow.

We originally planned to only base this course on the Blender LTS (Long-Term Support) releases, which remain more-or-less unchanged regarding UI and features for roughly 2 years. But there have been some major improvements in certain versions that would only become available in the next LTS release much later. Hence, we chose to update the course more regularly.

Course videos using previous Blender versions

Some of the videos used in the course might still show an earlier Blender version. In those cases we have estimated that the video is still (largely) up-to-date and have chosen not to update the video, as this is quite time-consuming.

Specifically for Linux users that use their Linux distribution's package of Blender

Sometimes the Blender package from a distro gets built with slightly different versions of software libraries, compared to the official Blender distribution. This is known to sometimes cause different behaviour or even bugs, for example in the handling of video files by the FFmpeg library. In case you find strange issues or bugs with your distro's Blender you might want to try downloading the official Blender binaries to see if that fixes those issues.

"},{"location":"overview/introduction/#issues-with-course-materials","title":"Issues with course materials","text":"

We try to keep this course up to date to match the specific version mentioned above. But we might have missed small things. If so, please let us know through Github by reporting an issue.

If you don't have a Github account, or would rather not create one, then telling us through Discord is fine as well.

"},{"location":"overview/introduction/#prerequisites","title":"Prerequisites","text":"

You will need:

  • A system (PC or laptop) to work on. This can be a Linux, macOS or Windows system. It is preferred to use a system with a somewhat recent GPU (or at most 10 years old) with working OpenGL 4.3 support. See the section \"Hardware Requirements\" on this page for the official requirements for running Blender.
  • Blender 3.6 installed on the above system. You can download it from here, or you can use your system package manager to install it.

    Warning

    It is in general not recommended to use a wildly different Blender version for this course, due to possible mismatches in the user interface and functionality with the course material. A different patch release, e.g. 3.6.1, should not cause issues, but a later major release might have some major changes.

  • Please test the Blender installation before the course starts using the instructions sent by e-mail. This will tell you if Blender is working correctly and can save you (and us) time fixing any system-related issues during the course period.

Recommended:

  • Using a 3-button mouse is preferred, as not all Blender functionality is easily used with a 2-button mouse or laptop track-pad.
"},{"location":"overview/introduction/#feedback","title":"Feedback","text":"

We will ask for feedback on this in the online sessions, but if you have remarks then please let us know. You can do this either through Github by reporting an issue, or in the Discord sessions.

"},{"location":"overview/schedule/","title":"Schedule","text":"When What Where Purpose Mon 04-12-23 \u2022 10:00 - 11:30 Basics session #1 Online Intro to the course, getting to know each other Mon 11-12-23 \u2022 10:00 - 11:30 Basics session #2 Online Feedback on first week, Q&A Mon 18-12-23 \u2022 10:00 - 11:30 Basics session #3 Online Feedback on course, Q&A, closing"},{"location":"overview/setup/","title":"Course setup","text":"

Course period

Although this course material is available online at any time, we only provide the support mentioned at scheduled course periods throughout the year. Please check the EuroCC Training Agenda when the next Blender course is scheduled.

We use a combination of different media within the course, but the basis is for you to follow the training at your own pace over a period of two weeks. During this period we provide support where needed.

The online material consists of:

  • Videos that introduce and demonstrate new concepts and features within Blender.
  • Slides (also presented as part of the videos) for explanations. These are basically presentations we would otherwise do plenary.
  • Exercises for you to explore new topics and to train your skills

We have scheduled a few short plenary online sessions in the course period to provide general feedback and/or guidance.

"},{"location":"overview/setup/#support","title":"Support","text":"

During the course period we provide support through our Discord server, see this page. On Discord there's a plenary chat channel, but also the possibility to have a 1-on-1 video chat in cases where we need to look more closely over your shoulder to solve a particular issue.

"},{"location":"overview/setup/#data-files","title":"Data files","text":"

Most of the exercises require you to load a Blender scene file that we provide. These files can be found at https://edu.nl/8n7en.

It is best to download the full content of the share to your local system using the Download button in the upper-right.

This share contains:

  • data - Blender files (and other data) for the assignments, split into basics and advanced parts, with a sub-directory per chapter
  • slides - The slides (in PDF)
  • walkthroughs - Some of the files used in the videos, again split by basics and advanced
  • cheat-sheat-3.1.pdf - A 2-page cheat sheet with often used operations and their shortcuts
"},{"location":"overview/setup/#time-investment","title":"Time investment","text":"

The precise amount of time needed to follow this course depends for a large part on how much effort you devote to each topic, your available time, your learning pace, etc. However, the in-person course setup we used in previous years was a full-day course (with quite a high pace).

For the Basics course the time spent on the different subjects and their assignments in that setup is shown below. This might give you some idea on the relative depth of the topics.

Topic | Time in schedule (previous in-person course) | Videos (this course)
----- | -------------------------------------------- | --------------------
Introduction | 30 minutes | 5 minutes
Blender basics | 120 minutes | 45 minutes
Importing data | 30 minutes | 30 minutes
Rendering, lighting & materials | 105 minutes | 65 minutes
Simple mesh editing | 30 minutes | 20 minutes
Basic animation | 45 minutes | 35 minutes

For the Advanced course it is hard to give a general indication of the expected time investment needed for the course. It depends partially on your own goals and ambitions for the main task: the project of visualizing your own data in the way you see fit.

In terms of topics the Advanced materials and Animation chapters are relatively straightforward and can probably be completed in a day. In contrast, Python scripting in Blender is a very extensive topic and can end up taking a lot of time if you want to work with the more complex parts of the API.

"},{"location":"overview/support/","title":"Support","text":"

Support hours

We will be active on Discord during office hours (CET time zone) and will try to also be on-line outside of those hours. Note that this is all on a best-effort basis.

Detailed interaction and support during the course period is provided through our Discord server. Here you can ask questions by chat, upload an image or (if needed) start a video session or share your screen with one of us.

Depending on the course you're following (basics or advanced) you need to use the category called BASICS BLENDER COURSE or ADVANCED BLENDER COURSE. Within these categories you will find:

  • A shared text chat channel (e.g. 2022-04-blender-basics-chat) for interacting with the course teachers and other course participants. Here you can ask questions, show your work, or anything else you feel like sharing.
  • A video channel (video channel), in case we want to share something through Discord

For one-on-one contact, including the option for screen sharing, right-click on one of our names as shown in the picture above and pick either the button for voice chat or video chat.

"},{"location":"references/cheat_sheet/","title":"Cheat sheet","text":"

With this course we provide a 2-page cheat sheet that lists basic and often-used operations and their shortcut keys. It also includes a summary of major interface elements.

The cheat sheet can be found here as a double-sided PDF, which can easily be printed.

"},{"location":"references/community/","title":"Community resources","text":"

On blenderartists.org lots of Blender users and artists hang out. There you can ask questions, get feedback, show off your work or check out the vast amount of knowledge, tips and Blender renderings in the forums.

BlenderNation gathers information on different topics and includes video tutorials, blog posts on art created with Blender and a lot more.

The Blender subreddit contains many different posts, ranging from simple questions to artists showing off their amazing work.

Well-known artists and gurus working with Blender are:

  • Jan van den Hemel shares many tips and tricks through Twitter, both on Blender usage as well as making a scene look a certain way. He also publishes these tricks in an e-book.
  • Andrew Price (twitter) aka \"Blender Guru\" provides many cool tutorials on https://www.blenderguru.com/ and his YouTube channel. He is well-known for a multi-part tutorial series on modeling a realistic donut!
  • Gleb Alexandrov (twitter and twitter) aka "Creative Shrimp" has some very creative and inspirational tutorials on his YouTube channel.
  • Ian Hubert (YouTube and twitter), famous for his Lazy tutorials (very efficient 1 minute tutorials), has videos on advanced green screen techniques and VFX in Blender.
  • Simon Thommes (twitter and YouTube) is a materials wizard, he is able to create complex geometry out of one cube or sphere with just the Shader editor.
  • Steve Lund has some great Blender tutorials on his YouTube channel.
  • Zach Reinhardt has some great modeling, texturing and VFX tutorials on his YouTube channel
  • Peter France is the Blender artist at the Corridor Crew who just started his own YouTube channel with some instructive tutorials.
  • YanSculpts does not fit this course material per se, but it goes to show how versatile Blender can be: this artist creates some amazing sculptures in Blender, of which he shows the process on his YouTube channel.
  • Josh Gambrell shares a lot of tips and tricks for advanced mesh editing on his YouTube channel (mostly hard surface modeling).
"},{"location":"references/interface/","title":"User Interface elements","text":"

The default layout of the Blender user interface is shown below. Note that the layout is fully configurable.

* Scene statistics

By default the status bar at the bottom only shows the Blender version number. You can add extra statistics, such as the number of 3D objects in the scene and memory usage in the preferences.

You can either right-click on the status bar to enable display of extra values, or use the application menu Edit > Preferences, select the Interface tab, and in the Editors > Status Bar section check all marks (Scene Statistics, Scene Duration, System Memory, Video Memory, Blender Version).

"},{"location":"references/interface/#editor-type-menu","title":"Editor type menu","text":"

The yellow highlight indicates the ones often used in this course

"},{"location":"references/official/","title":"Official sources","text":"

The official home for Blender is blender.org

"},{"location":"references/official/#manuals","title":"Manuals","text":"

The Blender Reference Manual for version 3.6 can be found here. The documentation on the Python API is here.

Access help from within Blender

You can open the Blender documentation pages from within Blender itself, using the options in the Help menu.

"},{"location":"references/official/#demo-files","title":"Demo files","text":"

Official demo files showing off lots of cool features and scenes can be found here, including the scene files used to render the splash images of different Blender versions.

"},{"location":"references/official/#blender-development-and-news","title":"Blender development and news","text":"

If you are interested in following recent development in Blender then the weekly Blender Today Live sessions on YouTube are a good resource.

Videos on lots of different topics, including videos from the yearly Blender Conference, can be found on the official Blender YouTube channel.

Blender has official accounts on Mastodon and Twitter/X. The hashtag to use for Blender is #b3d (although sometimes also #blender).

"},{"location":"references/official/#mastodon","title":"Mastodon","text":"

On Mastodon the official account is @blender@mastodon.social.

"},{"location":"references/official/#twitterx","title":"Twitter/X","text":"

On Twitter you can follow @Blender for official Blender news or @BlenderDev for more in-depth development information.

"},{"location":"references/scene/","title":"Scene resources (3D models, materials, textures)","text":"

Here we list a number of online resources for 3D models, textures, shaders, etc.

In general certain 3D models might be free for download, while others might only be available paid (usually for a small amount). Usually, the nicer the 3D model the higher the cost. Also, different licenses are used for the models and these will describe how you can use the models and any attribution you might need to give when using it.

"},{"location":"references/scene/#examples","title":"Examples","text":"
  • Blender provides a set of demo files, either made by artists or to demonstrate new features. They can be found here.
"},{"location":"references/scene/#3d-models","title":"3D Models","text":"
  • Released together with Blender 3.6 an asset bundle with various human base meshes was made available. The assets can be found here.
  • Turbosquid is one of the oldest 3D model websites and provides models in all sort of topics, some free, some paid.
  • Sketchfab hosts a large collection of 3D models from many different categories. Many 3D models are textured and some are even animated.
  • 3D Model Haven distributes freely usable 3D models, many of them textured. It is not as extensive as other websites, but the upside is that all models can be freely used.
  • CGTrader also hosts many 3D models, some of them free, some paid
  • There's a section on BlenderNation where Blender models are shared. Again, some of these might be free, others will involve some payment.
  • BlenderMarket contains a section with 3D models
  • Quixel's Megascans is a great \"paid\" source for 3D models as well as textures which can be used for free when it's attached to an Epic account and the assets are only used for an Unreal Engine application. It's great for personal use but if you publish anything containing an asset from Quixel without Unreal Engine attached to it you have to pay for the asset.
"},{"location":"references/scene/#textures-and-images","title":"Textures and images","text":"
  • Texture Haven provides textures to be used in materials and shaders. All textures available are free.
  • CC0 Textures has many high-quality textures
  • BlenderMarket has a section with shaders, materials and textures.
  • HDRI Haven is similar to Texture Haven, but contains many freely available HDRI 360 images that can be used for realistic environment lighting in Blender
  • Poliigon, where the CEO is the Blender Guru himself, has some great looking free samples and otherwise high quality paid textures.
  • textures.com has some high quality, high resolution, movie grade textures under a paid subscription or credit-based payment model.
"},{"location":"references/scene/#blenderkit","title":"BlenderKit","text":"

BlenderKit is an online repository of materials, 3D models and a few other things. It used to come bundled with Blender as an add-on, but since Blender 3.0 this is no longer the case. You need to download and install the add-on yourself, for which instructions can be found here.

When the add-on is installed and enabled it provides some extra elements in the Blender interface for searching, say a material or 3D model, by name, which can then be easily used in a Blender scene:

Note that many of the assets in BlenderKit are free, but some are only available by buying a subscription.

The add-on has quite a few options and performs certain operations that you would otherwise do manually or maybe not use at all. As such, it can set up the scene in more exotic ways, for example by linking to another Blender file. Also, the materials provided by BlenderKit can use pretty complex shader graphs, involving multiple layers of textures, or advanced node setups.

Warning

When applying a BlenderKit material on your own object the rendering might not look like the material preview in all cases. Especially use of displaced materials involves specific settings for the Cycles renderer and use of subdivision on the object.

Warning

Textures from BlenderKit are by default stored in a separate directory on your system (~/blenderkit_data on Linux). There is an option to pack the textures within the Blender file, making it larger in size but also completely independent of any external files, which is useful if you want to transfer the Blender file to a different system. The option for packing files is File > External Data > Pack All into .blend.

"},{"location":"news/archive/2023/","title":"2023","text":""}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Welcome!","text":"

Info

The material on these pages is being updated in preparation for the upcoming December 2023 course

These pages form the main online content for the two modules we provide for the course Introduction to Scientific Visualization with Blender. In this course you will learn how to use the 3D rendering and animation package Blender for creating images and animations from (scientific) data.

This course consists of two parts: Basics and Advanced. The Basics part assumes no knowledge of Blender, while the Advanced part builds upon the skills and knowledge of the Basics part.

In specific periods during the year we provide support for this course, which is otherwise self-paced. Please check the Schedule and News pages for upcoming dates. Or search the Euro CC course agenda for the course modules:

  • Introduction Scientific Visualisation with Blender: Data, Lights, Camera, Action!
  • Advanced topics in scientific visualization with Blender: geometry, scripts, animation, action!

This course is created and maintained by the visualization team of the SURF High-Performance Computing and Visualization group. This course is provided by SURF within the context of the EuroCC Netherlands NCC. We have been providing this course since 2018, usually twice a year, and initially in-person. Due to the restrictions during the COVID-19 lock-down period we decided to turn this course into a fully online version, based on positive experiences with the first advanced Blender course we provided online in 2020.

"},{"location":"privacy/","title":"Privacy and cookie statement","text":""},{"location":"privacy/#privacy","title":"Privacy","text":"

No personal information is gathered by SURF of visitors to this course website.

"},{"location":"privacy/#cookies","title":"Cookies","text":"

No cookies are used for the content published by SURF on this website, nor is any personal information about visits tracked by SURF.

The underlying MkDocs content generation system uses the browser's session storage for storing general site-map data (called /blender-course/.__sitemap), which is sometimes reported as a cookie.

"},{"location":"privacy/#third-party-cookies","title":"Third-party cookies","text":"

The embedded videos are hosted on YouTube, but using its privacy-enhanced mode and the \"www.youtube-nocookie.com\" domain. YouTube might ask for placement of third-party cookies, in which case explicit permission needs to be granted by the user. For more information, see the privacy controls of YouTube and the information linked from that page.

This website is hosted through GitHub Pages, which might set third-party cookies in which case explicit permission needs to be granted by the user. See here for the GitHub privacy policy.

"},{"location":"advanced/introduction/","title":"Introduction","text":"

Warning

The material in the Advanced module is being updated for Blender 3.6

The Advanced part of the course consists of a number of separate topics, each with a number of assignments:

  • Python scripting for performing all kinds of operations using code
  • Advanced materials using the node-based shaders
  • Using more complex Animation techniques
  • Mesh edit mode for cleaning up and/or improving your (imported) meshes

The final assignment is your own personal project of your choosing. If you want you can also work with a dataset we provide.

"},{"location":"advanced/advanced_materials/advanced_materials_assignment/","title":"\ud83d\udcbb The Shader Editor and advanced materials","text":"

In the two exercises in this chapter you will use the Blender Shader Editor on the familiar iso-surface of a CT scan of a fish from the Basics course and try to make a visualization using an advanced node setup. After that you will make a render of the moon with NASA's high-resolution textures, using adaptive subdivision.

"},{"location":"advanced/advanced_materials/advanced_materials_assignment/#the-fish","title":"\ud83d\udcbb The fish","text":"

Once you have opened the exercise blend file advanced_materials_assignment.blend you'll see the white fish iso-surface above a plain white plane. We are going to pimp this scene with advanced materials.

"},{"location":"advanced/advanced_materials/advanced_materials_assignment/#shader-editor-materials-coloring-the-scene","title":"Shader editor materials - Coloring the scene","text":"

First we will add materials and give each object a different color.

  1. First activate Rendered shading, so we can see the materials we are actually applying, by pressing Z in the 3D Viewport panel and selecting Rendered from the radial pie-menu.
  2. Select the fishskin object and add a new material by clicking the New button in the middle of the top bar of the Shader Editor panel.
  3. Now a graph appears with 2 nodes, a Principled BSDF-node and a Material Output-node; in the side panel you will also see the familiar material settings. Change the Base Color to an appropriate color for a fish.
  4. Repeat step 2 and 3 for each 3D object in the scene (see Outliner) and give them a color of your choice.
"},{"location":"advanced/advanced_materials/advanced_materials_assignment/#texture-mapping-placing-the-fish-on-a-picknick-table","title":"Texture mapping - Placing the fish on a picknick table","text":"

Now that the scene has some color we can start applying some realistic colors and texture to the ground plane, or should we say table? We will do that by adding wood textures to the ground plane and connecting those textures to their appropriate parameters of the Principled BSDF.

  1. Select the groundplane 3D object.
  2. Add an Image Texture-node to the Shader Editor graph of the groundplane with Shift-A > Texture > Image Texture.
  3. Connect the Color output of this node to the Base color input of the Principled BSDF-node.
  4. Now the groundplane doesn't look anything like a picnic table, it's pink. This pink color comes from the fact that an image is missing from the Image Texture-node. Open an image by pressing the Open-button on the Image Texture-node, which will open a file browser window. Now select the blue_painted_planks_diff_4k.png image from the data/wood_textures/ directory and press Open Image.

Now we have our first image mapped on an object! You might have noticed, though, that the fish is really small, or rather that the planks are very big. We are going to solve that by scaling the texture coordinates.

  1. Before we can do that we first need to add the texture coordinates to the graph with Shift-A > Input > Texture Coordinates and connect the UV output to the Vector input of the Image Texture-node.
  2. Nothing changed because we didn't apply the scaling yet. Now add a Mapping node with Shift-A > Vector > Mapping and drag it on top of the edge between the Texture Coordinate-node and the Image Texture-node and let it go. As you can see it is automatically connected in between the nodes.
  3. Now on the Mapping-node change the Scale parameters x, y and z to 2. As you can see this reduces the planks to a smaller and better-fitting size.

Tip!: With the Node Wrangler Blender add-on you can just select a texture node and press CTRL+T to automatically add the Texture Coordinate and Mapping node. Node Wrangler can be added with: Menu-bar Edit > Preferences > Add-ons tab > type 'Node Wrangler' in search > check the Node Wrangler add-on to activate it.

Now we'll roughen the planks a bit with a Roughness map, a texture that will be used to change the Roughness parameter of the Principled BSDF.

  1. Select the previously added Image Texture-node and press SHIFT-D and place the new duplicated node underneath the other Image Texture-node.
  2. Connect its Vector input to the Vector output of the Mapping-node just like the other Image Texture-node and connect the Color output to the Roughness input of the Principled BSDF-node.
  3. As you can see the plane became shiny, which wood is not (rotate the view around the object in the 3D Viewport to see the plane from different angles). This is because we haven't changed the texture yet. In this new Image Texture-node, Open the blue_painted_planks_rough_4k.png from data/wood_textures.
  4. Now it is still a bit too shiny for wood. This is because the output is interpreted as an sRGB value. We need to change the Color Space parameter of this Image Texture-node to Non-color. Now the ground plane has the right rough look like wood.

The look of the wood is still very "flat" (the light still bounces off it at a straight angle); this is because we didn't add a normal map to the material yet. This normal map will accentuate all the nooks and crannies naturally present in wood, which normally catch light too.

  1. As with the previous Image Texture-node, we again need to make a new one by duplicating an existing one (Shift-D, as before).
  2. Again the Mapping-node Vector output needs to be connected to the new Image Texture-node Vector input. The Color output however needs to go to a Normal Map-node.
  3. Add a Normal Map-node with Shift-A > Vector > Normal Map and connect the Image Texture-node Color output to the Normal Map-node Color input and connect the Normal Map-node Normal output to the Principled BSDF-node Normal input.
  4. Again, this texture does not contain color data, so the Color Space needs to be set to Non-color.

Now you have a fully textured wooden ground plane! To see the full effect, rotate the view around it and see the light bounce off the surface based on the different texture types you just applied.
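A scripted version of the roughness and normal map steps might look like the sketch below. It is self-contained but hedged: the normal map file name is an assumption (use the actual file in data/wood_textures). The important detail is that non-color data is marked as such on the image itself.

import bpy\n\nmat = bpy.data.objects['groundplane'].active_material\nnodes = mat.node_tree.nodes\nlinks = mat.node_tree.links\nbsdf = nodes['Principled BSDF']\nmapping = next(n for n in nodes if n.type == 'MAPPING')\n\nrough = nodes.new('ShaderNodeTexImage')\nrough.image = bpy.data.images.load('//data/wood_textures/blue_painted_planks_rough_4k.png')\nrough.image.colorspace_settings.name = 'Non-Color'\nlinks.new(mapping.outputs['Vector'], rough.inputs['Vector'])\nlinks.new(rough.outputs['Color'], bsdf.inputs['Roughness'])\n\n# Normal map: image -> Normal Map node -> Principled BSDF Normal input\nnor = nodes.new('ShaderNodeTexImage')\nnor.image = bpy.data.images.load('//data/wood_textures/blue_painted_planks_nor_4k.png')  # assumed file name\nnor.image.colorspace_settings.name = 'Non-Color'\nnormal_map = nodes.new('ShaderNodeNormalMap')\nlinks.new(mapping.outputs['Vector'], nor.inputs['Vector'])\nlinks.new(nor.outputs['Color'], normal_map.inputs['Color'])\nlinks.new(normal_map.outputs['Normal'], bsdf.inputs['Normal'])\n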

"},{"location":"advanced/advanced_materials/advanced_materials_assignment/#multiple-materials-one-object-window-to-the-inside-of-the-fish","title":"Multiple materials one object - Window to the inside of the fish","text":"

We only see the fish, not the fish bones. In the Blender Basics course we learned how to reveal the bones on the inside by using a Boolean modifier, but we can achieve the same with just materials!

  1. Select the fishskin 3D object.
  2. If you followed the first couple of assignments, the fish should already have one material, called Material. For clarity, let's rename the material by clicking its name Material in the middle of the top bar of the Shader Editor panel and typing the new name fishskinmat.
  3. To the left of the rename box there is a drop-down menu called Slot 1; when you click it you will see the material slots menu. In our case it contains only one material, called fishskinmat.
  4. Now add a new Material slot by clicking the plus icon in this menu. The added material slot is still empty and needs a second material.
  5. Add a new material by clicking the New button in the middle of the top bar of the Shader Editor panel.
  6. Rename this material to fishskintransparentmat.

Now, as you can see, adjusting any value on the Principled BSDF-node doesn't seem to do anything. This is because there aren't any vertices assigned to this material slot yet (by default all vertices are assigned to the first material slot).

  1. To assign vertices we need to be able to select them and this can be done in the Edit Mode of the 3D Viewport-panel. With the fishskin 3D object selected and the focus on the 3D Viewport-panel (hovering over the 3D Viewport panel with your mouse) press TAB.
  2. First press 1 to see the vertices and then select a window of vertices on the side of the fish with the Border select tool by pressing B in the 3D Viewport-panel and dragging over the area you want to select.
  3. With these vertices selected press the Material slots button, select the fishskintransparentmat-material and press the Assign-button.
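
The same slot-and-assign mechanism is scriptable. The sketch below is a rough equivalent, assuming Object mode; the y > 0 test is just a stand-in for the interactive box select.

import bpy\n\nobj = bpy.data.objects['fishskin']\nmesh = obj.data\n\n# Create a second material and add it as a new material slot\nmat = bpy.data.materials.new('fishskintransparentmat')\nmat.use_nodes = True\nmesh.materials.append(mat)\n\n# Assign faces to the new slot (slot index 1); the geometric test\n# here is an arbitrary placeholder for the interactive selection\nfor poly in mesh.polygons:\n    if poly.center.y > 0.0:\n        poly.material_index = 1\n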

Now you can see the faces in that selection look different! This is because they are assigned to the second material. Next we'll make the fishskintransparentmat actually transparent with a combination of the Transparent BSDF and Principled BSDF through a Mix Shader. That way we can control the amount of transparency!

  1. In the Shader editor add a Mix Shader-node with Shift-A > Shader > Mix Shader.
  2. Drag this Mix Shader-node over the edge connecting the Principled BSDF-node and the Material Output-node to place it connected in between.
  3. Now add a Transparent BSDF with Shift-A > Shader > Transparent BSDF.
  4. Connect the BSDF output to the Mix Shader-node Shader input.
  5. Now the material is half shaded by the Transparent BSDF-node and half by the Principled BSDF-node. Experiment with the Mix Shader-node's Fac parameter to see how it changes the transparency of the fishskintransparentmat.
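
In script form this Mix Shader setup could look roughly like the sketch below, assuming the material was created as in the previous steps:

import bpy\n\nmat = bpy.data.materials['fishskintransparentmat']\nnodes = mat.node_tree.nodes\nlinks = mat.node_tree.links\n\nmix = nodes.new('ShaderNodeMixShader')\ntransparent = nodes.new('ShaderNodeBsdfTransparent')\n\nlinks.new(nodes['Principled BSDF'].outputs['BSDF'], mix.inputs[1])\nlinks.new(transparent.outputs['BSDF'], mix.inputs[2])\nlinks.new(mix.outputs['Shader'], nodes['Material Output'].inputs['Surface'])\n\n# Fac = 0 shows only the first shader input, Fac = 1 only the second\nmix.inputs['Fac'].default_value = 0.5\n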

Now you have a window looking inside the fish! It's time to give the fish some actual fishy colors with Project from view UV-mapping!

Bonus (only if you have time left): As you can see, the bones also contain the swim bladder, which looks the same as the bones because the same material is assigned to it. Try to select the swim bladder's vertices and assign a different, more fitting material to the swim bladder.

"},{"location":"advanced/advanced_materials/advanced_materials_assignment/#project-from-view-uv-mapping-add-actual-skin-to-the-fish","title":"Project from view UV-mapping - Add actual skin to the fish.","text":"

To add a real fish texture, or actually a photo of a carp, to the fishskin 3D object you can use the technique called Project from view UV-mapping. For this we introduce a new panel called the UV Editor. Before we go to the UV Editor we need to add an Image Texture-node to the fishskinmat.

  1. In the Shader Editor select the fishskinmat (slot 1) from the Material slot menu in the middle left of the top bar of the Shader Editor.
  2. Add an Image Texture-node to the material with Shift-A > Texture > Image Texture, connect the Color output to the Principled BSDF-node Base Color input, and open the carp.jpg texture from the data/ directory.
  3. Next add a Texture Coordinate node with Shift-A > Input > Texture Coordinate and connect the UV output to the Image Texture-node Vector input.

The fish is now black because the UV coordinates are not defined yet. That is what we will do in the UV Editor.

  1. Now that we do not need the Shader editor anymore we can replace it with the UV Editor. In the corner of the panel click the Editor Type-button and select the UV Editor from the list.
  2. Before we can start UV-mapping we need to be in Edit mode in the 3D viewport. In the 3D viewport panel press TAB to enter edit mode.
  3. Now select all geometry by pressing A.

To properly project from view you have to choose the right view to project from. We are going to map a photo of a carp which was taken from the side. In order to properly map the photo on the 3D object we also need to look at it from the side.

  1. Press BACK-TICK to open the view radial pie-menu and select Right, or use the 3D Viewport menu in the header (View > Viewpoint > Right).
  2. Now press U to open the UV-mapping-menu and select Project from view.
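
This projection is also available as an operator, bpy.ops.uv.project_from_view. A minimal sketch is shown below; note that operators like this are context sensitive, so it only works when run with the 3D Viewport active, in Edit mode, viewing the fish from the right:

import bpy\n\n# Run with the 3D Viewport active, in Edit mode, viewing the fish\n# from the right, with all geometry selected\nbpy.ops.uv.project_from_view(orthographic=False, scale_to_bounds=False)\n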

Now you can see the UV coordinates are mapped in the UV Editor but they are not properly scaled to fit the photo of the carp.

  1. Make sure that everything is still selected and then, within the UV Editor, press S and scale the UV-coordinates until they align with the photo of the carp.
  2. Scaling it alone is not enough. The UV-coordinates need to be moved a bit; use G to grab the UV-coordinates and translate them to better match the photo.

As you might have noticed it is not possible to completely match the photo without deforming the UV-coordinates.

  1. Before we start deforming parts of the UV-coordinates you need to activate Proportional editing by pressing the Proportional editing button in the top bar of the UV Editor. Proportional editing moves all UV-coordinates within the defined radius along with the currently selected UV-coordinates.
  2. Now select a UV-coordinate in the UV Editor that needs to be moved and press G.
  3. While grabbing, scroll with your mouse wheel to decrease or increase the Proportional editing radius and move your mouse to see the effect.
  4. Now, with this Proportional editing, try to match the UV-coordinates to the photo of the carp as well as possible.

Tip!: Whenever you are editing the UV-map in the UV editor it can be difficult to see how the texture is mapped on the 3D object, because all vertices, edges and faces are visible in Edit mode. You can toggle between Edit mode and Object mode in the 3D Viewport panel to have a better look at the mapped texture.

"},{"location":"advanced/advanced_materials/advanced_materials_assignment/#the-moon","title":"\ud83d\udcbb The moon","text":"

This moon exercise doesn't have a prepared blend file because you are going to make it all by yourself! So open a new blend file and start making the moon.

"},{"location":"advanced/advanced_materials/advanced_materials_assignment/#the-basic-scene-sphere-sun-and-the-darkness-of-space","title":"The basic scene - Sphere, sun and the darkness of space","text":"

To create the moon we first need to prepare a very simple scene.

  1. First off we need to remove the default cube (the cube that comes with a new blend file, whose only function is to be removed :'( ).
  2. Add a UV Sphere instead with Shift-A > Mesh > UV sphere.
  3. Set the UV Sphere's shading to smooth through the 3D Viewport menu at the top of the 3D Viewport (Object > Shade Smooth).
  4. Select the default Light object in the Outliner and change it to a Sun light in the Light-tab in the Properties-panel on the right.
  5. Now change the shading in the 3D viewport to Rendered by pressing Z and then selecting Rendered. This rendered view is by default set to Eevee; to change that to Cycles for more realistic lighting, go to the Render Properties-tab in the Properties-panel and change the Render Engine to Cycles.
  6. As you can see the sun is now way too bright. Lower the Strength of the sun from 1000 to 10 in the Light-tab in the Properties-panel. No need to have the power of a thousand suns.
  7. Now that we have the sun we need to disable the World-lighting (the grey ambient light) since we only need the sun as a direct light source like it is in space. Go to the World properties-tab in the Properties-panel and set the Color in the Surface-section all the way to black.
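
If you want to practice your scripting at the same time, this whole basic scene can also be set up from Python. A sketch, assuming the default startup scene with objects named Cube and Light:

import bpy\n\n# Remove the default cube\nbpy.data.objects.remove(bpy.data.objects['Cube'], do_unlink=True)\n\n# Add a smooth-shaded UV sphere\nbpy.ops.mesh.primitive_uv_sphere_add()\nbpy.ops.object.shade_smooth()\n\n# Turn the default light into a (not too bright) sun\nlight = bpy.data.lights['Light']\nlight.type = 'SUN'\nlight.energy = 10\n\n# Render with Cycles and disable the grey ambient world light\nbpy.context.scene.render.engine = 'CYCLES'\nbg = bpy.context.scene.world.node_tree.nodes['Background']\nbg.inputs['Color'].default_value = (0, 0, 0, 1)\n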

Now that we have the basic scene of a sphere in space, we are going to make it look like the moon by adding textures.

"},{"location":"advanced/advanced_materials/advanced_materials_assignment/#applying-a-material-and-texturing-the-moon-thats-one-small-step","title":"Applying a material and texturing the moon - That's one small step...","text":"

Before we can edit the material we need to open the Shader Editor. For this we need to slightly modify the interface.

  1. Grab the edge between the 3D viewport-panel and the Timeline-panel by hovering over the edge until you see the resize cursor, then click and drag the edge up until it covers about half of the Blender window.
  2. Now click the upper left Editor type dropdown menu (currently showing the Timeline icon) and select the Shader Editor.
  3. In the Shader Editor add a new material.
  4. In this material add 2 Image Texture-nodes, 1 Texture Coordinate-node and 1 Displacement-node (Shift-A > Vector > Displacement).
  5. Connect the Texture Coordinate-node UV output to both Image Texture-nodes Vector inputs.
  6. Connect one of the Image Texture-nodes' Color output to the Principled BSDF-node Base Color input and the other's Color output to the Displacement-node Height input.
  7. Finally connect the Displacement-node Displacement output to the Material output-node Displacement input.
  8. Open the data/moon_textures/lroc_color_poles_8k.tif in the Image Texture-node that is connected to the Principled BSDF-node Base Color.
  9. Open the data/moon_textures/ldem_16.tif in the Image Texture-node that is connected to the Displacement-node Height input.
  10. Finally, set the Image Texture-node Color Space-parameter of the node with the displacement texture to Non-Color.
  11. Initially the Displacement-node Scale parameter is set way too high making the moon look horrible. Set this parameter to 0.001.
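
The displacement part of this node setup, expressed as a Python sketch (with the sphere's material active, and the blend file saved next to the data directory):

import bpy\n\nmat = bpy.context.object.active_material\nnodes = mat.node_tree.nodes\nlinks = mat.node_tree.links\n\nheight = nodes.new('ShaderNodeTexImage')\nheight.image = bpy.data.images.load('//data/moon_textures/ldem_16.tif')\nheight.image.colorspace_settings.name = 'Non-Color'\n\ndisp = nodes.new('ShaderNodeDisplacement')\ndisp.inputs['Scale'].default_value = 0.001\n\nlinks.new(height.outputs['Color'], disp.inputs['Height'])\nlinks.new(disp.outputs['Displacement'], nodes['Material Output'].inputs['Displacement'])\n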

As you can see it already looks quite like the moon but with some final tweaking you will get even more realism.

"},{"location":"advanced/advanced_materials/advanced_materials_assignment/#adaptive-displacement-revealing-the-craters-mooore-details","title":"Adaptive displacement - Revealing the craters! Mooore details!","text":"

Everything we have seen until now has been rendered in the default EEVEE rendering engine, which is very powerful for visualization purposes, but if you want to add that extra little bit of realism with adaptive displacement you have to use the Cycles rendering engine.

  1. Activate the Cycles rendering engine with the Render Engine setting in the Render properties-tab of the Properties-panel.

While we are there, to be able to use adaptive displacement, we need to activate the Cycles experimental feature set.

  1. Set the Feature Set to Experimental.
  2. The Experimental feature set adds an extra section called Subdivision to the current properties tab. In this section set Viewport to 2.

Now we need to add a Subdivision Surface modifier, which also gains a new setting from the Experimental feature set that enables adaptive displacement.

  1. Add a Subdivision Surface modifier in the Modifier properties-tab of the Properties-panel.
  2. Enable the Adaptive Subdivision setting in this modifier.

Until now you will only have seen slight differences, because there is one more setting that has to be changed to make all of this worthwhile.

  1. Change the Displacement setting to Displacement Only in the Properties-panel > Material properties-tab > Settings-section > Surface-subsection.
  2. Now zoom in and toggle to the Edit mode and back, which re-triggers the adaptive subdivision computations, and see the craters in their full glory.
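
These settings can also be toggled from Python. The property names in the sketch below are assumptions based on the Cycles add-on in Blender 3.x; if they don't work for your version, verify them with the Python Tooltips option mentioned in the scripting chapter.

import bpy\n\nscene = bpy.context.scene\nscene.render.engine = 'CYCLES'\nscene.cycles.feature_set = 'EXPERIMENTAL'\n\nobj = bpy.context.object\nobj.modifiers.new('Subdivision', 'SUBSURF')\nobj.cycles.use_adaptive_subdivision = True  # assumed property name\n\n# Displacement Only, set on the material\nobj.active_material.cycles.displacement_method = 'DISPLACEMENT'  # assumed\n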

Bonus: For an artist's rendition of the moon, change the Displacement-node Scale parameter to a higher value and see how the craters get more noticeable (although less realistic).

"},{"location":"advanced/advanced_materials/introduction/","title":"Introduction","text":"

This chapter will introduce the Shader Editor and UV Editor of Blender, which let you create advanced materials to improve the look of your visualizations. The Shader Editor and UV Editor go hand in hand: with the UV Editor (and 3D Viewport) you'll learn how to UV-unwrap your meshes and manipulate the UV-coordinates, and with the Shader Editor you'll project procedural or image textures based on the created UV-coordinates.

You'll learn how to apply PBR (Physically Based Rendering) style textures, and where to find them, to make your objects look photorealistic.

And lastly, a commonly used experimental feature called Adaptive Subdivision will be combined with vertex displacement to create some great-looking micro-displacement details on the surfaces of your objects.

Before you start with the exercises, the following video will give you the theoretical and practical background for them. The video contains some Blender walk-throughs; if you want to follow along you can use the walk-through files in the walkthroughs/advanced/advanced_materials directory.

After you watched the video about advanced materials you are ready for the exercises!

"},{"location":"advanced/advanced_materials/node-wrangler/","title":"Node-wrangler reference","text":"

The node-wrangler add-on brings a wide variety of new features and hot-keys to automate steps within the Shader Editor and make life easier. In the walk-through only two features were shown, the 'Shader viewer' (Ctrl+Shift+LMB) and 'Add Texture Setup' (Ctrl+T), two very useful hot-keys, but this is only the tip of the iceberg.

To see the full set of features/hotkeys that node-wrangler provides you need to go to Menu bar 'Edit' > Preferences... > Tab 'Add-ons' > Search for 'Node wrangler' > Show Hotkey List (see image below). For additional information on what each individual feature does please refer to the official documentation.

Warning

The hotkeys in the official documentation have not yet been updated for Blender 2.8+; therefore use the documentation only for the description of each feature and use the \"Show Hotkey List\" for the current hotkeys.

"},{"location":"advanced/advanced_materials/vertex_colors/","title":"Visualizing vertex colors with the Attribute node","text":"

In the basics course we already introduced the use of vertex colors via the Material-tab in the Properties-panel. What happens under the hood is that you basically add an Attribute-node to the node-network and attach its Color-output to the Base Color-input of the Principled BSDF shader-node (see images below).

Shader Editor node-network

3D viewport result

The blend file for the image above, vertex-color.blend, can be found among the walk-through files in the walkthroughs/advanced/advanced_materials directory.

"},{"location":"advanced/animation/2_assignment_cars/","title":"\ud83d\udcbb \"Cars\": the movie","text":"

In this exercise you can do some more complex keyframe animation by having multiple objects move to create a city full of driving cars. You will need basic keyframing skills and use of the Graph Editor.

  1. Load cars.blend

This scene has a very simple city with some buildings and some cars. An animation of 250 frames has been set up in the file, starting at frame 1 and ending at frame 250.

Tip

All the geometry of the buildings is in the collection called \"Collection 2\". You can hide all these objects by clicking the eye icon right of \"Collection 2\" in the outliner.

  1. Change to the first frame in the animation with Shift-Left. Note that you can see the current frame you're working in by the blue vertical line in the Timeline at the bottom. Also, in the 3D view there's a piece of text in the upper-left that reads (1) Scene Collection | Plane: the current frame is listed between the parentheses.
  2. In the scene there are two cars behind each other. Select the front car of the two.
  3. Enter a keyframe for the car's location and rotation: press I followed by picking LocRot
  4. Change to the last frame in the animation with Shift-Right
  5. Move the car to the end of the road it's on, along the Y axis
  6. Enter another LocRot keyframe with I
  7. Check the car movement by playing back the animation with Space, or by changing the time in the Timeline editor with Shift-RMB
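
For reference, the same two keyframes could also be inserted from Python with keyframe_insert. A sketch, where the Y offset of 20 is just an example distance along the road:

import bpy\n\ncar = bpy.context.object  # the selected car\n\ncar.keyframe_insert(data_path='location', frame=1)\ncar.keyframe_insert(data_path='rotation_euler', frame=1)\n\ncar.location.y += 20.0  # example movement along the road\ncar.keyframe_insert(data_path='location', frame=250)\ncar.keyframe_insert(data_path='rotation_euler', frame=250)\n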

The car's speed currently is not constant: it speeds up near the beginning of the animation and slows down starting somewhere halfway. We can edit the curve for the Y location channel in the Graph Editor to influence this behaviour.

  1. In the Graph Editor on the left of the screen, show all the location and rotation values being animated for the selected car by using the little triangle left of the name Object Transforms. Below Object Transforms you should now see the 6 channels for which you created keyframes in the steps above: X, Y and Z Location, and X, Y and Z Euler Rotation.
  2. Click the eye icon next to Object Transforms to hide all the channels. Then click the eye next to Y Location to only show the graph for the Y location. Note that you can use the Home key to zoom to the full extent of the graph.

You should now see a curved line in green with two orange filled circles at the times of the beginning and end of the animation, i.e. frames 1 and 250. Attached to these points are \"handles\" (the lines that end in open circles) that influence the shape of the curve.

  1. Select the open circular endpoints of the handles and move them around. See what this does for the shape of the curve and the subsequent behaviour of the car in the animation.

The two curve points are selectable with Shift-LMB, but also with, for example, border select (B key). This works just like normal object selection. Deleting keyframes can then be done with X.

  1. Select both curve points with A, then press V to bring up the Keyframe Handle Type menu. This menu allows you to change how the curve is shaped based on the position of the handles.
  2. Select Vector. Notice how the curve's shape changes. See what happens when you move the handle endpoints.
  3. Press V again and choose Free. Again change the handle endpoints.
  4. Try out how the different curve shapes you can produce influence the car behaviour.
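
Handle types are exposed in the Python API as well. This sketch sets every keyframe of the car's Y Location channel to Vector handles:

import bpy\n\ncar = bpy.context.object\naction = car.animation_data.action\n\nfor fc in action.fcurves:\n    # array_index 1 selects the Y component of 'location'\n    if fc.data_path == 'location' and fc.array_index == 1:\n        for kp in fc.keyframe_points:\n            kp.handle_left_type = 'VECTOR'\n            kp.handle_right_type = 'VECTOR'\n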

Now let's animate another car: the one at the start of the road with the bend in it.

  1. Animate the second car to move over the bent road all the way to the end.
"},{"location":"advanced/animation/2_assignment_cars/#bonus","title":"Bonus","text":"

Make the cars drive over the road, choosing yourself which car goes in what direction, how fast, which turns are made, etc. But don't make cars go through each other, and have them wait if needed.

Add a camera that shows the busy streets in action :)

"},{"location":"advanced/animation/3_assignment_flipbook/","title":"Flipbook animation","text":"

As mentioned in the animation chapter's video, flipbook animation is a simple animation technique in which a mesh is changed over time. Such changing meshes occur quite frequently in (scientific) simulations.

In general there are two different situations when it comes to an animated mesh:

  • The mesh topology stays fixed over time, but its vertex positions change each time step
  • The mesh topology and its vertices change over time

The exercise below shows a general technique for handling any set of animated meshes (so for both types above), which are loaded individually from files. This technique has no restrictions on changing mesh topology, but is somewhat involved, as it uses a Python script to set up the animation.

Below we also describe two modifiers that are available in Blender, each usable for one of the types above.

"},{"location":"advanced/animation/3_assignment_flipbook/#using-python-to-set-up-an-animated-mesh","title":"\ud83d\udcbb Using Python to set up an animated mesh","text":"

Here, we'll get more familiar with the flipbook animation approach, in which a series of meshes is animated over time by switching a single object's mesh data each frame.

  1. Extract dambreak.tar.gz in the same directory as animated_ply_imports.blend. These files are located in the data/advanced/animation directory.
  2. Load animated_ply_imports.blend

    This blend file contains not only a 3D scene, but also some Python scripts we use to set up the flipbook animation.

  3. The first step is to load the whole dataset of timesteps using one of the scripts. This might take a bit of time, depending on the speed of your system.

    Execute the script that imports the PLY files for the time steps. To do this step make sure the script called 1. import ply files is shown in the text editor panel. Then press the button in the top bar to run the script.

    Tip

    By default, only the first 100 steps are loaded. You can increase the number of files to the full 300 if you like by updating the variable N in both the import script and the animation handler script.

  4. The cursor changes to a numbered black square indicating the percentage of loading that has been completed. If you get the idea something is wrong, check the console output in the terminal where you started Blender to see if there are any error messages.

  5. After all PLY files are loaded execute the script that installs the frame change handler. This script is called 2. register anim handler. Make sure the text editor is switched to this script and press the play button.

  6. Verify that the flipbook animation works with Space and/or moving the time slider in the Timeline with Shift-RMB.

    The playback speed will not only depend on the framerate setting, but also on your system's performance

  7. Change the Frame Rate value (in the Output properties tab at the right side of the screen, icon ) to different values to see how your system handles it. Is 60 fps feasible?
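
The core of the frame change handler script is a function registered with bpy.app.handlers that swaps the object's mesh data based on the current frame. A minimal sketch of the idea is shown below; the mesh and object names are assumptions, and the actual script in the blend file may differ:

import bpy\n\ndef flipbook_update(scene, *args):\n    # Assumes the imported meshes are named step0001, step0002, ...\n    mesh = bpy.data.meshes.get('step%04d' % scene.frame_current)\n    if mesh is not None:\n        bpy.data.objects['fluid'].data = mesh  # assumed object name\n\nbpy.app.handlers.frame_change_pre.append(flipbook_update)\n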

Use your skills with keyframe animation to do one of the following things (or both if you feel like it ;-)):

  • Have a camera follow the moving water in some cool way
  • Place a surfer on the moving wave of water. You can import the PLY model silver_surfer_by_melic.ply to use as 3D model. You can load it in Blender with File > Import > Stanford (.ply).
"},{"location":"advanced/animation/3_assignment_flipbook/#alternatives-using-modifiers","title":"Alternatives using modifiers","text":"

The above method uses a bit of a hack with Python to set up mesh changes over time. Although it's flexible (it can work with any type of file format by editing the import code), it is also a bit fragile, needs to load all meshes into memory at once, etc.

In recent versions of Blender two modifiers were introduced that can be used for similar animation setups, although they each have their limitations. We describe them here in case they are useful for certain situations you might encounter.

"},{"location":"advanced/animation/3_assignment_flipbook/#mesh-sequence-cache-modifier","title":"Mesh Sequence Cache Modifier","text":"

The Mesh Sequence Cache Modifier takes one or more Alembic or USD files and sets up a time-varying mesh from those. The animated mesh data can either come from a single file (containing multiple time steps), or from multiple files (each containing a single time step).

The limitation of only supporting Alembic and USD file formats is somewhat unfortunate, but understandable, since those formats support storing animated meshes in a single file and they are used extensively in visual effects and animation.

If you want to use this modifier then you need to create an Alembic or USD file (or set of files) containing your animated mesh. If you then import that file the Mesh Sequence Cache modifier will be added automatically to set up the animation.

Tip

An example USD file to load can be found in data/advanced/animation/animated_plane.usdc. The file was created by exporting the example animation described below (involving gen_pc2_anim.py) from Blender to a USD file.

"},{"location":"advanced/animation/3_assignment_flipbook/#mesh-cache-modifier","title":"Mesh Cache Modifier","text":"

The Mesh Cache Modifier works somewhat differently in that it is applied to an existing mesh object and will animate the vertex positions (only) of that mesh. The modifier supports reading the animated vertex data from an MDD or PC2 file.

Fixed mesh topology

The animated vertex data in the MDD or PC2 file is assumed to use the same vertex order over all time steps. The animated mesh can also not have a varying number of vertices, or a changing topology.

This means that, for example, the animated wave dataset from the exercise above cannot be represented as a series of .pc2 files, as the mesh size in vertices and its topology change over time.

The MDD file format is mostly used to exchange data with other 3D software, while PC2 is a general and simple point cloud caching format. Blender contains add-ons for exporting MDD and PC2 files, but they are not enabled by default. When enabled you can use them to convert a mesh sequence in a different format to one of these.

The PC2 file format is very simple, and can easily be written from, say, Python or C++. The format looks like this (based on information referenced here, and example Python code here):

  • The start of a .pc2 file is a 32-byte header containing:

    char    cacheSignature[12];   // 'POINTCACHE2' followed by a trailing null character.\nint32   fileVersion;          // Currently 1\nint32   numPoints;            // Number of points (i.e. vertices) per sample\nfloat   startFrame;           // Frame number where animation starts\nfloat   sampleRate;           // Duration of each sample *in frames*\nint32   numSamples;           // Defines how many samples are stored in the file.\n
  • Following the header, each set of point positions (collectively called a \"sample\") is stored consecutively. Each sample is stored one after the other as a flat array of x/y/z 32-bit floats for each point. So each sample uses numPoints * sizeof(float) * 3 bytes.

All in all, a .pc2 file provides a fairly compact method of storing a set of animated mesh vertices. Together with the Mesh Cache modifier they can be used to easily set up a mesh animation, for cases where only vertex positions need to be animated.
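
As an illustration of how simple the format is, here is a hedged sketch of a PC2 writer in Python. It assumes samples is a list of frames, each a list of (x, y, z) tuples with a constant number of points:

import struct\n\ndef write_pc2(fname, samples, start_frame=1.0, sample_rate=1.0):\n    num_points = len(samples[0])\n    with open(fname, 'wb') as f:\n        # 32-byte header; the 'x' pad byte writes the trailing null\n        # after the 11-character 'POINTCACHE2' signature\n        f.write(struct.pack('<11sxiiffi', b'POINTCACHE2', 1, num_points,\n                            start_frame, sample_rate, len(samples)))\n        for sample in samples:\n            for x, y, z in sample:\n                f.write(struct.pack('<fff', x, y, z))\n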

Tips

  • Note that the topology of the animated mesh is not stored in the .pc2 file and needs to be defined by creating a mesh in Blender first. After that, apply the Mesh Cache modifier and set the .pc2 file to use.
  • You can update the .pc2 file without having to re-apply the modifier. Blender will re-read the file when the frame number changes.
  • See data/advanced/animation/gen_pc2_anim.py for a simple example of generating and using a .pc2 file.
"},{"location":"advanced/animation/introduction/","title":"Introduction","text":"

The basics of (keyframe) animation in Blender were already discussed in the Basics course, but if you need to refresh your memory then you can use this video:

"},{"location":"advanced/animation/shape_keys/","title":"Shape keys","text":""},{"location":"advanced/animation/shape_keys/#overview","title":"Overview","text":"

Shape keys can be used for a very specific type of animation: to morph one mesh into another over time, or to blend multiple meshes together into one result. This can be used, for example, to show the time-evolution of some object, or to highlight differences between two meshes. Although this is a fairly specific use case, shape keys aren't too difficult to understand and use, hence we include this section.

There are some limitations to using shape keys:

  • The two meshes must have the same number of vertices
  • Preferably the two meshes should have the same topology (i.e. the way in which the vertices are connected to form polygons). If the topology doesn't match then strange results during morphing can occur.

The above are fairly annoying limitations, but there is currently no easy way around them in Blender.

"},{"location":"advanced/animation/shape_keys/#poor-bunny","title":"\ud83d\udcbb Poor Bunny","text":"
  1. Load bunny_shape_keys.blend
  2. This scene contains the Stanford Bunny and a completely flattened version of the Bunny
  3. Verify that these meshes have the same number of vertices. Do a visual comparison in wireframe mode (Z > Wireframe)

We'll now add some shape keys:

  1. Select the regular Bunny.
  2. Add a shape key under Shape Keys in the Mesh properties using the + button. The new shape key will be called Basis.
  3. Add a second shape key, it will be called Key 1 and have a default influence of 0.000.
  4. Select the Key 1 shape key and enter mesh edit mode in the 3D view with TAB and make sure you're in vertex mode by pressing 1
  5. Select parts of the Bunny mesh and transform them as you like. The changes should be clearly visible.
  6. Exit mesh edit mode with TAB. You should notice that the mesh returns to its normal shape.
  7. Change the influence Value of Key 1 to see what happens to the resulting mesh. You can either click on it and enter a number, or click and drag the value.
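
Shape keys can also be created and driven from Python, which is handy when the deformation comes from data. A minimal sketch on the selected object:

import bpy\n\nobj = bpy.context.object  # the Bunny\n\nbasis = obj.shape_key_add(name='Basis')\nkey1 = obj.shape_key_add(name='Key 1', from_mix=False)\n\n# Offset the shape key's copy of every vertex a bit upwards\nfor pt in key1.data:\n    pt.co.z += 0.1\n\n# The influence of a shape key is its 'value'; it can be keyframed too\nkey1.value = 0.5\nkey1.keyframe_insert(data_path='value', frame=1)\n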

Let's add another shape key:

  1. Add a third shape key, it will be called Key 2.
  2. Select Key 2 and apply a second set of mesh changes in edit mode.
  3. Once again exit edit mode.
  4. Play around with the influence values of both shape keys, as well as the checkboxes next to the influence values.

Checking the difference between relative and absolute shape keys:

  1. Uncheck the Relative checkbox to switch to absolute shape keys. Notice that the influence values have now disappeared.
  2. Change the Evaluation Time value to understand how the morphing of the meshes is done now.

Using another mesh to define a shape key:

  1. Delete shape keys Key 1 and Key 2 using the - button and change back to relative shape keys by checking the Relative checkbox.
  2. Select the flattened mesh and then Shift-click the Bunny mesh to add it to the selection and make it the active object.
  3. Open the shape key menu using the downwards arrow below the + and - buttons. Select Join as Shapes.
  4. There should now be a new shape key called flattened mesh. Note that this shape key is only set on the Bunny mesh, not on the flattened mesh object itself.
  5. Vary the influence of the shape key called flattened mesh to see the Bunny melt.
  6. Delete the flattened mesh object in the Outliner. Does the shape key that morphs the Bunny to its melted flat shape still work?

Looking closer at the behaviour of the mesh morphing:

  1. Try to reason why the head of the Bunny is the last part to melt.
  2. Zoom in a bit to see if you can spot the twisting motion that mesh makes as it melts.
  3. Try to transform the mesh in the melted shape key in such a way as to minimize the twist. Or toy around with other mesh transforms to see what morphs come out. Note that you need to make the changes in edit mode.
"},{"location":"advanced/final_project/final_project/","title":"\ud83d\udcbb Final project: making a visualization of your own data","text":"

We would like you to spend the remainder of your time in this course on doing this little project. We have two options for you to choose from. The first and recommended one is making a visualization of your own (research) data. The second option is that you work on a visualization of data we have prepared.

Do not forget, if you are stuck, to join us on Discord or in a feedback webinar so we can help. See the Course overview for more information.

If you made a nice visualization and still have time left in the course, why not make an animation?

"},{"location":"advanced/final_project/final_project/#option-1-your-own-data","title":"Option 1: your own data","text":"

So far you have learned how to make meshes and vertex colors in Blender using Python. Think about whether you can visualize your data using these techniques. You need to consider how to transform your data into a form that can be used to generate vertices, faces and vertex colors. And how do you want to visualize your data values? Can you visualize them through the Cartesian coordinates of the vertices and faces and maybe some colors? Do you need to use vertex coloring? Or do you need something else? Note that volumetric data will be difficult in Blender and you may need to think of some tricks.

"},{"location":"advanced/final_project/final_project/#option-2-visualize-a-computer-model-of-a-proto-planetary-disk","title":"Option 2: visualize a computer model of a proto-planetary disk","text":"

Although we highly recommend working on your own data, if you have none to use you can work on the following data. Here we give a brief introduction to it.

"},{"location":"advanced/final_project/final_project/#what-is-a-proto-planetary-disk","title":"What is a proto-planetary disk","text":"

A proto-planetary disk is a disk-like structure around a newly born star. This disk is filled with dust (solid-state particles with a diameter in the order of 1 micrometer) and gas. In the course of time this dust and gas can coalesce into planets. In this option we will look at a computer model of the dust in such a disk. The model calculates the temperature and density of the dust in the disk, taking the radiation and gravity of the star into account.

The calculations of the software (called MCMax) are done iteratively using Monte Carlo techniques. Packages of photons are emitted by the star in random directions, with their wavelengths sampled from the radiation distribution of the star (by default a blackbody). Using the absorption, scattering and emission properties of the dust grains in the disk, the scattering, absorption and re-emission of the photons are calculated throughout the disk. This is used to calculate a temperature structure in the disk. This temperature is then used to adapt the starting density structure of the disk, after which a new pass is done by tracking the next set of photons and subsequently adapting the density. This is repeated until convergence is reached. The code uses a two-dimensional (adaptive) grid in the radial and theta directions. The disk is assumed to be cylindrically symmetric around the polar axis (z-axis, see Fig. 1). The grid cell size is reduced in regions where the density becomes high.

Figure 1: definition of coordinates

"},{"location":"advanced/final_project/final_project/#how-to-start-visualizing-such-a-proto-planetary-disk","title":"How to start visualizing such a proto-planetary disk","text":"

You could create a 3D model of the disk at constant density and display the temperature as colors on the surface of the model. You could use this to make nice renders and animations to show the temperature structure of the disk. For this we need to pre-process the data from the model to get the spatial coordinates of the disk at a constant density. These coordinates then need to be converted into Cartesian coordinates of vertices and faces before creating the geometry in Blender. You can then add the temperatures to the faces using vertex coloring and by adding the needed shaders to the model.

"},{"location":"advanced/final_project/final_project/#how-the-model-data-is-structured","title":"How the model data is structured","text":"

You can download the data here. An example output file of the modeling code MCMax is shown below.

# Format number\n     5\n# NR, NT, NGRAINS, NGRAINS2\n   100   100     1     1\n# Spherical radius grid [cm] (middle of cell)\n   7479900216981.22     \n   7479900572789.07     \n[...]\n# Theta grid [rad, from pole] (middle of cell)\n  9.233559849414326E-003\n  2.365344804038962E-002\n[...]\n# Density array (for ir=0,nr-1 do for it=0,nt-1 do ...)\n  1.001753516582521E-050\n  1.001753516582521E-050\n[...]\n# Temperature array (for ir=0,nr-1 do for it=0,nt-1 do ...)\n   1933.54960366819     \n   1917.22966277529     \n[...]\n# Composition array (for ir=0,nr-1 do for it=0,nt-1 do ...)\n   1.00000000000000     \n   1.00000000000000     \n[...]\n# Gas density array (for ir=0,nr-1 do for it=0,nt-1 do ...)\n  1.001753516582521E-048\n  1.001753516582521E-048\n[...]\n# Density0 array (for ir=0,nr-1 do for it=0,nt-1 do ...)\n  1.001753516582521E-050\n  1.001753516582521E-050\n[...]\n
The file is structured in the way the scientist thought best at the time, using the tools at hand. For us it is important to note NR and NT, which stand for the number of radial and theta points respectively (NGRAINS is related to the number of different types of dust grains in the disk; you can ignore this). The output file then lists the radius points and after that the theta points. Subsequently the density and temperature values (among others) are listed by iterating over the radius and then the theta indices. The units of all the values in the MCMax output are: R[cm], Theta[radians], Density[gr/cm^3], Temperature[K].

The data from the MCMax code is in spherical coordinates, while the system in Blender works with Cartesian coordinates. The theta in the output is defined as the angle with the z-axis (See Fig. 1).
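
The conversion itself is short. With theta measured from the z-axis, and an added angle phi for the rotation around the polar axis (which you choose yourself when sweeping the 2D grid into a 3D disk), a sketch would be:

from math import sin, cos\n\ndef spherical_to_cartesian(r, theta, phi):\n    # theta: angle with the z-axis (MCMax convention)\n    # phi: angle around the polar axis\n    x = r * sin(theta) * cos(phi)\n    y = r * sin(theta) * sin(phi)\n    z = r * cos(theta)\n    return (x, y, z)\n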

"},{"location":"advanced/final_project/final_project/#how-it-could-look","title":"How it could look","text":"

To help you get an idea of what the data of the proto-planetary disk might look like, check this video we made:

"},{"location":"advanced/mesh_editing/introduction/","title":"Introduction","text":"

Info

This chapter is an extension of the Basics course Simple mesh editing chapter, so the walkthrough of that chapter should suffice as background.

This chapter will give you an introduction to the Edit mode of the 3D viewport, where you will learn how to patch up your imported meshes/visualizations and even how to generate your own 3D shapes.

To refresh your memory on basic mesh editing you can watch the Simple mesh editing intro video of the Basics part below:

"},{"location":"advanced/mesh_editing/mesh_editing_assignment/","title":"\ud83d\udcbb Mesh Editing with the Edit mode","text":"

This assignment will be a brief introduction to the Edit mode in the 3D viewport.

Once you have opened the exercise blend file sme_assignment.blend you'll see the familiar fish iso-surface above a plane.

"},{"location":"advanced/mesh_editing/mesh_editing_assignment/#getting-familiar-with-the-edit-mode","title":"Getting familiar with the Edit mode","text":"

To edit the mesh we first need to go to Edit mode with the fish.

  1. Select the fish and enter Edit mode by pressing Tab. Depending on the speed of the system you're working on, Edit mode might be entered instantly or might take half a second. In general, switching to Edit mode takes longer for larger meshes.

Now you will be able to see all the vertices, edges and faces that make up the 3D model. You will now try to select and move around some vertices, edges and/or faces.

  1. Change the Mesh Select Mode to Vertex by pressing 1 (or click the left icon in the 3D view header). This might already be active by default, but it will be highlighted in the icons in the 3D view header.
  2. Before you start selecting, de-select all currently selected vertices by pressing Alt-A or by pressing A twice rapidly.
  3. Now try to select a single vertex by clicking on it with the LMB, or multiple with Shift-LMB. You might have to zoom in a bit to separate the vertices enough.
  4. Another method is to use the selection tools:
    1. Box selection by pressing B and dragging a box around the vertices you want to select. Hold Shift to de-select.
    2. Circle selection by pressing C and left-clicking and dragging with the mouse over the vertices you want to select. To increase the size of the Circle selection tool simply scroll with your mouse wheel. By dragging with MMB you can de-select vertices. Press Enter to exit circle select mode (or use RMB).
  5. Once you selected your vertices you can transform them the same way you can do with objects by pressing the hotkeys G for translation, R for rotation, and S for scaling, etc.
  6. After your vertex editing the fish probably looks a bit scrambled. One way to clean it up is, of course, using Ctrl-Z to undo. Another way is simply deleting the vertices using the Delete popup menu X > Vertices. Try to remove part of the fish skin so that it leaves a hole in the mesh, which will reveal a part of the inside of the fish.

Tip!: If your fish has been \"meshed-up\" beyond repair you can always revert it to the last saved state with: File > Revert > Confirm.

"},{"location":"advanced/mesh_editing/mesh_editing_assignment/#filling-the-holes","title":"Filling the holes","text":"

An imported mesh from a 3D visualization program can sometimes contain unwanted holes or separations in parts of the mesh; these can also be fixed in Edit mode. Conveniently, the fish in the exercise file has already been poked full of holes, so you can fix these.

In between: To better inspect whether there are any holes left you can switch back and forth between Object mode and Edit mode, because in Object mode they are easier to see.

  1. First, make sure the whole mesh is selected by pressing A, and then remove the small holes (the size of one triangle/quad) by pressing F3 in the 3D viewport in Edit mode, typing fill holes and pressing Enter or clicking on it with LMB (this might take some time). This already cleans up a lot of the holes in the geometry!
  2. Through inspection you might notice there are some bigger holes that were not filled yet, because they were too large for the previous step. To fill these they first need to be selected: de-select everything with Alt-A, then press F3, type non manifold and press Enter or click on it with LMB.
  3. This selected the big holes, but also other non-manifold geometry. To select only one of the holes hold CTRL+SHIFT and drag with LMB over one of the holes. This de-selects everything except what was in the drag-box.
  4. Now this selected hole can easily be fixed by pressing f.
  5. Repeat step 2 to 4 for the other 2 holes.

Tip!: The fill with f fills the hole with an n-gon, a face with more than 4 vertices. These can sometimes create shading artifacts in your final render. Another way to fill these holes is to use grid-fill (ctrl+f), which tries to fill the hole with a grid of quad-shaped faces. This might not always work, for numerous reasons (an uneven number of vertices, closed loops, etc.) which can be fixed with additional mesh editing, but the easy route is to fill the hole with an n-gon face.

"},{"location":"advanced/mesh_editing/mesh_editing_assignment/#separating-skin-from-bones","title":"Separating skin from bones","text":"

Now that you got a little familiar with mesh editing you can try to separate the skin from the bones by using mesh separation.

  1. While still in edit mode (press Tab if not), try to select all the outside skin with the select linked selection by hovering the mouse cursor over the geometry and pressing L. This will only select a connected part of the skin so continue this step until you think you selected all the outside skin. Note that it is difficult to do this perfectly, as some of the insides of the fish are sometimes also selectable. Unfortunately, this occurs frequently with this type of sensor-based 3D data.
  2. Once you think all the skin is selected you can press P and select Selection to separate the selected surfaces from the main mesh into another mesh object. This new mesh will be added to the Outliner with the name fish.001.
  3. In the Outliner double-click LMB on the mesh object fish.001 to rename it to fishskin. Do the same for the fish mesh object and rename it to fishbones.
  4. If you now select the fishskin mesh object and hide it by clicking the little eye icon in the Outliner, the insides of the fish will be revealed.

Tips!: - To reverse the separation of the mesh into bone and skin you can select both mesh objects in Object mode and press Ctrl-J to join them back together into a single mesh. - Sometimes X-ray mode, toggled with Alt-Z, can be useful when editing a complex mesh, as it makes all geometry in a mesh partly transparent.

"},{"location":"advanced/mesh_editing/mesh_editing_assignment/#bonus-make-your-own-annotation-arrow","title":"(BONUS) Make your own annotation arrow","text":"

Since the content of this course is mostly geared towards imported geometry or scripted geometry, you might not directly think about manually created geometry. This bonus exercise, however, will show you that it is relatively easy to create your own geometry in Blender. Let's start your manual mesh creation with an annotation arrow!

  1. In the 3D viewport make sure you are in Object mode and add a new cylinder with Shift-A > Mesh > Cylinder.
  2. Press / to isolate the mesh so that there are no distractions. This can be reversed again by pressing /.
  3. Press Tab to go into Edit mode.
  4. Grab the selected geometry by pressing g, press z to move it along the z-axis only, and type 1 to move it 1 unit up so that the origin is at the bottom.
  5. De-select all the geometry with Alt-A and press 1 to set the select mode to Vertex and select all the bottom vertices (with LMB-drag over the vertices or with the b Box-select).
  6. Press s to scale them to a tiny point and press LMB to confirm.
  7. Now select the top vertices the same way you did with the bottom vertices, make sure that none of the bottom vertices are selected.
  8. Press i to inset the faces and move your mouse until you are satisfied with the width of the arrow shaft.
  9. Press e to extrude the selection and move the mouse up until you are satisfied with the length of the arrow shaft.
  10. Now press Tab and admire your newly created arrow!
  11. The arrow might now be a bit too big compared to the fish, so scale the arrow down with s, move it to a point of interest with g and rotate the arrow to your liking with r (which is made relatively easy because we put the origin at the arrow's tip).

Now that we have introduced Edit mode, and you are switching back and forth between it and Object mode, you do need to make sure which mode you are in before adding new geometry or before using one of the transform operations (grab, scale and rotate). Otherwise you might add geometry to an already existing object instead of adding a new 3D object, or you might move, scale or rotate 3D object geometry in Edit mode and inadvertently change the origin of the object. This can be confusing sometimes, but you'll get used to it!

"},{"location":"advanced/python_scripting/1_api_basics/","title":"Blender API basics","text":""},{"location":"advanced/python_scripting/1_api_basics/#introduction","title":"Introduction","text":"

Blender embeds a Python interpreter, which is used for multiple tasks. It is a central feature of Blender, as large parts of the user interface are set up and controlled from Python, as well as all add-ons (import/export, tools, etc) are written in Python.

As a user you can run scripts directly on this interpreter and also access Python modules provided by Blender, like bpy and mathutils to access scene elements. The bpy module gives access to Blender's data, functions and classes. In this section we will focus on using the Python API for automation, custom data import and manipulating geometry, but this is not all that is possible with the API, of course. The official API manual states the following things are possible using the Python API:

  • Edit any data the user interface can (Scenes, Meshes, Particles etc.).
  • Modify user preferences, key-maps and themes.
  • Run tools with own settings.
  • Create user interface elements such as menus, headers and panels.
  • Create new tools.
  • Create interactive tools.
  • Create new rendering engines that integrate with Blender.
  • Subscribe to changes to data and its properties.
  • Define new settings in existing Blender data.
  • Draw in the 3D view using Python.

All in all, the Python API is very powerful.

More detailed Python API reference

In these chapters we provide an introduction to the Python API, using a number of examples. After finishing these chapters you can find a more extensive description of often-used Python API features in the separate API section.

"},{"location":"advanced/python_scripting/1_api_basics/#good-to-know","title":"Good to know","text":"

Before we continue, we list some bits of information and some tricks that are good to know.

  • Blender uses Python 3.x, specifically 3.10 in Blender 3.1
  • You can access the online API documentation from within Blender with Help > Python API Reference
  • Starting Blender from the console will allow you to see important outputs channels (warnings, exceptions, output of print() statements, etc). See the next section how to do this.
  • The Python Console area in Blender is great for testing Python one-liners. It also has auto-completion so you can inspect the API quickly. Example code shown with >>> lines in our course notes is assumed to be running in the Python Console.

    Python Console versus terminal console

    The Python Console is something different than the console we refer to below. The Python Console is an area within the Blender user interface in which you can enter and execute Python commands:

    While the other type of \"console\" is a terminal window or DOS box from which you start Blender. This console will then contain any output and exceptions from Python scripts that you run:

    • By enabling the Python Tooltips option in the Preferences under Interface > Display you can hover over almost any button, option, menu, etc and after a second a tool-tip is shown. This tool-tip shows information on how to access this element from the Python API.
    • Right clicking on almost any button, option, menu, etc in Blender gives you the option to 1) directly go to the API documentation with Online Manual or 2) Copy Data Path. Option 2 copies Python API properties related to that element to your clipboard, to paste into your script. Note, however, that sometimes only the last part of the path is copied instead of the full path.

In the upcoming sections we will first look at how to run Python scripts in Blender. Then we look at how to access Blenders data through scripts and we follow this up with creating geometry, vertex colors and materials in the last section.

"},{"location":"advanced/python_scripting/1_api_basics/#starting-blender-from-the-command-line","title":"Starting Blender from the command line","text":"

It is important, when scripting, to start Blender from a command line interface (macOS and Linux). Warnings, messages and print() statements will output into the console. How to start Blender from the command line depends on your operating system.

  • For macOS it would be like this:

    /Applications/Blender.app/Contents/MacOS/Blender\n
  • For Linux it would be something like:

    $ <blender installation directory>/blender\n
  • On Windows you can start Blender normally (i.e. from the Start menu) and then use Window > Toggle System Console to open the console window from within Blender.

For more information on where the Blender executable is located on your system and where Blender directories of interest are located, see this manual page.

"},{"location":"advanced/python_scripting/1_api_basics/#starting-blender-from-the-console","title":"\ud83d\udcbb Starting Blender from the console","text":"

Find the Blender executable on your machine. Open Blender through the console. Delete the cube in the default project of Blender, what output is shown in the console?

"},{"location":"advanced/python_scripting/1_api_basics/#running-scripts-within-the-blender-interface","title":"Running scripts within the Blender interface","text":"

When scripting inside Blender it is convenient to use the Scripting workspace (see the arrow in Fig. 1 below). For running scripts within Blender you have two main options:

  • Using the interactive Python Console (Fig. 1A)
  • Using the built-in Text Editor (Fig. 1B)

The Python Console is very useful for testing lines of Python code, and exploring the API using auto-complete (with TAB) to see what is available. The keyboard shortcuts are a bit different than you might be used to in other text editors. See this section in the Blender manual for an overview of menu options and shortcut keys.

Blender also has its own built-in text editor which you can use (Fig. 1B) to edit Python code and execute it by pressing the button in the top bar, or using Alt-P. Note that you can have multiple different text blocks, each with their own code.

If you want to use your own editor to edit your scripts you can do this by opening the script in both the Blender Text Editor and your own editor. To refresh the Blender Text Editor use Text > Reload or Alt R (or Option R on the Mac). You can also make a script that you open in the Blender Text Editor that executes an external script you edit in your own editor. See for example the script in Fig. 1B.

Figure 1: The Scripting workspace in Blender

"},{"location":"advanced/python_scripting/1_api_basics/#running-scripts-from-the-command-line","title":"Running scripts from the command-line","text":"

You can also run Python scripts in Blender directly from the command-line interface. An example of executing a script (-P) without opening the Blender GUI (-b, for background) would be:

blender -b -P script.py\n

You can combine running a Python script with, say, rendering the first frame (-f 1) from an example test.blend file. The output will go to the directory of the blender file (-o //...) and it will generate a PNG image file (-F PNG):

blender -b test.blend -o //render_ -F PNG -f 1\n

More information on command line arguments is here.

"},{"location":"advanced/python_scripting/1_api_basics/#custom-script-arguments","title":"Custom script arguments","text":"

You might want to pass extra arguments to your script, for example to provide a frame range, or file name. For this, Blender provides the -- marker option. Any arguments passed to Blender that follow -- will not get processed by Blender, but are passed in sys.argv:

# useargs.py\nimport sys\n\n# Everything after the '--' marker is meant for this script.\n# Note: sys.argv.index('--') would raise a ValueError if the marker\n# is absent, so check for its presence first.\nargs = []\nif '--' in sys.argv:\n    idx = sys.argv.index('--')\n    args = sys.argv[idx+1:]\n\nprint(args)\n# Do something with values in args\n
$ blender -b -P useargs.py -- -myopt 1,2,3\nBlender 3.1.2 (hash cc66d1020c3b built 2022-04-02 14:45:23)\nRead prefs: /home/melis/.config/blender/3.1/config/userpref.blend\n['-myopt', '1,2,3']\n\nBlender quit\n

You can then parse these custom arguments using a regular Python module like argparse.
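
For example, a sketch that parses the -myopt argument from the invocation shown above:

# parseargs.py\nimport argparse\nimport sys\n\nargv = sys.argv\nargs = argv[argv.index('--') + 1:] if '--' in argv else []\n\nparser = argparse.ArgumentParser()\nparser.add_argument('-myopt')\noptions = parser.parse_args(args)\nprint(options.myopt)   # prints '1,2,3' for the example invocation above\n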

"},{"location":"advanced/python_scripting/1_api_basics/#using-modules-and-external-scripts","title":"Using modules and external scripts","text":"

As we've shown above there's multiple ways to run Python code within Blender, either from a text editor block, the Python Console or from the command-line. Usually, you want to use Python modules or other scripts from the code you're running. Below we describe some common situations and how to handle them.

See this manual page for more tips and tricks related to working with Python scripting in Blender.

NumPy

The official binaries of Blender from blender.org include the numpy Python module, so if you need NumPy then import numpy should work out of the box.

"},{"location":"advanced/python_scripting/1_api_basics/#loading-modules-in-blender","title":"Loading modules in Blender","text":"

For modules you want to import you can use the normal Python method of editing sys.path (as needed) and importing the module:

# Example code run from a text block within Blender\nimport bpy\nimport os\nimport sys\n\n# A path somewhere on your file system\nsys.path.append(\"/some_directory/\")\n\n# Or a path relative to the current blender file (requires the file to be saved)\nblendfile_location = os.path.dirname(bpy.data.filepath)\nsys.path.append(blendfile_location)\n\n# Import module\nimport my_python_module\n\n# Call a function from the module\nmy_python_module.do_something()\n

However, suppose you keep Blender running and edit my_python_module.py to update do_something(). Re-executing the above code will not pick up the changes in the module you're importing. The reason for this is that the Python interpreter doesn't reload a module that is already loaded, so the import my_python_module statement has no effect the second time it is called.

To force a module to get reloaded you can use the importlib module:

import my_python_module\n\n# Force reload\nimport importlib\nimportlib.reload(my_python_module)\n\nmy_python_module.do_something()\n

Note that this will re-load the module from disk every time you run the above piece of Python code.

"},{"location":"advanced/python_scripting/1_api_basics/#executing-external-scripts","title":"Executing external scripts","text":"

To execute an external Python script file you can use the following:

# Execute script_file \nexec(compile(open(script_file).read(), script_file, 'exec'))\n

You could, for example, put this snippet of code in a text block and execute it every time you need to run it (or even paste it in the Python Console). This is a fairly simple way of executing externally stored Python code, while still being able to edit the external script as needed.
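
As a sketch, such a text-block driver could look like this, with the script path taken relative to the saved blend file (the script name is just an example):

import bpy\nimport os\n\n# Path of the external script, relative to the saved .blend file\nscript_file = os.path.join(os.path.dirname(bpy.data.filepath), 'my_external_script.py')\n\nexec(compile(open(script_file).read(), script_file, 'exec'))\n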

"},{"location":"advanced/python_scripting/1_api_basics/#adding-startup-scripts","title":"Adding startup scripts","text":"

You might want to permanently run one or more Python scripts when Blender starts. You can add these scripts in a special configuration directory. The location to place these scripts is system-dependent (see this manual page for details). In general you want to place the scripts within the \"USER\" location of the platform you're working on:

  • Windows: %USERPROFILE%\\AppData\\Roaming\\Blender Foundation\\Blender\\3.1\\
  • Linux: $HOME/.config/blender/3.1/
  • macOS: /Users/$USER/Library/Application Support/Blender/3.1/

Inside the above directory create a scripts/startup directory. Any .py files placed there will be automatically executed when Blender starts. See this page for other special directories within the system-specific USER directory.
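
A startup script is a regular Python module: Blender imports it at startup and, if present, calls its register() function. A minimal sketch (the file name is arbitrary) would be:

# scripts/startup/my_startup.py\nimport bpy\n\ndef register():\n    # Called once when Blender starts\n    print('Running Blender', bpy.app.version_string)\n\ndef unregister():\n    pass\n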

"},{"location":"advanced/python_scripting/2_accessing_data/","title":"Accessing Blender data","text":""},{"location":"advanced/python_scripting/2_accessing_data/#using-bpydata","title":"Using bpy.data","text":"

All data in a Blender file can be accessed through bpy.data. This contains, for example, all objects (bpy.data.objects), all meshes (bpy.data.meshes), all scenes (bpy.data.scenes) and all materials (bpy.data.materials).

The data is stored in a data-type called bpy_prop_collection (displayed as bpy_collection in the console) whose members (data-blocks) can be accessed both by integer index and by string key (in contrast to regular Python dictionaries). For example, bpy.data.objects[\"Camera\"] and bpy.data.objects[0] will be equivalent if Camera is the first object in the collection:

>>> bpy.data.objects\n<bpy_collection[2], BlendDataObjects>\n\n>>> len(bpy.data.objects)\n2\n\n>>> bpy.data.objects[0]\nbpy.data.objects['Camera']\n\n>>> bpy.data.objects['Camera']\nbpy.data.objects['Camera']\n

Attributes of data blocks (e.g. an object, collection or material) can be accessed as regular Python attributes, for example:

>>> bpy.data.objects[0].name\n'Camera'\n

Here are two examples of changing those attributes (note that some operations only work if Blender is in the right mode):

bpy.data.objects[\"Cube\"].location.z += 1              # this works in both edit and object mode\nbpy.data.objects[\"Cube\"].data.vertices[0].co.z += 10  # this works only in object mode\n

Tips

  • Use the Python Console in Blender and the auto-complete functionality (TAB) to see what attributes bpy.data has.
  • The Info Editor in Blender shows the Python commands being executed when you perform operations manually in Blender (see Fig. 2).
  • Hovering over buttons and input boxes in Blender shows how to access the underlying values through the Python API.

Figure 2: The Info Editor is a nice way to see which Python commands are executed when you use Blender. In this figure we see that we deleted the initial cube, made a UV Sphere and translated it.

"},{"location":"advanced/python_scripting/2_accessing_data/#some-notes-on-bpycontext-and-bpyops","title":"Some notes on bpy.context and bpy.ops","text":"

In this section we want to briefly introduce how you can access something called the context, and use operators in the Blender Python API. bpy.context stores information about a user's selections and the context Blender is in. For example, if you want to check which mode is currently active in Blender you can check the value of bpy.context.mode.

Now if you want to change the mode, you can use an operator. Operators are tools that are usually accessed through the user interface with buttons and menus. You can access these operators with Python through bpy.ops. If we would like to change the mode we can do this using an operator, e.g. bpy.ops.object.mode_set(mode='OBJECT').

Of course the possibility of switching to, say, edit mode depends on which objects are selected, which can be checked with bpy.context.selected_objects. Keep in mind that many of the variables in the context are read-only; for example, altering bpy.context.selected_objects directly is not possible. Instead, you can select an object with the select_set() method of the object, e.g. bpy.data.objects['Cube'].select_set(True).
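
Putting these pieces together, here is a small sketch that selects the default Cube, makes it the active object and switches to edit mode:

import bpy\n\n# Select the Cube and make it the active object\nobj = bpy.data.objects['Cube']\nobj.select_set(True)\nbpy.context.view_layer.objects.active = obj\n\nprint(bpy.context.selected_objects)\n\n# Switch the mode through an operator\nbpy.ops.object.mode_set(mode='EDIT')\nprint(bpy.context.mode)\n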

"},{"location":"advanced/python_scripting/2_accessing_data/#running-a-script-and-rendering-from-the-console","title":"\ud83d\udcbb Running a script and rendering from the console","text":"
  1. Write an external script that removes the Cube object that is part of the default scene 1
  2. Then, from the command line and without opening the Blender GUI execute this script and render the first frame. Let it output a PNG image file in the directory of the blender file.
  3. Was the cube indeed removed from the rendered image?
  4. Extra question: is the cube removed from the blender file?
  1. Although you might have altered your startup scene to not have the cube\u00a0\u21a9

"},{"location":"advanced/python_scripting/3_geometry_colors_and_materials/","title":"Geometry, colors and materials","text":""},{"location":"advanced/python_scripting/3_geometry_colors_and_materials/#creating-an-object-with-a-mesh","title":"Creating an object with a mesh","text":"

If we want to create a new mesh we can do this by calling the new function like this:

mesh = bpy.data.meshes.new(\"newMesh\")\n
This will create the mesh but it is not yet linked to an object (so it will not show in the Outliner). So we make a new object and link the mesh to it:
obj = bpy.data.objects.new(\"newObject\", mesh)\n

We can actually verify this worked correctly by checking the value of obj.data:

>>> obj.data\nbpy.data.meshes['newMesh']\n

Once the object has been linked to a collection (see below), the Outliner in the user interface will show both the object newObject and the mesh newMesh linked to it.

Now we have an empty mesh, linked to an object. We will now construct a simple piece of geometry to show how this is done in Blender. Vertices are defined by their x, y and z values like this:

verts = [ (0,0,0), (0,2,0), (0,1,2) ]\n

Each edge is defined as a tuple holding two indices pointing to vertices in the verts list. So (0,1) refers to a line from vertex (0,0,0) (index 0 in verts) to (0,2,0) (index 1 in verts) in this example. We make the following edges:

edges = [ (0,1), (1,2), (2,0) ]\n

To make faces we need three or more vertices. Each face is a tuple of three or more indices pointing to vertices in the verts list. For example the face (0,1,2) is a face made up of the vertices (0,0,0), (0,2,0) and (0,1,2), which are at index 0, 1 and 2 in the verts list. For now let's make one face:

faces = [ (0,1,2) ]\n

We now use a function from the Python API to make a mesh from our verts, edges and faces:

mesh.from_pydata(verts, edges, faces)\n

Now the mesh and the object are created, but the object does not yet show in the 3D viewport or the Outliner. This is because we still need to link the new object to an existing collection, and in doing so to a scene.

bpy.data.collections[0].objects.link(obj)\n

To summarize, here is the full code to generate this geometry:

import bpy\n\n# Create a new mesh\nob_name = \"triangle\"\nmesh = bpy.data.meshes.new(ob_name + \"_mesh\")\n\n# Create a new object with the mesh\nob = bpy.data.objects.new(ob_name, mesh)\n\n# Define some geometry\nverts = [ (0,0,0), (0,2,0), (0,1,2) ]\nedges = [ (0,1), (1,2), (2,0) ] # These are indices pointing to elements in the list verts\nfaces = [ (0,1,2) ] # These are indices pointing to elements in the list verts\n\n# Add it to the mesh\nmesh.from_pydata(verts, edges, faces)\n\n# Link the object to the first collection\nbpy.data.collections[0].objects.link(ob)\n

Tips

  • Note that in general you do not need to explicitly specify mesh edges, as these will be generated automatically based on the faces specified. It's only when you want to have edges that are not connected to faces that you need to specify them explicitly.
  • All objects in Blender (and object data of the same type, i.e. all meshes) are enforced to have unique names. When using the Python API this is no different. So if you create an object with bpy.data.objects.new(\"obj\", mesh) and there already is an object named \"obj\" the name of the new object will be automatically set to something else. This can become important if you generate many objects (say in a loop) but still want to be able to refer to them later by name.
"},{"location":"advanced/python_scripting/3_geometry_colors_and_materials/#a-filled-disk-from-scratch","title":"\ud83d\udcbb A filled disk from scratch","text":"

In the text above we created a triangle. Now, as an exercise, let's create a filled disk. First create a ring of vertices, then create the edges and a face.

"},{"location":"advanced/python_scripting/3_geometry_colors_and_materials/#adding-vertex-colors-to-a-mesh","title":"Adding vertex colors to a mesh","text":"

Not seeing vertex colors?

In the video below there's an essential step that's only shown near the end (around 7:00), which is setting a material on the geometry. If the correct material isn't set the vertex colors won't show.

Vertex coloring is a way to color a mesh without using textures or uv-mapping. It works by assigning a color to a vertex for every face that the vertex is a member of. So a vertex can have a different color for each of the faces it is part of. Let's say we have a mesh named \"triangle_mesh\": mesh = bpy.data.meshes['triangle_mesh']. The vertex colors for this mesh will be stored in mesh.vertex_colors. If the mesh does not have a vertex color layer yet, you can make a new one with mesh.vertex_colors.new(name='vert_colors'). Now we have a color layer to work with: color_layer = mesh.vertex_colors['vert_colors'].
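
Putting those steps together, a minimal sketch (assuming a mesh named triangle_mesh already exists) looks like this:

import bpy\n\nmesh = bpy.data.meshes['triangle_mesh']\n\n# Create a vertex color layer if there isn't one yet\nif 'vert_colors' not in mesh.vertex_colors:\n    mesh.vertex_colors.new(name='vert_colors')\n\ncolor_layer = mesh.vertex_colors['vert_colors']\nprint(len(color_layer.data), 'color entries')\n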

"},{"location":"advanced/python_scripting/3_geometry_colors_and_materials/#making-triangles-and-a-vertex-color-layer","title":"\ud83d\udcbb Making triangles and a vertex color layer","text":"

Let's take the triangle we made above, but let's add another triangle to it, attached to the first. The code would look like this:

import bpy\n\n# Create a new mesh\nob_name = \"triangle\"\nmesh = bpy.data.meshes.new(ob_name + \"_mesh\")\n\n# Create a new object with the mesh\nob = bpy.data.objects.new(ob_name, mesh)\n\n# Define some geometry\nverts = [ \n        (0,0,0), (0,2,0), (0,1,2) ,\n        (0,3,2)\n        ]\nedges = [ \n        (0,1), (1,2), (2,0),  \n        (1,3), (3, 2)\n        ] # These are indices pointing to elements in the list verts\nfaces = [ (0,1,2), (1,3,2) ] # These are indices pointing to elements in the list verts\n\n# Add it to the mesh\nmesh.from_pydata(verts, edges, faces)\n\n# Link the object to the first collection\nbpy.data.collections[0].objects.link(ob)\n

Now make a vertex color layer for your triangles. Then inspect how many entries there are in color_layer.data (where color_layer = mesh.vertex_colors['vert_colors']). Is this number the same as the total number of vertices in the mesh? Why, or why not?

In the exercise above we saw that color_layer.data contains six entries while we only have four vertices in the mesh. This is because a vertex has a color entry for every face it is part of. Vertices (0,2,0) and (0,1,2) are each in two faces, while the other two vertices are only in one face. So the former vertices have two entries each in the color layer, one for each face they are in, while the latter have only one entry each.

The link between vertex indices in a mesh and entries in the vertex color layer can be deduced from the polygons in mesh.polygons. Let's take one polygon from the triangles, say the first (poly = mesh.polygons[0]). For this polygon, poly.vertices gives you the indices of its vertices in the mesh and poly.loop_indices gives you the corresponding indices into color_layer.data. See Fig. 3.

Figure 3: Sketch of the two triangles from Exercise 4. For each vertex we show its coordinates (black italic (x, x, x)), its index in the mesh (green, outside the face) and its index in the loop_indices of the polygon (red italic, inside the faces).
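
Here is a short sketch of that mapping, assuming the mesh and color layer names used above:

import bpy\n\nmesh = bpy.data.meshes['triangle_mesh']\ncolor_layer = mesh.vertex_colors['vert_colors']\n\n# For each polygon, show which mesh vertex maps to which color entry\nfor poly in mesh.polygons:\n    for vert_index, loop_index in zip(poly.vertices, poly.loop_indices):\n        print(f'vertex {vert_index} -> color_layer.data[{loop_index}]')\n        # Setting a color (RGBA) for this vertex, for this face only:\n        # color_layer.data[loop_index].color = (1, 1, 1, 1)\n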

Once you have set colors for your vertices you need to set up the shader of the object. For this go to the Shading workspace. Create a Vertex Color node and connect it to a Principled BSDF (connect the Color output to the Base Color input). Then create a Material Output node and connect the Principled BSDF to the Surface input of the Material Output. See Fig. 4.

Figure 4: Shader setup for vertex colors

"},{"location":"advanced/python_scripting/3_geometry_colors_and_materials/#coloring-your-triangles","title":"\ud83d\udcbb Coloring your triangles","text":"

Let's take the two connected triangles of exercise 4. We will color them in two different ways, using vertex coloring and Python scripting:

  • Make the first triangle (face (0,1,2)) green and the second (face (1,3,2)) red.
  • Now color vertex (0,0,0) and (0,3,2) red and (0,2,0) and (0,1,2) green.
"},{"location":"advanced/python_scripting/3_geometry_colors_and_materials/#adding-a-material","title":"Adding a material","text":"

You can also add materials through the Python API. As an example of how you could do this, let's add a material to the triangles from exercise 4 in the last section. Materials are stored in bpy.data.materials and we can make a new one:

# Make material\ntriangle_material_name = \"triangle_mat\"\nmat = bpy.data.materials.new(triangle_material_name)\n
The nodes and the node tree are stored in the material (node-based materials will be further described in another chapter).

mat.use_nodes = True\nnodes = mat.node_tree.nodes\n

Before we start making nodes we remove the automatically generated nodes.

nodes.clear()\n
We will make two nodes: a Principled BSDF shader node and an output node. We can create the shader node like this:

shader = nodes.new(type='ShaderNodeBsdfPrincipled')\n
You can look up the type name of a node in Blender in the following way. Go to the Shading workspace and open the Add menu in the Shader Editor. Now go to Shader and hover over Principled BSDF until an information pop-up appears. In the pop-up you can find the node's type name. See Fig. 5.

Figure 5: The type name of a node can be found by navigating to the Add menu and hovering over the node of your interest

If you also want to organize the nodes in the Shader Editor you can place the node like this:

shader.location = 0, 300 # Location in the node window\n
We can set the inputs of the Principled BSDF shader through their default_value:

shader.inputs['Base Color'].default_value = (1,0,0,1)\n
We can now also make an output node and place it in the Shader Editor.

node_output = nodes.new(type='ShaderNodeOutputMaterial')\nnode_output.location = 400, 300\n
Links between nodes can be made using the links collection in the node_tree. A new link connects an output of one node to an input of another:

links = mat.node_tree.links\nlinks.new(shader.outputs[0], node_output.inputs[0])\n
Now we only need to add the material to the mesh containing the triangles.

mesh.materials.append( mat )\n

In summary, the total code for making the material is:

# Make material\ntriangle_material_name = \"triangle_mat\"\nmat = bpy.data.materials.new(triangle_material_name)\n\nmat.use_nodes = True\nnodes = mat.node_tree.nodes\n\n# Clear default nodes\nnodes.clear()\n\nshader = nodes.new(type='ShaderNodeBsdfPrincipled')\nshader.location = 0, 300 # Location in the node window\nshader.inputs['Base Color'].default_value = (1,0,0,1)\n\n# Create an output for the shader\nnode_output = nodes.new(type='ShaderNodeOutputMaterial')\nnode_output.location = 400, 300\n\nlinks = mat.node_tree.links\nlinks.new(shader.outputs['BSDF'], node_output.inputs['Surface'])\n\nmesh.materials.append( mat )\n
"},{"location":"advanced/python_scripting/4_volumetric_data/","title":"Visualizing volumetric data through OpenVDB","text":"

In this section we will show a simple example of how to visualize custom volumetric data with Blender and Python. The current support in Blender for volumetric data is directly tied to the OpenVDB file format; in fact, the only way to create a volume object is to load an OpenVDB file. This file format and data structure originated in the motion-picture industry, where it is often used for clouds, smoke and fire in movies and games. Here's an example of such a volumetric rendering:

Gasoline explosion. Free example from Embergen.

The reason OpenVDB is used for many volumetric data applications in computer graphics is that it allows sparse volumes to be stored efficiently, while also providing easy querying of the data, for example during rendering. OpenVDB is also a bit more than just a file format, as the OpenVDB library supports more advanced operations. From the OpenVDB website:

OpenVDB is an Academy Award-winning C++ library comprising a hierarchical data structure and a suite of tools for the efficient manipulation of sparse, time-varying, volumetric data discretized on three-dimensional grids. It is based on VDB, which was developed by Ken Museth at DreamWorks Animation, and it offers an effectively infinite 3D index space, compact storage, fast data access, and a collection of algorithms specifically optimized for the data structure for common tasks such as filtering, CSG, compositing, numerical simulation, sampling, and voxelization from other geometric representations.

For more documentation on OpenVDB see here. Some example OpenVDB files can be found here, under Sample Models.

"},{"location":"advanced/python_scripting/4_volumetric_data/#example","title":"Example","text":"

OpenVDB models are mostly generated with specialized software like Houdini and Embergen. Volumetric data in general is also used for scientific visualization, for example in ParaView, but support for OpenVDB there is still somewhat lacking. In this section we will explain how OpenVDB files can be made from scratch, for example for when you have your own volumetric data in your own data format and you want to visualize or animate it in Blender. To convert your data to the OpenVDB format we will use the Python package pyopenvdb.

First we will create data in Python and write it to an OpenVDB file using the Python package pyopenvdb.

"},{"location":"advanced/python_scripting/4_volumetric_data/#installation-of-pyopenvdb","title":"Installation of pyopenvdb","text":"

Installing the Python module to access the OpenVDB functionality can be very easy or more difficult depending on your operating system. See the installation instructions on the pyopenvdb website.

Tip

If you cannot get it to work that way, we made a simple Docker container you can use to run it; see here for the GitHub repository.

"},{"location":"advanced/python_scripting/4_volumetric_data/#making-a-vdb-file-with-pyopenvdb","title":"Making a VDB file with pyopenvdb","text":"

Let us make a simple volumetric cube using pyopenvdb. To start we first load pyopenvdb and numpy:

import numpy as np\nimport pyopenvdb as vdb\n

Then we make a zero-filled array of size 400x400x400:

dimension = 400\narray = np.zeros((dimension, dimension, dimension))\n

We then fill a cube-sized portion of the array with the value 1:

# A NumPy slice assignment fills the region [100, 200) in each dimension;\n# this is equivalent to looping over all voxels, but much faster\narray[100:200, 100:200, 100:200] = 1.0\n

Now we come to the OpenVDB part, where we first need to make a grid. In this case we make a float grid (other grid types, such as BoolGrid and Vec3SGrid, are also available by default).

grid = vdb.FloatGrid()\n

We now copy the values in the array into the grid:

grid.copyFromArray(array)\n

The last important thing we need to do before we save it to file is to name the grid. You will use this name later when using the grid in Blender.

grid.name = \"cube\"\n

Finally, we save the grid to a file:

vdb.write('cube.vdb', grids=[grid])\n
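
In summary, the full script for making the VDB file is:

import numpy as np\nimport pyopenvdb as vdb\n\ndimension = 400\narray = np.zeros((dimension, dimension, dimension))\n\n# Fill a cube-sized portion of the array with the value 1\narray[100:200, 100:200, 100:200] = 1.0\n\n# Copy the array into a named float grid and save it to file\ngrid = vdb.FloatGrid()\ngrid.copyFromArray(array)\ngrid.name = 'cube'\n\nvdb.write('cube.vdb', grids=[grid])\n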
"},{"location":"advanced/python_scripting/4_volumetric_data/#loading-a-vdb-file-into-blender","title":"Loading a VDB file into Blender","text":"

Open a new Blender file and, if it's there, remove the starting cube. In the 3D viewport choose menu option Add > Volume > Import OpenVDB or use the shortcut Shift-A. Locate the cube.vdb file we just made through the script. You will most likely not see anything yet, so scale the cube down using the shortcut S until you can see the outline of the cube. Now if you change the Viewport Shading in the top right of the 3D viewport to Rendered (see Fig. 1, #1), you will still not see anything besides the outline, since we first need to add a shader to the model.

Figure 1: Loading and shading an OpenVDB file in Blender, with the numbered steps referenced in the text

Change to the Shading workspace (see Fig. 1, #2) and in the Shader Editor click New to make a new material (see Fig. 1, #3). You will see that Blender creates a Principled Volume and a Material Output node. To make the cube appear we need to change one thing, and for this we need to know the name of the grid in the VDB file.

From the Python script we know this is cube, but you can also figure out the grids and their names in a VDB file from within Blender. In the Properties panel go to the Object Data Properties tab (see Fig. 1, #4). Here, under Grids, you can see the names of the grids in the VDB file. Now, in the Principled Volume node, enter the name of the grid (cube) in the field next to Density Attribute (see Fig. 1, #5). This tells the node to use the values in the grid for the scattering density of the voxels.

"},{"location":"advanced/python_scripting/4_volumetric_data/#coloring-the-cube","title":"\ud83d\udcbb Coloring the cube","text":"

Now make a cube similar to the one we just made, but color it blue on one side and red on the other (see Fig. 2). First alter the Python script to include a second grid in the VDB file. In this second grid set one side of the cube to the value 1 and the other to zero. Use an Attribute node (do not forget to add the grid name to the Name: field of the Attribute node) to feed the second grid into a ColorRamp node (and choose the colors you want). Then feed the ColorRamp into the Color field of the Principled Volume. Do not forget to set the original grid in the Density Attribute.

Does it come out right? Maybe you need to play a bit with the settings, like setting the Density to 1. You might also need to play with the lighting: if you still have the original light in your scene, try increasing its Power and adjusting its location. Also compare how it looks in Cycles versus Eevee.

Figure 2: Colored cube"},{"location":"api/10000_foot_view/","title":"The 10,000 foot view","text":""},{"location":"api/10000_foot_view/#introduction","title":"Introduction","text":"

The Blender Python API mostly consists of a thin layer on top of the underlying Blender C/C++ data structures and methods. The underlying C/C++ code is used to automatically generate the Python API during the build process of the Blender executable, which means the API is always up-to-date with respect to the underlying code.

The user-facing Python API isn't the only part of Blender that uses Python. Large parts of the user interface, most import/export functionality and all add-ons are written in Python. It is therefore relatively easy to extend Blender with, say, new UI dialogs or a custom importer. This is one of the strengths of the Blender Python API.

Be careful

Since the API provides access to Blender internals at a very low level you can screw up the Blender state, causing unexpected behaviour, data corruption or even crashes. In the worst case you can end up with a file that will no longer load in Blender at all, although that's rare.

So when working with Python scripting, save your session to file often, preferably in a number of incremental versions, so you can recover or go a step back when needed.

In cases where you suspect Blender's current internal state has been corrupted you can save the current state to a temporary file, start a second instance of Blender (keeping the first Blender running!) and then open the temporary file in the second instance to help ensure you can start from a known-good state. This prevents you from saving a corrupt Blender state and overwriting your last known-good file.

Some things to be aware of:

  • Blender 3.1 embeds the Python 3.10 interpreter.
  • You can access the online API documentation from within Blender with Help > Python API Reference
  • Starting Blender from the console will allow you to see important output channels (warnings, exceptions, output of print() statements, etc).

The earlier chapter on the Python API provides a hands-on introduction, including basic information on how to execute Python scripts in Blender.

"},{"location":"api/10000_foot_view/#api-modules","title":"API modules","text":"

The Blender Python API consists of several modules, with bpy being the main one. But there are also useful routines in mathutils, bmesh and a few others.

Accessing API reference documentation

The API documentation on these modules can be easily accessed from within Blender using Help > Python API Reference.

By default none of the API modules, not even bpy, are loaded in the environment where a script file runs, so you need to import the ones you need explicitly.

The Python Console does import quite a few things by default and also sets some useful variables, like C to access bpy.context and D to access bpy.data with less typing:

PYTHON INTERACTIVE CONSOLE 3.9.4 (default, Apr 20 2021, 15:51:38)  [GCC 10.2.0]\n\nBuiltin Modules:       bpy, bpy.data, bpy.ops, bpy.props, bpy.types, bpy.context, \nbpy.utils, bgl, blf, mathutils\nConvenience Imports:   from mathutils import *; from math import *\nConvenience Variables: C = bpy.context, D = bpy.data\n\n>>> D.objects.values()\n[bpy.data.objects['Camera'], bpy.data.objects['Cube'], bpy.data.objects['Light']]\n
"},{"location":"api/10000_foot_view/#developer-settings","title":"Developer settings","text":"

When developing Python scripts in Blender it can be useful to enable a few extra settings:

  • The Python Tooltips under Interface > Display > Python Tooltips. When enabled a tooltip will show the corresponding Python command or a path to the data for a UI element.
  • The Developer Extras under Interface > Display > Developer Extras. When enabled this provides multiple things:
    • The 3D viewport overlay for a mesh in edit mode will now have an extra setting Indices to show the low-level indices of selected vertices/edges/faces. This can be very useful when debugging Python code that works on mesh geometry.
    • The right-click menu for a UI item, such as a button or menu entry, will now also contain an entry called Online Python Reference linking to the relevant Python documentation page.
    • It will enable Operator Search, which will add entries to the F3 search menu for operators. These will be listed after the regular menu entries in the search results.
    • It adds a new menu option Help > Operator Cheat Sheet that will create a new text area called OperatorList.txt, which contains all available operators (see Operators) and their default parameters. This list can give you a quick overview of the available operators, with the API documentation providing all the details.
"},{"location":"api/10000_foot_view/#info-area","title":"Info area","text":"

As mentioned in the video in the introductory chapter the Info area can be useful if you want to inspect which Python calls Blender performs for certain operations. This certainly will not provide all the details in all cases, but can give some insight. You can either switch to the default Scripting workspace (using the tabs at the top of the window) to check the output, or use the normal UI area operations to add/change an area to an Info area. The latter is shown below:

"},{"location":"api/10000_foot_view/#sources-of-examples","title":"Sources of examples","text":"

This chapter provides small snippets of code and serves mostly as a reference. Sometimes it can be useful to get more information or examples of how specific parts of the Blender Python API are used. Some good sources for other code are:

  • The add-ons included with Blender show many uses of the Python API. They can be found in the directory <blender-version>/scripts/addons in the Blender distribution directory.
  • A number of script templates are also included, in <blender-version>/scripts/templates_py, mostly examples of defining custom operators or UI elements.
"},{"location":"api/10000_foot_view/#data-blocks","title":"Data-blocks","text":"

The different types of data in Blender are stored in data-blocks. For example, there's Mesh, Object, Texture and Shader data-blocks, but there's quite a few more. One of the clever bits in the way Blender is programmed is that data-blocks written to file contain enough information about their content (i.e. metadata) to make them readable by both older and newer versions of Blender than the one they were written with. This metadata system also makes it possible to automatically provide the Python API for accessing those data-blocks without much manual work from Blender's developers.

Data-blocks are available through Python, per type, under bpy.data. For example, there's bpy.data.objects and bpy.data.meshes. The type of a data-block is the corresponding class under bpy.types:

>>> type(bpy.data.objects['Cube'])\n<class 'bpy_types.Object'>\n\n>>> bpy.types.Object\n<class 'bpy_types.Object'>\n

Each type of data-block has its own set of attributes and methods, particular to that type. Learning the Blender Python API involves getting to know the details of the data-block types you want to work with and how they interact.

Automatic data-block garbage collection

Blender keeps track of which data-blocks are no longer being referenced to decide when a data-block does not need to be saved (so-called garbage collection). Usually you don't need to explicitly interact with this system, but it is good to be aware that it is there; see this section for more details.

"},{"location":"api/10000_foot_view/#unique-data-block-names","title":"Unique data-block names","text":"

Per type of data, all data-blocks need to have a unique name. This is enforced automatically by Blender: when a data-block is created with a name that is already taken, a number is appended to make the name unique. For example:

>>> bpy.data.meshes.new('my object')\nbpy.data.meshes['my object']\n\n>>> bpy.data.meshes.new('my object')\nbpy.data.meshes['my object.001']\n\n>>> bpy.data.meshes.new('my object')\nbpy.data.meshes['my object.002']\n

This usually isn't an issue, but it is something to be aware of when referencing objects by name, as the name of a data-block you created might be different from what you expect.
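
A quick way to see this in action, and why it's safer to keep the reference returned by new() than to rely on the name you passed in:

import bpy\n\n# Create three meshes, all requesting the same name\nmeshes = [bpy.data.meshes.new('gen_mesh') for _ in range(3)]\n\nprint([m.name for m in meshes])\n# e.g. ['gen_mesh', 'gen_mesh.001', 'gen_mesh.002']\n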

"},{"location":"api/10000_foot_view/#objects-and-object-data","title":"Objects and object data","text":"

When we use the word \"Object\" in these pages we mean one of the object types that can be present in a 3D scene, e.g. camera, mesh or light. Such objects are of type bpy.types.Object and all have general properties related to their presence in the 3D scene. For example, their name, 3D transformation, visibility flags, parent, etc.

But a Light object needs to specify different properties than, say, a Camera object and these per-type properties are stored as \"object data\". The object data can be accessed through the data attribute of an Object:

# Both lights and cameras are Objects\n>>> type(bpy.data.objects['Light'])\n<class 'bpy_types.Object'>\n\n>>> type(bpy.data.objects['Camera'])\n<class 'bpy_types.Object'>\n\n# But their object data are of a different type\n>>> type(bpy.data.objects['Camera'].data)\n<class 'bpy.types.Camera'>\n\n>>> type(bpy.data.objects['Light'].data)\n<class 'bpy.types.PointLight'>\n\n# And have different attributes, relevant to that type\n>>> dir(bpy.data.objects['Camera'].data)\n[..., 'angle', ..., 'clip_start', ..., 'dof', ...]\n\n>>> dir(bpy.data.objects['Light'].data)\n[..., 'color', ..., 'distance', 'energy', ..., 'falloff_type', ...]\n
"},{"location":"api/10000_foot_view/#objects-of-a-specific-type","title":"Objects of a specific type","text":"

Sometimes you want to iterate over all objects in a scene, but only perform some operation on a specific type of object. You can use the type attribute for checking an object's type:

>>> bpy.data.objects['Camera'].type\n'CAMERA'\n\n>>> bpy.data.objects['Light'].type\n'LIGHT'\n\n>>> for obj in bpy.data.objects:\n    if obj.type == 'MESH':\n        # Do something\n
"},{"location":"api/10000_foot_view/#native-blender-data-structures","title":"Native Blender data structures","text":"

When working with the Python API you will frequently use internal Blender types that appear similar to regular Python types, like lists and dictionaries. However, the Blender types are not real native Python types and behave differently in certain respects.

For example, the different collections of scene elements (such as objects or meshes) that are available under bpy.data are of type bpy_prop_collection. This type is a combination of a Python list and a dictionary, sometimes called an ordered dictionary, as it allows indexing by both array position and key:

>>> type(bpy.data.objects)\n<class 'bpy_prop_collection'>\n\n# Some of its methods match those of native Python data types\n>>> dir(bpy.data.objects)\n['__bool__', '__contains__', '__delattr__', '__delitem__', '__doc__', '__doc__', \n'__getattribute__', '__getitem__', '__iter__', '__len__', '__module__', \n'__setattr__', '__setitem__', '__slots__', 'bl_rna', 'find', 'foreach_get', \n'foreach_set', 'get', 'items', 'keys', 'new', 'remove', 'rna_type', 'tag', \n'values']\n\n# Index by position\n>>> bpy.data.objects[0]\nbpy.data.objects['Camera']\n\n# Index by key\n>>> bpy.data.objects['Camera']\nbpy.data.objects['Camera']\n\n# (key, value) pairs\n>>> bpy.data.objects.items()\n[('Camera', bpy.data.objects['Camera']), ('Cube', bpy.data.objects['Cube']), \n('Light', bpy.data.objects['Light'])]\n

Note that the position of an item in the collection, and hence its index, can change during a Blender session.
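
Since positions can shift, prefer lookups by name. Like a regular Python dictionary, a collection also has a get() method that accepts a fallback default:

import bpy\n\n# Lookup by key, with None returned when the key doesn't exist\nobj = bpy.data.objects.get('Camera')\nif obj is None:\n    print('No object named Camera in this file')\n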

"},{"location":"api/10000_foot_view/#inspecting-values","title":"Inspecting values","text":"

One of the more annoying aspects of inspecting these kinds of values in the Blender Python Console is that the elements in a bpy_prop_collection (or other Blender types) aren't printed by default, in contrast to a regular Python dictionary. You need to, for example, cast to a list or call its values() method:

# Regular Python dict, prints both keys and values\n>>> d = dict(a=1, b=2, c=3)\n>>> d\n{'a': 1, 'b': 2, 'c': 3}\n\n# No items printed\n>>> bpy.data.objects\n<bpy_collection[3], BlendDataObjects>\n\n# values() returns a list, so gets printed in detail\n>>> type(bpy.data.objects.values())\n<class 'list'>\n\n>>> bpy.data.objects.values()           \n[bpy.data.objects['Camera'], bpy.data.objects['Cube'], bpy.data.objects['Light']]\n\n# Difference in list() result:\n>>> list(d)\n['a', 'b', 'c']\n# Returns dict *keys*\n\n>>> list(bpy.data.objects)\n[bpy.data.objects['Camera'], bpy.data.objects['Cube'], bpy.data.objects['Light']]\n# Returns collection *values*\n

The most likely reason for not printing the values inside a bpy_prop_collection is that in many cases the collection will contain large numbers of objects, so printing them all would not be very useful, or might even make the UI unresponsive for a short time.

"},{"location":"api/10000_foot_view/#data-organization","title":"Data organization","text":"

In certain cases Blender uses a more elaborate data structure where you might expect low-level values, like lists. For example, the vertices that make up a mesh are only accessible as a collection of MeshVertex objects:

>>> m\nbpy.data.meshes['Cube']\n\n>>> type(m.vertices)\n<class 'bpy_prop_collection'>\n\n>>> len(m.vertices)\n8\n\n>>> m.vertices[0]\nbpy.data.meshes['Cube'].vertices[0]\n\n>>> type(m.vertices[0])\n<class 'bpy.types.MeshVertex'>\n\n>>> dir(m.vertices[0])\n['__doc__', '__module__', '__slots__', 'bevel_weight', 'bl_rna', 'co', 'groups', \n'hide', 'index', 'normal', 'rna_type', 'select', 'undeformed_co']\n\n# Vertex coordinate (object space)\n>>> m.vertices[0].co\nVector((1.0, 1.0, 1.0))\n\n# Vertex normal\n>>> m.vertices[0].normal\nVector((0.5773491859436035, 0.5773491859436035, 0.5773491859436035))\n

The reason for this is that there are several types of data associated with a single vertex, which are all centralized in a MeshVertex object. In short, Blender uses a so-called array-of-structs design. The alternative design choice would have been to have separate arrays for vertex coordinates, vertex normals, etc. (which would be a struct-of-arrays design).
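
If you do want a struct-of-arrays style view of a single attribute, the foreach_get() method can gather it into a flat buffer. A sketch using NumPy:

import bpy\nimport numpy as np\n\nm = bpy.data.meshes['Cube']\n\n# Gather all vertex coordinates into one flat float32 array, then reshape to Nx3\ncoords = np.empty(len(m.vertices) * 3, dtype=np.float32)\nm.vertices.foreach_get('co', coords)\ncoords = coords.reshape(-1, 3)\n\nprint(coords.shape)\n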

"},{"location":"api/10000_foot_view/#vertices-and-matrices","title":"Vertices and matrices","text":"

The example above also shows that even a vertex coordinate is not accessed as a low-level Python data type, like a tuple, but by the Vector type (which is in the mathutils module). This has the advantage of providing many useful methods for operating on vector values:

>>> v = m.vertices[0].normal\n>>> v\nVector((0.5773491859436035, 0.5773491859436035, 0.5773491859436035))\n\n>>> v.length\n0.999998137353116\n\n# Return a new vector that's orthogonal \n>>> w = v.orthogonal()\n>>> w\nVector((0.5773491859436035, 0.5773491859436035, -1.154698371887207))\n\n# Dot product (should be zero as v and w are orthogonal)\n>>> v.dot(w)\n0.0\n\n# Note: v*w is element-wise product, not dot product!\n>>> v*w\nVector((0.3333320915699005, 0.3333320915699005, -0.666664183139801))\n\n# Cross product between two vectors\n>>> v.cross(w)\nVector((-0.9999963045120239, 0.9999963045120239, 0.0))\n\n# Swizzling (returning vector elements in a different order)\n>>> w\nVector((0.5773491859436035, 0.5773491859436035, -1.154698371887207))\n\n>>> w.zxy\nVector((-1.154698371887207, 0.5773491859436035, 0.5773491859436035))\n

The builtin mathutils module contains many useful data types and methods for working with 3D data, including vectors and matrices, but also different methods for working with transformations (like quaternions) and color spaces.

# Transformation matrix for an object with uniform scale 2 and \n# translation in Z of 3. These values will match with the Transform UI area\n>>> o\nbpy.data.objects['Cube']\n\n>>> o.matrix_world\nMatrix(((2.0, 0.0, 0.0, 0.0),\n        (0.0, 2.0, 0.0, 0.0),\n        (0.0, 0.0, 2.0, 3.0),\n        (0.0, 0.0, 0.0, 1.0)))\n\n# Create a rotation matrix\n>>> m = Matrix.Rotation(radians(90.0), 4, 'X')\n>>> m\nMatrix(((1.0, 0.0, 0.0, 0.0),\n        (0.0, 7.549790126404332e-08, -1.0, 0.0),\n        (0.0, 1.0, 7.549790126404332e-08, 0.0),\n        (0.0, 0.0, 0.0, 1.0)))\n\n>>> v = Vector((1,2,3))\n\n# Transform the vector using the matrix. Note the different outcomes \n# depending on the multiplication order.\n>>> m @ v\nVector((1.0, -2.999999761581421, 2.000000238418579))\n\n>>> v @ m\nVector((1.0, 3.000000238418579, -1.999999761581421))\n\n# Also, a 3-vector is assumed to have a fourth element equal to *one* when \n# multiplying with a matrix:\n>>> m = Matrix.Translation((4, 5, 6))\n>>> m\nMatrix(((1.0, 0.0, 0.0, 4.0),\n        (0.0, 1.0, 0.0, 5.0),\n        (0.0, 0.0, 1.0, 6.0),\n        (0.0, 0.0, 0.0, 1.0)))\n\n>>> m @ Vector((1, 2, 3))\nVector((5.0, 7.0, 9.0))\n\n>>> m @ Vector((1, 2, 3, 0))\nVector((1.0, 2.0, 3.0, 0.0))\n
"},{"location":"api/10000_foot_view/#api-quirks","title":"API quirks","text":"

Working with the Blender Python API has some peculiarities compared to your average Python scripting. These have to do with the way the API is structured, but also how it interacts with the Blender internals. The API manual contains a lengthy page on some gotchas, but here we list some of the common ones.

"},{"location":"api/10000_foot_view/#object-modes","title":"Object modes","text":"

An object is always in one of several modes. These modes are the same ones you work with in the UI: Object mode, Edit mode, etc. The current mode for an object can be retrieved through the mode property:

>>> o = bpy.data.objects['Cube']\n>>> o.mode\n'OBJECT'\n\n# <enter edit mode with TAB>\n\n>>> o.mode\n'EDIT'\n

Depending on the current mode of a mesh object certain data might not be up-to-date, or even unavailable, when accessing it through the Python API. This is especially true when an object is in Edit Mode.

This is because edit mode works on its own copy of the data, which is synced with the underlying mesh data when entering and leaving edit mode. See here for the relevant section in the Blender API docs.

An example continuing with the Cube mesh above:

>>> o.mode\n'OBJECT'\n\n>>> m = o.data\n>>> m\nbpy.data.meshes['Cube']\n\n# Check UV map data\n>>> len(m.uv_layers[0].data)\n24\n\n# <enter edit mode with TAB>\n\n>>> o.mode\n'EDIT'\n\n# UV map data now empty...\n>>> len(m.uv_layers[0].data)\n0\n

In most cases when working on low-level data such as mesh geometry you want the object to be in object mode (or use the bmesh module when you need the object to be in edit mode). It's usually a good idea to add a check at the top of your script to verify that the current mode is what you expect:

o = bpy.context.active_object\nif o.mode != 'OBJECT':\n    raise ValueError('Active object needs to be in object mode!')\n

There are alternatives for still allowing a mesh to be in edit-mode when accessing its data from a script, see the API docs for details.
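
For completeness, a minimal sketch of the bmesh route for reading mesh data while the object is in edit mode:

import bpy\nimport bmesh\n\no = bpy.context.active_object\n\n# Wrap the edit-mode mesh data in a BMesh (the object is assumed to be in edit mode)\nbm = bmesh.from_edit_mesh(o.data)\nprint(len(bm.verts), 'vertices,', len(bm.faces), 'faces')\n\n# After making changes, push them back so the viewport updates\nbmesh.update_edit_mesh(o.data)\n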

"},{"location":"api/10000_foot_view/#interrupting-long-running-scripts","title":"Interrupting (long-running) scripts","text":"

During script development you might get in a situation where your code is stuck in a loop, or takes much longer than you like. Interrupting a running script can usually be done by pressing Ctrl-C in the terminal console window:

>>> while True:\n...     pass\n...     \n\n# Uh oh, execution stuck in a loop and the Blender UI will now have become unresponsive\n\n# Pressing Ctrl-C in the terminal console window interrupts script execution,\n# as it raises a KeyboardInterrupt\n\nTraceback (most recent call last):\n  File \"<blender_console>\", line 2, in <module>\nKeyboardInterrupt\n
"},{"location":"api/10000_foot_view/#interaction-with-the-undo-system","title":"Interaction with the Undo system","text":"

In some cases when you undo an operation Blender might re-create certain data, instead of going back to a stored version still in memory. This might cause existing references to the original data to become invalid. This can be especially noticeable when working interactively in the Python Console.

For example, with a cube object as active object in the 3D viewport:

# The Cube is the active object\n>>> bpy.context.active_object\nbpy.data.objects['Cube']\n\n# Save a reference to it\n>>> o = bpy.context.active_object\n\n# <Grab the object in the 3D viewport and move it somewhere else>\n\n# Object reference still valid\n>>> o\nbpy.data.objects['Cube']\n\n# <Undo the object translation in the 3D viewport>\n\n# Uh oh, object reference has now become invalid\n>>> o\n<bpy_struct, Object invalid>\n\n# Reason: object referenced under name 'Cube' has changed\n>>> bpy.data.objects['Cube'] == o\nFalse\n\n>>> id(o)\n140543077302976\n\n>>> id(bpy.data.objects['Cube'])\n140543077308608\n\n# Will need to reacquire the active object, or consistently use bpy.data.objects['Cube'] \n>>> o = bpy.context.active_object\n>>> o\nbpy.data.objects['Cube']\n
"},{"location":"api/bpy_data_and_friends/","title":"A note on bpy.data, bpy.data.objects, ...","text":"

We have been using bpy.data.objects in most examples above to access objects in the scene. This is actually not entirely accurate, as bpy.data.objects holds all objects in the Blender file. Usually the distinction doesn't matter, as you only have one scene, but a Blender file can hold multiple scenes, each with their own set of objects:

# A file with two scenes, each with their own set of objects\n>>> bpy.data.scenes.values()\n[bpy.data.scenes['Scene'], bpy.data.scenes['Scene.001']]\n\n# Current scene\n>>> bpy.context.scene\nbpy.data.scenes['Scene']\n\n# And its objects\n>>> bpy.context.scene.objects.values()\n[bpy.data.objects['Bottom cube'], bpy.data.objects['Top Cube']]\n\n# <Select different scene>\n\n# Different current scene\n>>> bpy.context.scene\nbpy.data.scenes['Scene.001']\n\n# And its objects\n>>> bpy.context.scene.objects.values()\n[bpy.data.objects['Bottom cube.001'], bpy.data.objects['Top Cube.001']]\n\n# All objects in the file\n>>> bpy.data.objects.values()\n[bpy.data.objects['Bottom cube'], bpy.data.objects['Bottom cube.001'], \nbpy.data.objects['Top Cube'], bpy.data.objects['Top Cube.001']]\n

Although objects can also be shared between scenes:

# Two scenes\n>>> bpy.data.scenes.values()\n[bpy.data.scenes['Scene'], bpy.data.scenes['Scene.001']]\n\n# First scene, cubes are local to scene, torus is shared between scenes\n>>> bpy.context.scene\nbpy.data.scenes['Scene']\n\n>>> bpy.context.scene.objects.values()\n[bpy.data.objects['Torus'], bpy.data.objects['Bottom cube'], \nbpy.data.objects['Top Cube']]\n\n# Second scene, different cubes, torus is shared\n>>> bpy.context.scene\nbpy.data.scenes['Scene.001']\n\n>>> bpy.context.scene.objects.values()\n[bpy.data.objects['Bottom cube.001'], bpy.data.objects['Top Cube.001'], \nbpy.data.objects['Torus']]\n

The point here is that bpy.data.objects, and every other attribute under bpy.data, holds the values for the complete Blender file. Per-scene values are available through attributes of a Scene object, e.g. bpy.context.scene.objects. For certain use cases this distinction matters.

"},{"location":"api/custom_properties/","title":"Custom properties","text":"

Sometimes it can be useful to control certain values that you use in a script from the UI. The most flexible, but also most complex, approach would be to write an add-on. This allows full control over UI elements, but can be quite a bit of work to create.

However, in quite a few cases there's a simpler alternative, if all you need to control are simple Python values like an int, float, string or list. From Python you can set custom properties on pretty much any Blender data-block (see here for more details) and then access those values from the UI:

>>> o\nbpy.data.objects['Cube']\n\n>>> o['My prop'] = 123.4\n>>> o['My 2nd prop'] = (1, 1, 0.5)\n

This works, of course, both ways: adding or editing a value from the UI will update the value(s) available through Python. You can then use these values in a script, for example to control a number of objects to create, set a 3D coordinate, etc. See here for more details and examples.
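
For example, here is a sketch that reads such a property in a script, with a fallback default for when it hasn't been set yet (the property name matches the example above):

import bpy\n\no = bpy.data.objects['Cube']\n\n# get() also works for custom properties, with an optional default\nvalue = o.get('My prop', 0.0)\nprint('Value from UI:', value)\n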

"},{"location":"api/data_block_users_and_gc/","title":"Data-block users and garbage collection","text":"

Blender uses a system based on reference-counting to decide when data-blocks have become unused and can get purged. In the short video below we show some of the details of this scheme:

The video shows the Orphan Data outliner mode, but there are several modes that can be used to get detailed insight into the current state of Blender internals:

  • The Blender File mode gives a high-level overview of a file's contents, including some of the more implicit data block types, such as Workspaces.
  • The Data API mode provides an even more detailed view. It is actually a great way to inspect all the gory details of Blender's internal data structures. It will show all data-blocks by type and their attributes. Some attributes can even be edited in this outliner mode.
  • The Orphan Data mode shows data blocks that do not have any users and which will not be saved (unless they are marked to have a fake user). Some of the data-blocks you see here might not have been created by you, but are used by Blender internally, for example the Brushes.

Although the video only focused on materials, the way data-block lifetime is managed using user counts is general to all types of data-blocks in Blender. But there are subtle differences between a data-block really being deleted and merely having a link to it removed:

  • Whenever the term \"unlink\" is used it means that a link to that data-block is removed and its user count decreased, but the data-block itself will still be in memory. An example of this is clicking the X next to a mesh's material in the Material Properties.
  • If the UI uses the term \"delete\" it means the data-block is deleted immediately from memory. Any data-blocks linked from the deleted data-block will have their users count decreased. An example of this is deleting a Camera object in the 3D view: the Camera object's data-block is deleted from memory, but the Camera object data data-block (containing the actual camera settings) is still in memory, which you can check in the Orphan Data mode of the outliner.

The usage count of data-blocks can also be queried from Python:

# Two cube meshes using the same material\n>>> bpy.context.scene.objects.values()\n[bpy.data.objects['Cube'], bpy.data.objects['Cube.001']]\n\n>>> bpy.data.materials['Material'].users\n2\n\n# Add a new material, set one of the cubes to use it\n>>> bpy.data.materials['Material'].users\n1\n\n>>> bpy.data.materials['Material.001'].users\n1\n\n# <Delete Cube.001 object in the UI>\n\n# Hmmm, still has a user?\n>>> bpy.data.materials['Material.001'].users\n1\n\n# The reason is we deleted the Cube.001 *object*, but\n# the Cube.001 *mesh* is still alive (as its usage count\n# was merely decremented) and it still references the material\n>>> bpy.data.objects['Cube.001']\nTraceback (most recent call last):\n  File \"<blender_console>\", line 1, in <module>\nKeyError: 'bpy_prop_collection[key]: key \"Cube.001\" not found'\n\n>>> bpy.data.meshes['Cube.001']\nbpy.data.meshes['Cube.001']\n\n>>> bpy.data.meshes['Cube.001'].users\n0\n\n>>> bpy.data.meshes['Cube.001'].materials.values()\n[bpy.data.materials['Material']]\n

The use_fake_user attribute of a data block controls whether a Fake user is set, similar to the checkbox in the UI.
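
For example, a short sketch (the material name assumes the situation from the listing above):

import bpy\n\nmat = bpy.data.materials['Material.001']\n\n# Protect the material from being purged, even when it has no real users\nmat.use_fake_user = True\nprint(mat.users)  # the fake user is included in the count\n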

Warning

In most cases you probably don't want to manually delete data blocks from a file and only use the normal UI operations for that. But it is possible for cases that need it. Truly purging a data block from Python can be done with the relevant remove() method, e.g.

>>> bpy.context.scene.objects.values()\n[bpy.data.objects['Cube']]\n\n>>> o = bpy.context.active_object\n>>> o\nbpy.data.objects['Cube']\n\n>>> m = o.data\n>>> m\nbpy.data.meshes['Cube']\n\n# Remove the Mesh data-block from the file\n>>> bpy.data.meshes.remove(m)\n>>> bpy.data.meshes.values()\n[]\n\n>>> bpy.data.objects.values()\n[]\n

Note that in the case of deleting object data (in this case a Mesh) any Objects referencing that object data also get removed!

A second thing to note is the above code does not actually update the current Blender file on disk. That only happens on an explicit save action (e.g. through the File menu or using the relevant operator from Python).

"},{"location":"api/materials/","title":"Materials","text":"

As shown in one of the introductory exercises for the Python API it is possible to use Python to create a node-based shader. In most cases using the node-based editor in the UI is the preferred option due to its interactivity, but for certain cases it can be interesting to use Python.

The general workflow for this is to create the necessary shader nodes, connect them through links as needed and then set the material on the relevant mesh.

# Create a new material\nmat = bpy.data.materials.new(\"my material\")\n\n# Enable shader nodes on the material\nmat.use_nodes = True\n\n# Remove the default nodes\nnodes = mat.node_tree.nodes\nnodes.clear()\n\n# Add a Principled BSDF shader node and set its base color\nshader = nodes.new(type='ShaderNodeBsdfPrincipled')\nshader.location = 0, 300\nshader.inputs['Base Color'].default_value = (1,0,0,1)\n\n# Add a Material Output node\nnode_output = nodes.new(type='ShaderNodeOutputMaterial')\nnode_output.location = 400, 300\n\n# Add a link between the nodes\nlinks = mat.node_tree.links\nlinks.new(shader.outputs['BSDF'], node_output.inputs['Surface'])\n\n# Add material to the mesh's material slots\nmesh.materials.append(mat)\n

A node's inputs and outputs can be referenced by name. This can then be used to set values on inputs, or connect outputs to inputs, as shown. For example, for the Principled BSDF node above:

>>> shader.inputs.keys()\n['Base Color', 'Subsurface', 'Subsurface Radius', 'Subsurface Color', 'Metallic', \n'Specular', 'Specular Tint', 'Roughness', 'Anisotropic', 'Anisotropic Rotation', \n'Sheen', 'Sheen Tint', 'Clearcoat', 'Clearcoat Roughness', 'IOR', 'Transmission', \n'Transmission Roughness', 'Emission', 'Emission Strength', 'Alpha', 'Normal', \n'Clearcoat Normal', 'Tangent']\n\n>>> shader.outputs.keys()\n['BSDF']\n

The location attributes set above are not strictly needed if you're not going to work on the shader network in the Shader Editor in the UI. But they help to make the node network layout somewhat visually pleasing.

"},{"location":"api/materials/#material-slots","title":"Material slots","text":"

The last line in the Python code above adds the created material to the mesh's material slots. An object can have multiple materials assigned to it and each assigned material uses a so-called material slot. Each polygon in a mesh can only use a single material, by specifying the material index (i.e. slot) to use for that polygon. This allows different parts of a mesh to use different shaders.

By default all faces in a mesh will reference material slot 0. But here's an example of a cube mesh that uses 3 different materials:

Inspecting the underlying material data:

# Get the mesh, as the material is linked to the mesh by default\n>>> o = bpy.data.objects['Cube']\n>>> m = o.data\n\n# The material slots used\n>>> list(m.materials)\n[bpy.data.materials['red'], bpy.data.materials['black-white checkered'], \nbpy.data.materials['voronoi']]\n\n# Polygon -> slot index\n>>> m.polygons[0].material_index\n2\n>>> m.polygons[1].material_index\n0\n>>> m.polygons[2].material_index\n0\n>>> m.polygons[3].material_index\n0\n>>> m.polygons[4].material_index\n1\n>>> m.polygons[5].material_index\n0\n

Material indices can be set per polygon, or set as an array in one go:

# Material slot index for a single polygon \nm.polygons[0].material_index = 0\n\n# Set all polygon material indices\nface_materials = [0, 1, 2, 2, 1, 0]\nm.polygons.foreach_set('material_index', face_materials)\n# Force an update of the mesh, needed in this case\nm.update()\n
"},{"location":"api/meshes/","title":"Meshes","text":"

One of the more common scene data types to work with from Python are 3D meshes. Meshes in Blender can contain polygons of an arbitrary number of vertices (so-called N-gons), can contain wire edges and support extra layers of data, such as vertex colors and UV coordinates.

We go into a fair amount of detail on how to create and access mesh data, in several ways. As usual, the Blender API docs on the Mesh type contain many more details, but we feel the discussion below is a good summary to get you started for many use cases.

"},{"location":"api/meshes/#creating-a-mesh-high-level","title":"Creating a mesh (high-level)","text":"

As shown earlier the Mesh.from_pydata(vertices, edges, faces) method allows a simple and high-level way of creating a mesh. This method doesn't offer full control over the created mesh and isn't very fast for large meshes, but it can be good enough in a lot of cases.

It takes three lists of values, or actually, any Python iterable that matches the expected form:

  • vertices: a sequence of float triples, e.g. [(1.0, 2.0, 3.0), (4, 5, 6), ...]
  • edges: a sequence of integer pairs (vertex indices) that define the edges. If [] is passed, edges are inferred from the polygons
  • faces: a sequence of one or more polygons, each defined as a sequence of 3 or more vertex indices. E.g. [(0, 1, 2), (1, 2, 3, 4), ...]

Info

The choice of how the mesh data is passed might incur an overhead in memory usage and processing time, especially when regular Python data structures, like lists, are used. An alternative would be to pass NumPy arrays.
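
As a minimal sketch of this NumPy alternative (the array values here are our own, for illustration), from_pydata() accepts an (N, 3) array of vertex coordinates directly:

import bpy\nimport numpy\n\n# Four vertices as an (N, 3) float array\nverts = numpy.array([\n    [0, 0, 0],    [1, 0, 0],    [1, 1, 0],    [0, 1, 0]\n], dtype=numpy.float32)\n\n# A single quad referencing the vertices by index\nfaces = [(0, 1, 2, 3)]\n\nm = bpy.data.meshes.new(name='numpy quad')\nm.from_pydata(verts, [], faces)\n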

For the examples below we assume that no explicit list of edges is passed. Edges will then be created implicitly based on the polygons specified, which is usually what is preferred. We discuss explicitly specifying edges below.

An example of creating a simple mesh:

# Create a mesh consisting of 3 polygons using 6 vertices\n\nvertices = [\n    (0, 0, 0),      (2,  0,  0),    (2,  2,  0.2),    \n    (0,  2,  0.2),  (1, 3, 1),      (1, -1, -1),    \n]\n\npolygons = [\n    (0, 1, 2, 3),   # Quad\n    (4, 3, 2),      # Triangle\n    (0, 5, 1)       # Triangle\n]\n\nm = bpy.data.meshes.new(name='my mesh')\nm.from_pydata(vertices, [], polygons)\n

At this point we have created a new Mesh object, which corresponds to Object Data of type Mesh. Object Data cannot be directly added to a scene, but needs to be referenced by a 3D Object:

# Create an Object referencing the Mesh data\no = bpy.data.objects.new(name='my mesh', object_data=m)\n\n# Add the Object to the scene\nbpy.context.scene.collection.objects.link(o)\n

The resulting mesh and outliner entry look like this:

"},{"location":"api/meshes/#careful-invalid-data","title":"Careful: invalid data","text":"

Note that it is possible to set up a mesh with invalid/inconsistent data when setting the underlying arrays manually, as is the case here. This can cause weird behaviour or even crashes.

For example:

# 3 vertices\nvertices = [ (0, 0, 0), (1,  1, 1), (-1, 2, -1) ]\n\n# Invalid vertex index 3 used!\npolygons = [ (0, 1, 2, 3) ]   \n\nm = bpy.data.meshes.new(name='my invalid mesh')\nm.from_pydata(vertices, [], polygons)\n\no = bpy.data.objects.new(name='my invalid mesh', object_data=m)\nbpy.context.scene.collection.objects.link(o)\n

When executing the above code a new mesh is added to the scene, but it shows up as a triangle in the 3D viewport, instead of a quad. And even though that might seem like acceptable fallback behaviour, Blender will crash if we subsequently enter edit mode on the mesh!

So the lesson here is to be careful when specifying geometry using these low-level API calls. This actually applies to all parts of the Blender Python API in general.

In this case, to make sure a created mesh has valid data we can use the validate() method on a Mesh. This will check the mesh data and remove any invalid values, e.g. by deleting the polygon using non-existent vertex index 3 above. This might not result in a mesh that matches what you want based on the data, but at least you can detect this situation and handle it without Blender crashing.

The validate() method has two issues to be aware of:

  • The method returns True in case the mesh does not validate, i.e. when it has issues. More specifically, it returns True when changes were made to the mesh data to remove invalid values.
  • It will only report on the specific issues found when called with validate(verbose=True) and then will only output to the console.

But it is still a good idea to always validate a mesh when creating it manually:

...\nm = bpy.data.meshes.new(name='my invalid mesh')\nm.from_pydata(vertices, [], polygons)\n\nif m.validate(verbose=True):\n    print('Mesh had issues and has been altered! See console output for details')\n

In the example of the invalid mesh data above this results in these messages being printed in the console output:

ERROR (bke.mesh): ../source/blender/blenkernel/intern/mesh_validate.c:351 BKE_mesh_validate_arrays:     Edge 0: v2 index out of range, 3\nERROR (bke.mesh): ../source/blender/blenkernel/intern/mesh_validate.c:351 BKE_mesh_validate_arrays:     Edge 3: v2 index out of range, 3\nERROR (bke.mesh): ../source/blender/blenkernel/intern/mesh_validate.c:605 BKE_mesh_validate_arrays:     Loop 3 has invalid vert reference (3)\nERROR (bke.mesh): ../source/blender/blenkernel/intern/mesh_validate.c:782 BKE_mesh_validate_arrays:     Loop 0 is unused.\nERROR (bke.mesh): ../source/blender/blenkernel/intern/mesh_validate.c:782 BKE_mesh_validate_arrays:     Loop 1 is unused.\nERROR (bke.mesh): ../source/blender/blenkernel/intern/mesh_validate.c:782 BKE_mesh_validate_arrays:     Loop 2 is unused.\nERROR (bke.mesh): ../source/blender/blenkernel/intern/mesh_validate.c:782 BKE_mesh_validate_arrays:     Loop 3 is unused.\n

After validate() returns we can see in this case that invalid data was indeed removed:

>>> vertices = [ (0, 0, 0), (1,  1, 1), (-1, 2, -1) ]\n>>> polygons = [ (0, 1, 2, 3) ]   \n>>> m = bpy.data.meshes.new(name='my invalid mesh')\n>>> m.from_pydata(vertices, [], polygons)\n\n>>> len(m.polygons)\n1\n>>> len(m.edges)\n4\n>>> len(m.vertices)\n3\n\n>>> m.validate()\nTrue\n\n>>> len(m.polygons)\n0\n>>> len(m.edges)\n2\n>>> len(m.vertices)\n3\n

"},{"location":"api/meshes/#creating-a-mesh-low-level","title":"Creating a mesh (low-level)","text":"

A second, and more flexible, way of creating a mesh is using low-level calls for setting the necessary data arrays directly on a Mesh object. This is especially useful in combination with NumPy arrays, as this allows the creation of large meshes with relatively high performance and low memory overhead.

Meshes in Blender are stored using 4 arrays, as attributes of the bpy.types.Mesh type:

  • vertices: vertex locations, each specified by 3 floats
  • loops: contains the vertex indices used for defining polygons of a mesh, each polygon as a sequence of indices in the vertices array
  • polygons: defines the start index of each polygon as an index in loops, plus the length of each polygon in number of vertices
  • edges: defines the edges of the mesh, using two vertex indices per edge

So to create a mesh at this level we need to set up the necessary values for these arrays. Here, we create a mesh with the same polygons as in the previous section (plus two extra vertices, which we will use later when specifying edges), using NumPy arrays to store the data.

import numpy\n\n# Vertices (8): x1 y1 z1 x2 y2 z2 ...\nvertices = numpy.array([\n    0, 0, 0,    2,  0,  0,    2,  2,  0.2,    0,  2,  0.2,\n    1, 3, 1,    1, -1, -1,    0, -2, -1,      2, -2, -1\n], dtype=numpy.float32)\n\n#\n# Polygons, defined in loops\n#\n\n# List of vertex indices of all loops combined\nvertex_index = numpy.array([\n    0, 1, 2, 3,                             # Quad\n    4, 3, 2,                                # Triangle\n    0, 5, 1                                 # Triangle\n], dtype=numpy.int32)\n\n# For each polygon the start of its indices in vertex_index\nloop_start = numpy.array([\n    0, 4, 7\n], dtype=numpy.int32)\n\n# Length of each polygon in number of vertices\nloop_total = numpy.array([\n    4, 3, 3\n], dtype=numpy.int32)\n

We additionally specify texture coordinates and vertex colors, something that is not possible with the high-level from_pydata() API shown above. Note that we need to specify these values per vertex per polygon loop.

# Texture coordinates per vertex per polygon loop\nuv_coordinates = numpy.array([\n    0,   0,    1, 0,      1, 1,    0, 1,    # Quad   \n    0.5, 1,    0, 0,      1, 0,             # Triangle\n    0,   1,    0.5, 0,    1, 1              # Triangle\n], dtype=numpy.float32)\n\n# Vertex color (RGBA) per vertex per polygon loop\nvertex_colors = numpy.array([\n    1, 0, 0, 1,   1, 0, 0, 1,   1, 0, 0, 1,   1, 0, 0, 1,\n    0, 1, 0, 1,   0, 1, 0, 1,   0, 1, 0, 1,\n    1, 0, 0, 1,   0, 1, 0, 1,   0, 0, 1, 1,\n], dtype=numpy.float32)\n

Next, we create a new mesh using the above arrays:

num_vertices = vertices.shape[0] // 3\nnum_vertex_indices = vertex_index.shape[0]\nnum_loops = loop_start.shape[0]\n\nm = bpy.data.meshes.new(name='my detailed mesh')\n\n# Vertices\nm.vertices.add(num_vertices)\nm.vertices.foreach_set('co', vertices)\n\n# Polygons\nm.loops.add(num_vertex_indices)\nm.loops.foreach_set('vertex_index', vertex_index)\n\nm.polygons.add(num_loops)\nm.polygons.foreach_set('loop_start', loop_start)\nm.polygons.foreach_set('loop_total', loop_total)\n\n# Create UV coordinate layer and set values\nuv_layer = m.uv_layers.new(name='default')\nuv_layer.data.foreach_set('uv', uv_coordinates)\n\n# Create vertex color layer and set values\nvcol_layer = m.color_attributes.new(name='vcol', type='FLOAT', domain='CORNER')\nvcol_layer.data.foreach_set('color', vertex_colors)\n\n# Done, update mesh object\nm.update()\n\n# Validate mesh\nif m.validate(verbose=True):\n    print('Mesh data did not validate!')\n\n# Create an object referencing the mesh data\no = bpy.data.objects.new(name='my detailed mesh', object_data=m)\n\n# Add the object to the scene\nbpy.context.scene.collection.objects.link(o)    \n

Info

Passing a multi-dimensional NumPy array directly to foreach_set() will not work:

>>> vertices = numpy.array([\n...     (0, 0, 0),    (2,  0,  0),    (2,  2,  0.2),    (0,  2,  0.2),\n...     (1, 3, 1),    (1, -1, -1),    (0, -2, -1),      (2, -2, -1)\n... ], 'float32')\n>>> vertices.shape\n(8, 3)\n\n>>> m = bpy.data.meshes.new(name='my detailed mesh')\n>>> m.vertices.add(8)\n>>> m.vertices.foreach_set('co', vertices)\nTraceback (most recent call last):\n  File \"<blender_console>\", line 1, in <module>\nRuntimeError: internal error setting the array\n

However, passing a flattened array does work:

>>> m.vertices.foreach_set('co', vertices.flatten())\n>>> [v.co for v in m.vertices]\n[Vector((0.0, 0.0, 0.0)), Vector((2.0, 0.0, 0.0)), Vector((2.0, 2.0, 0.20000000298023224)), Vector((0.0, 2.0, 0.20000000298023224)), Vector((1.0, 3.0, 1.0)), Vector((1.0, -1.0, -1.0)), Vector((0.0, -2.0, -1.0)), Vector((2.0, -2.0, -1.0))]\n
"},{"location":"api/meshes/#specifying-edges-when-creating-a-mesh","title":"Specifying edges when creating a mesh","text":"

In most cases we want to create a mesh consisting of only polygons, in which case we don't need to specify edges. But for certain meshes it can be useful to specify edges explicitly, or even to create a mesh that consists only of vertices and edges between them. Edges can be used to add line segments that are not part of any polygon.

We build upon the example mesh we created above by adding a set of 3 edges:

# Create a mesh consisting of 3 polygons using 8 vertices, with 3 extra edges\n# that are not part of the polygons\n\nvertices = [\n    (0, 0, 0),    (2,  0,  0),    (2,  2,  0.2),    (0,  2,  0.2),\n    (1, 3, 1),    (1, -1, -1),    (0, -2, -1),      (2, -2, -1)\n]\n\nedges = [\n    (5, 6), (6, 7), (5, 7)\n]\n\npolygons = [\n    (0, 1, 2, 3),   # Quad\n    (4, 3, 2),      # Triangle\n    (0, 5, 1)       # Triangle\n]\n\nm = bpy.data.meshes.new(name='my mesh with edges')\nm.from_pydata(vertices, edges, polygons)\n\no = bpy.data.objects.new(name='my mesh with edges', object_data=m)\nbpy.context.scene.collection.objects.link(o)\n

The resulting mesh and outliner entry look like this:

Note that even though we specified only 3 edges explicitly the polygons in the mesh implicitly define 8 more. These are the edges making up those polygons, with shared edges being present only once. In total this results in 11 edges in the mesh:

>>> len(m.edges)\n11\n

For the second, low-level, method of mesh creation edges are handled slightly differently. Edges can be set explicitly using Mesh.edges:

# Vertices (8): x1 y1 z1 x2 y2 z2 ...\nvertices = numpy.array([\n    0, 0, 0,    2,  0,  0,    2,  2,  0.2,    0,  2,  0.2,\n    1, 3, 1,    1, -1, -1,    0, -2, -1,      2, -2, -1\n], dtype=numpy.float32)\n\n# Extra edges (3) not defined implicitly by polygons\nedges = numpy.array([\n    5, 6,    6, 7,    5, 7\n], dtype=numpy.int32)\n\n#\n# Polygons, defined in loops\n#\n\n# List of vertex indices of all loops combined\nvertex_index = numpy.array([\n    0, 1, 2, 3,                             # Quad\n    4, 3, 2,                                # Triangle\n    0, 5, 1                                 # Triangle\n], dtype=numpy.int32)\n\n# For each polygon the start of its indices in vertex_index\nloop_start = numpy.array([\n    0, 4, 7\n], dtype=numpy.int32)\n\n# Length of each polygon in number of vertices\nloop_total = numpy.array([\n    4, 3, 3\n], dtype=numpy.int32)\n\nnum_vertices = vertices.shape[0] // 3\nnum_edges = edges.shape[0] // 2\nnum_vertex_indices = vertex_index.shape[0]\nnum_loops = loop_start.shape[0]\n\nm = bpy.data.meshes.new(name='detailed mesh with edges')\n\n# Vertices\nm.vertices.add(num_vertices)\nm.vertices.foreach_set('co', vertices)\n\n# Edges\nm.edges.add(num_edges)\nm.edges.foreach_set('vertices', edges)\n\n# Polygons\nm.loops.add(num_vertex_indices)\nm.loops.foreach_set('vertex_index', vertex_index)\n\nm.polygons.add(num_loops)\nm.polygons.foreach_set('loop_start', loop_start)\nm.polygons.foreach_set('loop_total', loop_total)\n\n# Done, update mesh object\nm.update()\n\n# Validate mesh\nif m.validate(verbose=True):\n    print('Mesh data did not validate!')\n

Here, we only specify the extra edges and not the polygon edges. But when we try to validate the mesh errors will be reported:

ERROR (bke.mesh): ../source/blender/blenkernel/intern/mesh_validate.c:628 BKE_mesh_validate_arrays:     Poly 0 needs missing edge (0, 1)\nERROR (bke.mesh): ../source/blender/blenkernel/intern/mesh_validate.c:628 BKE_mesh_validate_arrays:     Poly 0 needs missing edge (1, 2)\nERROR (bke.mesh): ../source/blender/blenkernel/intern/mesh_validate.c:628 BKE_mesh_validate_arrays:     Poly 0 needs missing edge (2, 3)\nERROR (bke.mesh): ../source/blender/blenkernel/intern/mesh_validate.c:628 BKE_mesh_validate_arrays:     Poly 0 needs missing edge (3, 0)\nERROR (bke.mesh): ../source/blender/blenkernel/intern/mesh_validate.c:628 BKE_mesh_validate_arrays:     Poly 1 needs missing edge (4, 3)\nERROR (bke.mesh): ../source/blender/blenkernel/intern/mesh_validate.c:628 BKE_mesh_validate_arrays:     Poly 1 needs missing edge (3, 2)\nERROR (bke.mesh): ../source/blender/blenkernel/intern/mesh_validate.c:628 BKE_mesh_validate_arrays:     Poly 1 needs missing edge (2, 4)\nERROR (bke.mesh): ../source/blender/blenkernel/intern/mesh_validate.c:628 BKE_mesh_validate_arrays:     Poly 2 needs missing edge (0, 5)\nERROR (bke.mesh): ../source/blender/blenkernel/intern/mesh_validate.c:628 BKE_mesh_validate_arrays:     Poly 2 needs missing edge (5, 1)\nERROR (bke.mesh): ../source/blender/blenkernel/intern/mesh_validate.c:628 BKE_mesh_validate_arrays:     Poly 2 needs missing edge (1, 0)\n

So the polygon edges, which we did not specify, are being reported. In this case the validate() method will correct this and add the missing edges. But having errors reported for regular polygon edges makes it harder to detect any other issues with the mesh data. So the Mesh.update() method provides the option calc_edges. By default this option is False, but when set to True all edges in the mesh will be recalculated to be consistent with the available vertex indices, polygons and extra edges set.

...\n\n# Done, update mesh object and recalculate edges\nm.update(calc_edges=True)\n

Validation now succeeds:

>>> m.validate(verbose=True)\nFalse\n
"},{"location":"api/meshes/#accessing-mesh-data-object-mode","title":"Accessing mesh data (object mode)","text":"

Inspecting or using mesh data is straightforward. Here we use one of the meshes created with the low-level methods above and retrieve some of its data. Note that Blender provides a few values derived from the original arrays, such as loop_indices and vertices per polygon, which can be useful for certain operations.

m = bpy.data.meshes['my detailed mesh']\n\nlen(m.vertices)            => 8                            \nlen(m.polygons)            => 3\n# 2 triangles + 1 quad = 2*3 + 1*4 = 10\nlen(m.loops)               => 10\n# 8 implicit edges (for 2 triangles and 1 quad), shared edges only listed once\nlen(m.edges)               => 8                \n\nm.vertices[7].co           => Vector((2.0, -2.0, -1.0))         # Coordinate\nm.vertices[7].normal       => Vector((0.6.., -0.6.., -0.3..))   # Normal\nm.vertices[7].select       => True              # Selected (edit mode)\n\nm.polygons[2].index        => 2                 # Useful in 'for p in m.polygons'\nm.polygons[2].loop_start   => 7                 # First index in loops array\nm.polygons[2].loop_total   => 3                 # Number of vertices in loop\nm.polygons[2].loop_indices => [7, 8, 9]         # Indices in m.loops\nm.loops[7].vertex_index    => 0\nm.loops[8].vertex_index    => 5\nm.loops[9].vertex_index    => 1\nm.polygons[2].vertices     => [0, 5, 1]         # Actual vertex indices\nm.polygons[2].select       => True              # Selected (edit mode)\nm.polygons[2].use_smooth   => False             # Smooth shading enabled\n\n# These are automatically computed\nm.polygons[2].area         => 1.4142135381698608\nm.polygons[2].normal       => Vector((0.0, -0.707..., 0.707...))   \nm.polygons[2].center       => Vector((1.0, -0.333..., -0.333...))  \n\nm.edges[0].vertices        => [2, 3]            # (bpy_prop_array)\n

Starting with Blender 3.1 there are new attributes vertex_normals and polygon_normals on Mesh objects, to access normals directly from the underlying arrays they're stored in:

# Access per vertex, as above\n>>> m.vertices[0].normal\nVector((-0.5773503184318542, -0.5773503184318542, -0.5773503184318542))\n\n# Access from array of vertex normals\n>>> m.vertex_normals[0].vector\nVector((-0.5773503184318542, -0.5773503184318542, -0.5773503184318542))\n\n# Access per polygon, as above\n>>> m.polygons[0].normal\nVector((-1.0, -0.0, 0.0))\n\n# Access from array of polygon normals\n>>> m.polygon_normals[0].vector\nVector((-1.0, 0.0, 0.0))\n

The array-based normal access is more efficient than accessing the normal value of each individual MeshVertex. Note that vertex_normals and polygon_normals only provide read-only access.
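
For example, here is a short sketch of our own that gathers all vertex normals into a NumPy array with a single foreach_get() call, instead of looping over the vertices in Python:

import numpy\n\n# Flat destination array: 3 floats per vertex normal\nnormals = numpy.empty(len(m.vertex_normals) * 3, dtype=numpy.float32)\nm.vertex_normals.foreach_get('vector', normals)\n\n# Reshape to one normal per row\nnormals = normals.reshape(-1, 3)\n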

"},{"location":"api/meshes/#vertex-colors","title":"Vertex colors","text":"

A mesh can have multiple sets of vertex colors. Each set has a name and for each vertex the associated color (but see below). By default meshes created in Blender do not have a vertex color layer, so it needs to be created explicitly.

>>> m\nbpy.data.meshes['Cube']\n\n>>> type(m.vertex_colors)\n<class 'bpy_prop_collection'>\n\n# Create a new vertex color layer\n>>> vcol_layer = m.vertex_colors.new(name='My vertex colors')\n>>> vcol_layer\nbpy.data.meshes['Cube'].vertex_colors[\"My vertex colors\"]\n\n>>> len(m.vertex_colors)\n1\n\n# Name shown under Object Data -> Vertex Colors \n>>> vcol_layer.name\n'My vertex colors'\n

The vertex colors themselves are accessed through the data member:

>>> type(vcol_layer.data)\n<class 'bpy_prop_collection'>\n\n>>> len(vcol_layer.data)\n24\n\n>>> type(vcol_layer.data[0].color)\n<class 'bpy_prop_array'>\n\n>>> list(vcol_layer.data[0].color)\n[1.0, 1.0, 1.0, 1.0]\n\n>>> len(m.polygons)\n6\n\n>>> len(m.vertices)\n8\n\n>>> len(m.loops)\n24\n

One thing to notice here is that the vertex color array has 24 entries. But the Cube object only has 8 vertices and 6 polygons. The reason for the higher number of vertex colors is that Blender stores separate vertex colors per polygon. So the Cube has 6 polygons, each defined using 4 vertices, hence 6*4=24 vertex colors in total (which is the same number as the length of the loops array).

This is more flexible than what most 3D file formats allow, which usually only store one color per vertex. During import Blender will duplicate those colors to set the same color for a vertex in all polygons in which it is used. An example of how to take advantage of the added flexibility is that we can set a random color per cube face by setting each of the 4 vertex colors of a face to the same color:

from random import random\n\nfor i in range(6):\n    r = random()\n    g = random()\n    b = random()\n    for j in range(4):\n        vcol_layer.data[4*i+j].color = (r, g, b, 1)\n

A slightly more Blender-like (and robust) way to write the above code would be to take advantage of the polygon loop indices:

for p in m.polygons:\n    r = random()\n    g = random()\n    b = random()    \n    for i in p.loop_indices:\n        vcol_layer.data[i].color = (r, g, b, 1)\n

Vertex color space changed in 3.2+

In Blender 3.2 the interpretation of vertex color values was changed. Previously, vertex color RGB values were assumed to be in sRGB color space, but from 3.2 onwards they are assumed to be in scene linear color space. Specifically, the vcol_attr.data[i].color attribute assumes linear values are passed, while vcol_attr.data[i].color_srgb can be used to set sRGB values (the latter will automatically convert where needed).

When passing the wrong values, i.e. sRGB instead of linear, the difference in color can be subtle, but noticeable. Below is the same set of values, but one passed as sRGB (left), the other as linear (right):

To manually convert a color value between the two color spaces use the functions from mathutils.Color, specifically from_scene_linear_to_srgb() and from_srgb_to_scene_linear().
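
A minimal sketch of such a manual conversion (the color value is arbitrary):

from mathutils import Color\n\nc_srgb = Color((0.8, 0.2, 0.2))\n\n# Convert from sRGB to scene linear, and back again\nc_linear = c_srgb.from_srgb_to_scene_linear()\nc_back = c_linear.from_scene_linear_to_srgb()\n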

"},{"location":"api/meshes/#active-set","title":"Active set","text":"

As noted above a mesh can have more than one layer of vertex colors, but only one of the layers present on a mesh can be active. The active vertex color layer controls, for example, which vertex colors are visible in the 3D viewport and which are edited in Vertex Paint mode.

When adding a vertex color layer (and similarly for the UV maps described below) through the UI the active layer is changed to the newly added layer. Clicking a layer in the Vertex Colors list in the UI also changes the active layer. Below, a mesh with 2 vertex color layers is shown, of which Col is the active one used in vertex paint mode.

The camera icon right of the vertex color names controls which layer is used during rendering by default (and which is set independently of the active status). But in most cases the shader used on an object will explicitly choose a vertex color layer using an Attribute node and so override the setting in the UI list.

Note: in our testing this render toggle did not seem to have an effect in Blender 3.6.

Controlling the active vertex color (or UV map) layer can be done using the active property:

>>> m.vertex_colors.active_index\n1\n\n>>> m.vertex_colors.active\nbpy.data.meshes['Cube'].vertex_colors[\"Another layer\"]\n\n>>> m.vertex_colors.active = m.vertex_colors[0]\n>>> m.vertex_colors.active\nbpy.data.meshes['Cube'].vertex_colors[\"Col\"]\n
"},{"location":"api/meshes/#uv-coordinates","title":"UV coordinates","text":"

UV coordinates follow the same setup as vertex colors, but store a 2-tuple of floats instead of a color, again specified per vertex per polygon loop.

Meshes created in Blender will already have a UV map called UVMap:

>>> m\nbpy.data.meshes['Cube']\n\n>>> len(m.uv_layers)\n1\n\n>>> m.uv_layers[0].name\n'UVMap'\n

The actual UV values are once again stored under the data member:

>>> uv_map = m.uv_layers[0]\n>>> uv_map\nbpy.data.meshes['Cube'].uv_layers[\"UVMap\"]\n\n>>> type(uv_map.data)\n<class 'bpy_prop_collection'>\n\n>>> len(uv_map.data)\n24\n\n>>> type(uv_map.data[0])\n<class 'bpy.types.MeshUVLoop'>\n\n>>> uv_map.data[0].uv\nVector((0.375, 0.0))\n

In general, UV maps are either set through importing or edited within Blender using the UV Editor, although there can be valid reasons for wanting to control them through the Python API.
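
As a sketch of the latter (our own illustration, not from the course material): a simple planar projection that derives each loop's UV from the X and Y coordinates of the corresponding vertex:

m = bpy.data.meshes['Cube']\nuv_map = m.uv_layers[0]\n\nfor p in m.polygons:\n    for li in p.loop_indices:\n        co = m.vertices[m.loops[li].vertex_index].co\n        # Map object-space XY in [-1, 1] to UV space [0, 1]\n        uv_map.data[li].uv = (co.x * 0.5 + 0.5, co.y * 0.5 + 0.5)\n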

"},{"location":"api/meshes/#bmesh","title":"BMesh","text":"

There is another method in Blender for creating meshes and accessing their data: the so-called BMesh, which is implemented by the bmesh module and its BMesh class. BMesh is especially interesting when you want to perform more complex geometric operations on an existing mesh, or build up a mesh polygon-by-polygon instead of providing the full mesh in one go as a set of arrays as shown above. Also, a large set of high- and low-level geometric operations on BMeshes is available, such as merging vertices within a given distance, face splitting, edge collapsing or generating a convex hull. These are provided in the bmesh.ops and bmesh.utils modules. These operations would be tedious and error prone to script manually.

In this section we only give a brief overview of BMesh and refer to the API docs for all the details.

The main differences between working with a BMesh and the native mesh data structures shown above:

  • A BMesh holds extra data on mesh connectivity, like the neighbours of a vertex, which can be easily queried for geometric editing. The trade-off is that a BMesh will use more memory to store all this extra data, but that is usually only a limiting factor for very large meshes.
  • It is somewhat slower to create a (large) mesh using a BMesh, as each mesh element (vertex, edge, polygon) takes a Python call to create, plus needs extra calls and Python values to set up.
  • A BMesh cannot be used directly in a scene, it first needs to be converted (or copied back) to a Mesh. So mesh data is present twice in memory at some point in time, in the two different forms.

Here's a (verbose) example of creating a BMesh from scratch that holds a single triangle and a wire edge:

import bpy, bmesh \n\nbm = bmesh.new()\n\n# Create 4 vertices\nv1 = bm.verts.new((0, 0, 0))\nv2 = bm.verts.new((1, 0, 1))\nv3 = bm.verts.new((0, 1, 1))\nv4 = bm.verts.new((1, 1, 1))\n\n# Add a triangle\nbm.faces.new((v1, v2, v3))\n\n# Add a line edge\nbm.edges.new((v3, v4))\n\n# Done setting up the BMesh, now copy geometry to a regular Mesh\nm = bpy.data.meshes.new('mesh')\nbm.to_mesh(m)\n\n# Release BMesh data, bm will no longer be usable\nbm.free()\n\n# Add regular Mesh as object\no = bpy.data.objects.new('mesh', m) \nbpy.context.scene.collection.objects.link(o)\n

A BMesh can also be created from an existing Mesh, edited and then copied back to the Mesh:

o = bpy.context.active_object\nm = o.data\n\n# Create a new BMesh and copy geometry from the Mesh\nbm = bmesh.new()\nbm.from_mesh(m)\n\n# Edit some geometry\nbm.verts.ensure_lookup_table()\nbm.verts[4].co.x += 3.14\n\nbm.faces.ensure_lookup_table()\nbm.faces.remove(bm.faces[0])\n\n# Copy back to Mesh\nbm.to_mesh(m)\nbm.free()\n

If a Mesh is currently in edit mode you can still create a BMesh from it, edit that and then copy the changes back, while keeping the Mesh in edit mode:

import bpy, bmesh\n\no = bpy.context.active_object\nm = o.data\n# The mode is tracked on the Object, not the Mesh\nassert o.mode == 'EDIT'\n\n# Note: from_edit_mesh() is a module-level function that returns the\n# BMesh wrapping the Mesh's edit-mode data\nbm = bmesh.from_edit_mesh(m)\n\n# <edit BMesh>\n\n# Write the changes back to the edit-mesh (again, a module-level call).\n# Do not call bm.free() here, as this BMesh is owned by the edit-mesh.\nbmesh.update_edit_mesh(m)\n

This can be useful when you're working in edit mode on a mesh and also want to run a script on it that uses BMesh, but don't want to switch in and out of edit-mode to run the script.

Warning

There are some things to watch out for when synchronizing BMesh state to a Mesh, see here.

Some examples of the geometric queries that you can do on a BMesh (see docs for more):

bm.verts[i]                 # Sequence of mesh vertices (read-only)\nbm.edges[i]                 # Sequence of mesh edges (read-only)\nbm.faces[i]                 # Sequence of mesh faces (read-only)\n\nbm.verts[i].co              # Vertex coordinate as a mathutils.Vector\nbm.verts[i].normal          # Vertex normal\nbm.verts[i].is_boundary     # True if vertex is at the mesh boundary\nbm.verts[i].is_wire         # True if vertex is not connected to any faces\nbm.verts[i].link_edges      # Sequence of edges connected to this vertex\nbm.verts[i].link_faces      # Sequence of faces connected to this vertex\nbm.verts[i].index           # Index in bm.verts\n\nbm.edges[i].calc_length()   # Length of the edge\nbm.edges[i].is_boundary     # True if edge is boundary of a face\nbm.edges[i].is_wire         # True if edge is not connected to any faces\nbm.edges[i].is_manifold     # True if edge is manifold (used in at most 2 faces)\nv = bm.edges[i].verts[0]    # Get one vertex of this edge\nbm.edges[i].other_vert(v)   # Get the other vertex\nbm.edges[i].link_faces      # Sequence of faces connected to this edge\nbm.edges[i].index           # Index in bm.edges\n\nbm.faces[i].calc_area()     # Face area\nbm.faces[i].calc_center_median()    # Median center\nbm.faces[i].edges           # Sequence of edges defining this face\nbm.faces[i].verts           # Sequence of vertices defining this face\nbm.faces[i].normal          # Face normal\nbm.faces[i].index           # Index in bm.faces\n
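
For instance, a small sketch combining a few of these queries (assuming an existing BMesh bm, as above):

# Total surface area of the mesh\ntotal_area = sum(f.calc_area() for f in bm.faces)\n\n# All wire edges, i.e. edges not connected to any faces\nwire_edges = [e for e in bm.edges if e.is_wire]\n\nprint(total_area, len(wire_edges))\n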

Indices

The use of indices above, both to index the sequences of vertices/edges/faces as well as retrieving .index values, requires up-to-date indices. During operations on a BMesh the indices (and sequences) might become incorrect and need an update first.

To ensure the .index values of vertices, edges and faces are correct call the respective index_update() method on their sequence:

bm.verts.index_update()\nbm.edges.index_update()\nbm.faces.index_update()\n

To ensure you can correctly index bm.verts, bm.edges and bm.faces call the respective ensure_lookup_table() method:

bm.verts.ensure_lookup_table()\nbm.edges.ensure_lookup_table()\nbm.faces.ensure_lookup_table()\n

A Blender mesh can contain polygons with an arbitrary number of vertices. Sometimes it can be desirable to work on triangles only. You can convert all non-triangle faces in a BMesh to triangles with a call to bmesh.ops.triangulate():

bm = bmesh.new()\n\nv1 = bm.verts.new((0, 0, 0))\nv2 = bm.verts.new((1, 0, 1))\nv3 = bm.verts.new((0, 1, 1))\nv4 = bm.verts.new((1, 1, 1))\n\n# Add a quad\nbm.faces.new((v1, v2, v3, v4))\n\n# Ensure the indices printed below are correct\nbm.verts.index_update()\n\nfor f in bm.faces:\n    print([v.index for v in f.verts])\n\n# Force triangulation. The list of faces can optionally be a subset of the faces in the mesh.\nbmesh.ops.triangulate(bm, faces=bm.faces[:])\n\nprint('After triangulation:')\nfor f in bm.faces:\n    print([v.index for v in f.verts])\n\n# Output:\n#\n# [0, 1, 2, 3]\n# After triangulation:\n# [0, 2, 3]\n# [0, 1, 2]\n
"},{"location":"api/object_transformations/","title":"Transforms and coordinates","text":""},{"location":"api/object_transformations/#object-to-world-transform","title":"Object-to-world transform","text":"

The matrix_world attribute of an Object contains the object-to-world transform that places the object in the 3D scene:

>>> o = bpy.context.active_object\n>>> o\nbpy.data.objects['Cube']\n\n>>> o.matrix_world\nMatrix(((1.3376139402389526, 0.0, 0.0, 0.3065159320831299),\n        (0.0, 1.3376139402389526, 0.0, 2.2441697120666504),\n        (0.0, 0.0, 1.3376139402389526, 1.2577730417251587),\n        (0.0, 0.0, 0.0, 1.0)))\n

Comparing this matrix with the values set in the Transform panel, you can see the Location value is stored in the right-most column of the matrix and the scaling along the diagonal. If there was a rotation set on this object some of these values would not be as recognizable anymore.

The location, rotation (in radians) and scale values can also be inspected and set separately:

>>> o.location\nVector((0.3065159320831299, 2.2441697120666504, 1.2577730417251587))\n\n>>> o.rotation_euler\nEuler((0.0, 0.0, 0.0), 'XYZ')\n\n>>> o.scale\nVector((1.3376139402389526, 1.3376139402389526, 1.3376139402389526))\n\n>>> o.location = (1, 2, 3)\n# Rotations are set in radians\n>>> o.rotation_euler.x = radians(45)\n>>> o.scale = (2, 1, 1)\n>>> o.matrix_world\nMatrix(((2.0, 0.0, 0.0, 1.0),\n        (0.0, 0.7071067690849304, -0.7071067690849304, 2.0),\n        (0.0, 0.7071067690849304, 0.7071067690849304, 3.0),\n        (0.0, 0.0, 0.0, 1.0)))\n
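
As an aside (a sketch of our own), the combined matrix can also be split back into its separate components with decompose():

# Returns a (Vector, Quaternion, Vector) triple of\n# location, rotation and scale\nloc, rot, scale = o.matrix_world.decompose()\n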

See the section on parenting for some subtle effects on transformations in cases where object parenting is used.

"},{"location":"api/object_transformations/#geometry-coordinates","title":"Geometry coordinates","text":"

Mesh geometry in Blender stores vertex coordinates (and other geometric information) in object-space coordinates. But a mesh (or object in general) will usually get transformed to a specific position, scaling and orientation in the scene. As described above the net transform from object-space to world-space coordinates, also called the object-to-world transform, is available through matrix_world. In cases where you need to have access to geometric data in world-space, say vertex coordinates, you need to apply the matrix_world transform manually.

For example, given the cube transformed as shown above, with vertex 7 selected (visible bottom-left in the image below):

>>> o\nbpy.data.objects['Cube']\n\n>>> m = o.data\n>>> o.matrix_world\nMatrix(((1.3376139402389526, 0.0, 0.0, 0.3065159320831299),\n        (0.0, 1.3376139402389526, 0.0, 2.2441697120666504),\n        (0.0, 0.0, 1.3376139402389526, 1.2577730417251587),\n        (0.0, 0.0, 0.0, 1.0)))\n\n# The object-space coordinate of this vertex\n>>> m.vertices[7].co\nVector((-1.0, -1.0, -1.0))\n\n# The world-space coordinate of this vertex, which matches\n# what the Transform UI shows. Note the Global display mode\n# selected in the UI; if we select Local it will show (-1, -1, -1).\n>>> o.matrix_world @ m.vertices[7].co\nVector((-1.0310980081558228, 0.9065557718276978, -0.07984089851379395))\n
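
Going the other way, from world space back to object space, uses the inverted matrix. A minimal sketch (the point is arbitrary):

from mathutils import Vector\n\np_world = Vector((1.0, 2.0, 3.0))\n\n# Transform a world-space point into the object's local space\np_object = o.matrix_world.inverted() @ p_world\n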
"},{"location":"api/often_used_values_and_operations/","title":"Often used values and operations","text":"

Here, we list some frequently used parts of the API, for varying types of data.

"},{"location":"api/often_used_values_and_operations/#scene","title":"Scene","text":"
  • Current scene: bpy.context.scene (read-only)
"},{"location":"api/often_used_values_and_operations/#objects","title":"Objects","text":"
  • Active object: bpy.context.active_object (read-only)
  • Selected objects: bpy.context.selected_objects (read-only)
  • Delete selected objects: bpy.ops.object.delete()
"},{"location":"api/often_used_values_and_operations/#camera","title":"Camera","text":"
  • Active camera object: Scene.camera (this is the camera object, not camera object data)
  • Type: Camera.type (\"PERSP\", \"ORTHO\", ...)
  • Focal length: Camera.lens (in mm)
  • Clipping distances: Camera.clip_start, Camera.clip_end
"},{"location":"api/often_used_values_and_operations/#rendering","title":"Rendering","text":"
  • Image resolution:
    • Width: Scene.render.resolution_x
    • Height: Scene.render.resolution_y
    • Percentage: Scene.render.resolution_percentage
  • Output file: Scene.render.filepath
  • Image output type: Scene.render.image_settings.file_format (\"PNG\", \"JPEG\", ...)
  • Number of samples per pixel (Cycles): Scene.cycles.samples
  • Render current scene: bpy.ops.render.render(). See its parameters for how to control the specific type of render (still image versus animation) and whether to save output; a short example follows below
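
As a sketch tying several of these together (the output path is an assumption):

import bpy\n\nscene = bpy.context.scene\nscene.render.resolution_x = 1920\nscene.render.resolution_y = 1080\nscene.render.image_settings.file_format = 'PNG'\nscene.render.filepath = '/tmp/render.png'\n\n# Render a still image and write it to the file path set above\nbpy.ops.render.render(write_still=True)\n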
"},{"location":"api/often_used_values_and_operations/#animation","title":"Animation","text":"
  • Current frame: Scene.frame_current
  • Frame range: Scene.frame_start, Scene.frame_end
  • Frame rate: Scene.render.fps
"},{"location":"api/often_used_values_and_operations/#file-io","title":"File I/O","text":"
  • Save the current session to a specific file: bpy.ops.wm.save_as_mainfile()
  • Open a Blend file: bpy.ops.wm.open_mainfile()
  • Import a file (call depends on file type): bpy.ops.import_scene.obj() (OBJ scene), bpy.ops.import_scene.gltf() (glTF scene), bpy.ops.import_mesh.ply() (PLY mesh), etc. See here and here for more details.
  • Exporting a file (call depends on file type) follows the same naming scheme, see here and here
"},{"location":"api/operators/","title":"Operators","text":"

A special class of important API routines are the so-called operators. These are usually higher-level operations, such as adding a new cube mesh, deleting the current set of selected objects or running a file importer. As noted above many parts of the Blender UI are set up with Python scripts and in a lot of cases the operations you perform in the UI through menu actions or shortcut keys will simply call the relevant operator from Python to do the actual work.

The Info area will show most operators as they get executed, but you can also check what API call is made for a certain UI element (this requires Python Tooltips to be enabled, see the developer settings). For example, adding a plane mesh through the Add menu will call the operator bpy.ops.mesh.primitive_plane_add(), as the tooltip shows:

You can simply call the operator directly from Python to add a plane in exactly the same way as with the menu option:

>>> bpy.data.objects.values()\n[]\n\n>>> bpy.ops.mesh.primitive_plane_add()\n{'FINISHED'}\n\n# A plane mesh is now added to the scene\n>>> bpy.data.objects.values()\n[bpy.data.objects['Plane']]\n

Many of the operators take parameters, to influence the results. For example, with bpy.ops.mesh.primitive_plane_add() you can set the initial size and location of the plane (see the API docs for all the parameters):

>>> bpy.ops.mesh.primitive_plane_add(size=3, location=(1,2,3))\n{'FINISHED'}\n

Info

Note that operator parameters can only be passed using keyword arguments.

"},{"location":"api/operators/#operator-context","title":"Operator context","text":"

This is all very nice and powerful, but operators have a few inherent properties that can make them tricky to work with.

An operator's execution crucially depends on the context in which it is called, where it gets most of the data it needs. As shown above simple parameter values can usually be passed, but values like the object(s) to operate on are retrieved implicitly. For example, to join a set of mesh objects into a single mesh you can call the operator bpy.ops.object.join(). But the current context needs to be correctly set for the operator to work:

# We have no objects selected\n>>> bpy.context.selected_objects\n[]\n\n>>> bpy.ops.object.join()\nWarning: Active object is not a selected mesh\n{'CANCELLED'}\n\n# With 3 objects selected\n>>> bpy.context.selected_objects\n[bpy.data.objects['Cube'], bpy.data.objects['Cube.001'], \nbpy.data.objects['Cube.002']]\n\n# Now it works\n>>> bpy.ops.object.join()\n{'FINISHED'}\n

As can be seen above an operator only returns a value indicating the execution status. When calling the operator in the Python Console as above some extra info is printed. But when calling operators from scripts the status return value is all you have to go on, as the extra message isn't printed when the script is executed. And in some cases the reason an operator fails can be quite unclear:

>>> bpy.context.selected_objects\n[bpy.data.objects['Cube'], bpy.data.objects['Camera']]\n\n>>> bpy.ops.mesh.intersect_boolean()\nTraceback (most recent call last):\n  File \"<blender_console>\", line 1, in <module>\n  File \"/usr/share/blender/3.6/scripts/modules/bpy/ops.py\", line 113, in __call__\n    ret = _op_call(self.idname_py(), None, kw)\n          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nRuntimeError: Operator bpy.ops.mesh.intersect_boolean.poll() failed, context is incorrect\n

This shows that the so-called poll function failed, but what does that mean? The poll function is used by operators to determine if they can execute in the current context. They do this by checking certain preconditions on things like the selected object(s), the type of data or an object mode. In this case the bpy.ops.mesh.intersect_boolean() operator can't perform a boolean intersection between multiple objects; it only operates on the faces of a single object in edit mode. But this is not something you can tell from the error message (nor does the documentation make it clear):

Actually performing a boolean operation on two objects from a Python script requires us to do what we would do in the UI: add a Boolean modifier on one of the objects and set its parameters. We could take advantage of the Python Tooltips to see which operator we need:

This would suggest that using bpy.ops.object.modifier_add(type='BOOLEAN') would be what we need, but then setting the required parameters on the modifier (i.e. the object to subtract) would become tricky.

So for a boolean operation, and setting object modifiers in general, there's an easier way:

>>> o = bpy.data.objects['Cube']\n# Add a modifier on the object and set its parameters\n>>> mod = o.modifiers.new(name='boolmod', type='BOOLEAN')\n>>> mod.object = bpy.data.objects['Cube.001']\n>>> mod.operation = 'DIFFERENCE'\n\n# At this point the modifier is all set up. We hide\n# the object we subtract to make the boolean result visible.\n>>> bpy.data.objects['Cube.001'].hide_viewport = True\n

Unfortunately, certain operations can only be performed by calling operators. So there's a good chance that you will need to use them at some point when doing Python scripting. Hopefully this section gives some clues as to how to work with them. See this section for more details on all the above subtleties and issues relating to working with operators.

The bpy.ops documentation also contains useful information on operators, including how to override an operator's implicit context with values you set yourself.
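
As one concrete pattern, here is a sketch using the temp_override() context manager (available since Blender 3.2; the object name is an assumption):

import bpy\n\ncube = bpy.data.objects['Cube']\n\n# Temporarily override the context members the operator polls for\nwith bpy.context.temp_override(active_object=cube, selected_objects=[cube]):\n    bpy.ops.object.delete()\n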

"},{"location":"api/parenting/","title":"Parenting","text":"

An object's parent can be queried or set simply through its parent attribute, which needs to reference another Object (or None).

But when parenting is involved the use of transformation matrices becomes somewhat more complex. Suppose we have two cubes above each other, the top cube transformed to Z=5 and the bottom cube to Z=2:

Using the 3D viewport we'll now parent the bottom cube to the top cube (LMB click bottom cube, Shift-LMB click top cube, Ctrl-P, select Object) and inspect the values in Python:

>>> bpy.data.objects['Bottom cube'].parent\nbpy.data.objects['Top cube']\n\n# The bottom cube is still located in the scene at Z=2, \n# even after parenting, as is expected\n>>> bpy.data.objects['Bottom cube'].matrix_world\nMatrix(((1.0, 0.0, 0.0, 0.0),\n        (0.0, 1.0, 0.0, 0.0),\n        (0.0, 0.0, 1.0, 2.0),\n        (0.0, 0.0, 0.0, 1.0)))\n

If an object has a parent its matrix_local attribute will contain the transformation relative to its parent, while matrix_world will contain the resulting net object-to-world transformation. If no parent is set then matrix_local is equal to matrix_world.

Let's check the bottom cube's local matrix value:

# Correct, it is indeed -3 in Z relative to its parent\n>>> bpy.data.objects['Bottom cube'].matrix_local\nMatrix(((1.0, 0.0, 0.0, 0.0),\n        (0.0, 1.0, 0.0, 0.0),\n        (0.0, 0.0, 1.0, -3.0),\n        (0.0, 0.0, 0.0, 1.0)))\n

As already shown above the parent attribute can be used to inspect and control the parenting relationship:

>>> bpy.data.objects['Top cube'].parent\n# None\n>>> bpy.data.objects['Bottom cube'].parent\nbpy.data.objects['Top cube']\n\n# Remove parent\n>>> bpy.data.objects['Bottom cube'].parent = None\n

At this point the two cubes are no longer parented and are at Z=2 (\"Bottom cube\") and Z=5 (\"Top cube\") in the scene. But when we restore the parenting relationship from Python something funny happens 1:

# Set parent back to what it was\n>>> bpy.data.objects['Bottom cube'].parent = bpy.data.objects['Top cube']\n

The reason for the different position of the cube called \"Bottom cube\" (which is now on top) is that when using the UI to set up a parenting relationship it does more than just setting the parent attribute of the child object. There's also something called the parent-inverse matrix. Let's inspect it and the other matrix transforms we've already seen for the current (unexpected) scene:

# Identity matrix, i.e. no transform\n>>> bpy.data.objects['Bottom cube'].matrix_parent_inverse\nMatrix(((1.0, 0.0, 0.0, 0.0),\n        (0.0, 1.0, 0.0, 0.0),\n        (0.0, 0.0, 1.0, 0.0),\n        (0.0, 0.0, 0.0, 1.0)))\n\n# Hmmm, this places the \"Bottom cube\" 2 in Z *above* its parent at Z=5...\n>>> bpy.data.objects['Bottom cube'].matrix_local\nMatrix(((1.0, 0.0, 0.0, 0.0),\n        (0.0, 1.0, 0.0, 0.0),\n        (0.0, 0.0, 1.0, 2.0),\n        (0.0, 0.0, 0.0, 1.0)))\n\n# ... so it indeed ends up at Z=7 as we saw (above \"Top cube\")\n>>> bpy.data.objects['Bottom cube'].matrix_world\nMatrix(((1.0, 0.0, 0.0, 0.0),\n        (0.0, 1.0, 0.0, 0.0),\n        (0.0, 0.0, 1.0, 7.0),\n        (0.0, 0.0, 0.0, 1.0)))\n

So what happened here? Apparently the matrix_local matrix changed from its value of Z=-3 as we saw earlier. The answer is that when you set up a parenting relationship using the UI the parent-inverse matrix is set to the inverse of the current parent transformation (as the name suggests) while matrix_local is updated to inverse(parent.matrix_world) @ to_become_child.matrix_world.

If we clear the parent value from Python and redo the parenting in the UI we can see this in the resulting transform matrices:

>>> bpy.data.objects['Bottom cube'].parent = None\n\n# <parent \"Bottom cube\" to \"Top cube\" in the UI>\n\n# Was identity, is now indeed the inverse of transforming +5 in Z\n>>> bpy.data.objects['Bottom cube'].matrix_parent_inverse\nMatrix(((1.0, -0.0, 0.0, -0.0),\n        (-0.0, 1.0, -0.0, 0.0),\n        (0.0, -0.0, 1.0, -5.0),\n        (-0.0, 0.0, -0.0, 1.0)))\n\n# Was Z=2, is now 2-5\n>>> bpy.data.objects['Bottom cube'].matrix_local\nMatrix(((1.0, 0.0, 0.0, 0.0),\n        (0.0, 1.0, 0.0, 0.0),\n        (0.0, 0.0, 1.0, -3.0),\n        (0.0, 0.0, 0.0, 1.0)))\n\n# Was Z=7\n>>> bpy.data.objects['Bottom cube'].matrix_world\nMatrix(((1.0, 0.0, 0.0, 0.0),\n        (0.0, 1.0, 0.0, 0.0),\n        (0.0, 0.0, 1.0, 2.0),\n        (0.0, 0.0, 0.0, 1.0)))\n

The reason for this behaviour is that when doing parenting in the 3D viewport you usually do not want the object that you are setting as the child to move. So the parenting matrices are adjusted accordingly when the parenting relationship is set up. But when we simply set parent from Python, the matrix_local value is used as is, causing our bottom cube to suddenly move up, as it is used as the transform relative to its parent, while it actually would need a different value to stay in place.

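A common recipe for setting a parent from Python while keeping the child in place is to mimic what the UI does and set the parent-inverse matrix yourself. A sketch, using the two cubes from this example:

child = bpy.data.objects['Bottom cube']\nparent = bpy.data.objects['Top cube']\n\nchild.parent = parent\n\n# Compensate for the parent's current transform so the child does not move\nchild.matrix_parent_inverse = parent.matrix_world.inverted()\n
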
There's actually quite a bit more going on with all the different parenting options available from the UI. See this page for more details.

"},{"location":"api/parenting/#children","title":"Children","text":"

Retrieving an object's children (i.e. the objects it is the parent of) can be done through its children property. This only returns the direct children of that object, not the children of its children, etc. Getting the set of all children of an object (direct and indirect) was made slightly easier in Blender 3.1 with the addition of the children_recursive attribute.

For example, given a Cube, Suzanne and Torus object, where Suzanne is parented to Cube, and the Torus is parented to Suzanne:

>>> list(bpy.data.objects)\n[bpy.data.objects['Cube'], bpy.data.objects['Suzanne'], bpy.data.objects['Torus']]\n\n>>> bpy.data.objects['Suzanne'].parent\nbpy.data.objects['Cube']\n\n>>> bpy.data.objects['Torus'].parent\nbpy.data.objects['Suzanne']\n\n>>> bpy.data.objects['Cube'].children\n(bpy.data.objects['Suzanne'],)\n\n>>> bpy.data.objects['Suzanne'].children\n(bpy.data.objects['Torus'],)\n\n>>> bpy.data.objects['Cube'].children_recursive\n[bpy.data.objects['Suzanne'], bpy.data.objects['Torus']]\n

These attributes are also available for collections.

  1. The same thing happens when setting the parent in the UI using Object Properties > Relations > Parent

"},{"location":"api/selections/","title":"Selections","text":"

In a lot of cases you want to operate on a set of selected objects. You can access (read only) the current selection with bpy.context.selected_objects:

>>> bpy.context.selected_objects\n[bpy.data.objects['Cube'], bpy.data.objects['Plane']]\n

Changing the current selection can be done in several ways. Selection state per object can be controlled with the select_get() and select_set() methods:

>>> bpy.context.selected_objects\n[]\n\n>>> bpy.data.objects['Camera'].select_get()\nFalse\n\n>>> bpy.data.objects['Camera'].select_set(True)\n>>> bpy.context.selected_objects\n[bpy.data.objects['Camera']]\n

The full selection set can also be changed:

# Select all visible objects\n>>> bpy.ops.object.select_all(action='SELECT')\n\n# Deselect all objects\n>>> bpy.ops.object.select_all(action='DESELECT')\n\n# Toggle the selection state for each object\n>>> bpy.ops.object.select_all(action='TOGGLE')\n

Note that the default mode for bpy.ops.object.select_all() when not specified is TOGGLE.

Also note that the selection methods above operate only on objects that are currently visible in the scene (in terms of the outliner eye icon), just like the selection hotkeys (like A) in the 3D viewport.
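
Building on these calls, a short sketch of our own that makes the selection exactly the set of mesh objects in the scene:

import bpy\n\nbpy.ops.object.select_all(action='DESELECT')\n\nfor o in bpy.context.scene.objects:\n    if o.type == 'MESH':\n        o.select_set(True)\n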

"},{"location":"basics/animation/everything/","title":"Animating everything","text":"

Here, we'll show how generic and powerful the Blender animation system is.

"},{"location":"basics/animation/example_flipbook_animation/","title":"\ud83d\udcbb Flipbook animation","text":"

Here are the steps needed to import a set of animated meshes and make them play as an animation within Blender. The approach we use here is to have a single mesh object on which we change the associated mesh data each frame. So even though all timesteps are loaded only one of them is visible at a time.

Here we take advantage of the Blender scene organization, where each object (a mesh object in this case) refers to object data (one of the meshes in the animation). We use a small Python script, called a frame handler, to respond to a change of the current frame time.
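
To give an idea of what such a handler looks like, here is a minimal sketch (the object and mesh names are assumptions for illustration, not the names used by the course scripts):

import bpy\n\nN = 100  # Number of loaded time steps\n\ndef update_mesh(scene, depsgraph=None):\n    # Swap the object data of the flipbook object to the mesh\n    # corresponding to the current frame\n    o = bpy.data.objects['Fluid sim']\n    o.data = bpy.data.meshes['step%03d' % (scene.frame_current % N)]\n\nbpy.app.handlers.frame_change_pre.append(update_mesh)\n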

Warning

The method below will import all meshes in the animation into the current scene. This uses quite a bit of memory (around 1GB in our tests).

Info

The data for this example is part of our advanced course and you can find the data on https://edu.nl/hrvbe under data/animation.

The animated_ply_imports.blend scene file contains two Python scripts, in the Text Editor called 1. import ply files and 2. register anim handler.

The dambreak.tar.gz file contains a set of animated meshes in binary PLY format and so is quite large when extracted.

  1. Extract dambreak.tar.gz in the same directory as animated_ply_imports.blend. This will create a directory dambreak which contains the PLY files.
  2. Load animated_ply_imports.blend

As noted above, this blend file not only contains a 3D scene, but also two Python scripts we use to set up the flipbook animation.

The first step is to load all the timesteps in the dataset using one of the scripts. This might take a bit of time, depending on the speed of your system. By default, only the first 100 steps are loaded. You can increase the number of files to the full 300 if you like by updating the variable N in both the import script and the animation handler script.

  1. Execute the script that imports the time step meshes from the PLY files. To do this step make sure the script called 1. import ply files is selected in the text editor panel. Then press the play button to the right of it, which will execute the script (an alternative is to press Alt-P in the editor).

  2. The cursor will change to an animated circle, indicating the import is running. In case you get the idea something is wrong check the console output in the terminal where you started Blender.

  3. After all PLY files are loaded execute the script that installs the frame change handler. This script is called 2. register anim handler. Make sure the text editor is switched to this script, then press the play button.

  4. Verify that the flipbook animation works with Space and/or moving the time slider in the Timeline with Shift-RMB. You should see the fluid simulation evolve with each frame. You can also check the object data associated with the Fluid sim object in the Outliner to see that it changes.

The playback speed will depend on your system's performance, but also on the framerate setting chosen.

  1. Change the Frame Rate value (in the Output properties tab at the right side of the screen) to different values to see how your system handles it. Is 60 fps feasible?

  2. The Fluid sim object is still transformable as any normal object. Experiment with this, to see how it influences the flipbook animation.

  3. If you like, you can add a Camera to the scene, make it follow the wave of fluid in a nice way and then render this into an animation.

"},{"location":"basics/animation/example_flipbook_animation/#mesh-sequence-cache","title":"Mesh (Sequence) Cache","text":"

Specifically for the Alembic and USD file formats Blender has support to animate a set of meshes stored in a single file. When importing such an animation cache file a Mesh Sequence Cache modifier is automatically added and the animated mesh will work as expected in the scene.

Although it is possible to convert your animated meshes to such a single-file animation cache there are a few downsides:

  • Storing a large number of mesh animation steps in a single file will potentially lead to a very large file, plus it doubles the disk storage needed if you keep the original mesh files around.
  • Both Alembic and USD are complex binary formats, and you need some form of library support in order to easily write them.

There is also the Mesh Cache modifier, which has a similar function. However, this modifier only supports the MDD and PC2 file formats.

"},{"location":"basics/animation/exercise_manual_camera_orbit/","title":"\ud83d\udcbb Orbiting an object manually","text":"

Info

The steps in this exercise were partly shown in the presentation as well, but that was mostly to illustrate keyframe animation. Here, you can redo those steps in detail and experiment with them.

To orbit an object the camera needs a circular path around the object's location.

  1. Load orbit.blend

The scene contains a single monkey (centered at the origin) and a camera. Note that the animation has a length of 100 frames, starting at frame 0.

As a first way of doing an orbit we're going to add keyframes for the camera position, as it rotates around the monkey, using the 3D cursor pivot mode.

  1. Set the Pivot Point mode to 3D cursor (bring up the Pivot Point pie menu with period ., select 3D Cursor).
  2. Make sure the 3D cursor is located in the origin by resetting its position with Shift-C. This will also change the view to fit the scene extents. In general, you can check the current position of the 3D cursor in the sidebar (N to toggle) on the View tab under 3D Cursor
  3. Select the camera. Verify that as you rotate it around the Z axis the camera indeed orbits the 3D cursor, and therefore also orbits around the monkey head.
  4. Add 4 keyframes at intervals of 25 frames and 90 degrees rotation around Z to complete a 360 degree rotation of the camera around the object over the full animation of 100 frames
  5. Play the animation with Spacebar. Is the camera orbit usable? Why not? Also check the camera view during playback.
  6. Check the graphs in the Graph Editor. See if you can improve the camera orbit, either by changing the graphs, inserting more keyframes, or both. One way to influence the shape of the curves is to edit the handles attached to each control point, or to change the keyframe interpolation for a control point with T.

Tip

If you have only a single object in front of the camera around which you want to orbit, an alternative approach is to simply rotate the object itself while keeping the camera in a fixed position. However, this might not always be feasible or preferable.

"},{"location":"basics/animation/exercise_parented_camera_orbit/","title":"\ud83d\udcbb Camera orbiting using parenting","text":"

We will try another way of doing a camera orbit. This method involves parenting the camera to an empty. Parenting is creating a hierarchical relation between two objects. An empty is a special 3D object with no geometry, but which can be placed and oriented in the scene as usual. It is shown as a 3D cross-hairs in the 3D view. It is often used when doing parenting.

  1. Load orbit.blend.
  2. If you happened to have saved the file in the previous assignment with some keyframes set on the camera you can delete these by selecting the Camera. Then go into the Timeline editor at the bottom and select all keyframes (diamond markers) with A, press X, choose Delete Keyframes.
  3. Reset the 3D cursor to the origin with Shift-C
  4. Add an Empty to the scene: Shift-A > Empty > Arrows
  5. Select only the camera, then add the Empty to the selection by clicking Shift-LMB with the cursor over the empty (or using Ctrl-LMB in the outliner). The camera should now have a dark orange selection outline, while the empty should have a light orange outline, as the latter is the active object.
  6. Press Ctrl-P and pick Object to add a parent-child relationship

A black dotted line from the camera to the empty should now be visible in the scene. This means the camera is now parented to the empty. Any transformation you apply to the empty will get applied to the camera as well.

Bad Parenting

If you made a mistake in the parenting of step 6 then you can clear an object's parent by selecting that object, pressing Alt-P and picking Clear Parent.

  1. Verify in the outliner that the Camera object is now indeed a child of the Empty (you might have to use the little white triangles to open the necessary tree entries)

  2. Make the empty the single selected object. Enter Z rotation mode by pressing R followed by Z. Note that as you move the mouse both the empty and camera are transformed. Exit the rotation mode with Esc, leaving the Z rotation of the empty set to zero.

  3. Add keyframes at the beginning and end of the animation to have the empty rotate 360 degrees around Z over the animation period

  4. Check the camera orbit, including how it looks in the camera view. Is this orbit better?

You might have noticed that, even though we now have a nice circular rotation of the camera around the object, the rotation speed actually isn't constant. If you select the empty and look at the Graph Editor you can see that the graph line representing the Z rotation value isn't straight, but looks like an S. This is due to the default interpolation mode that is used between keyframes.

  1. To make the rotation speed constant make sure the empty is selected. Then in the Graph Editor select all curve points with A and press V to set the handle type, pick Vector. The curves should now have become straight lines. Check the animation to see that the rotation speed is now constant.

  2. Depending on how exactly you set up the animation you might notice a hiccup at the moment the animation wraps around from frame 99 to frame 0. This happens if you set the same visible rotation of the empty for frame 0 and frame 99 (e.g. 0 degrees for frame 0 and 360 degrees for frame 99). You can fix this by changing the animation length to 99 frames, by setting End to 98 in the Output properties panel (the value is directly below Frame Start). Now the animation should wrap around smoothly.
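The whole parented-orbit setup can also be scripted. Below is a hedged sketch (assuming the camera object is named "Camera"); keying the 360-degree pose one frame past the end frame sidesteps the wrap-around hiccup described above:

import bpy
from math import radians

scene = bpy.context.scene
cam = bpy.data.objects["Camera"]

# Create an empty at the origin and parent the camera to it; since the empty
# has an identity transform the camera does not move when parented this way
empty = bpy.data.objects.new("OrbitEmpty", None)   # None object data = an empty
scene.collection.objects.link(empty)
cam.parent = empty

# One full turn around Z over the animation
empty.rotation_euler.z = 0.0
empty.keyframe_insert(data_path="rotation_euler", index=2, frame=scene.frame_start)
empty.rotation_euler.z = radians(360)
empty.keyframe_insert(data_path="rotation_euler", index=2, frame=scene.frame_end + 1)

# Linear interpolation gives a constant rotation speed (the Vector handles
# from the manual steps above achieve the same)
for fc in empty.animation_data.action.fcurves:
    for kp in fc.keyframe_points:
        kp.interpolation = 'LINEAR'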

"},{"location":"basics/animation/exercise_track_to/","title":"\ud83d\udcbb Track To constraint","text":"
  1. Load track_to.blend

This scene contains two moving cubes and a single camera.

We would like to keep the camera pointed at one of the cubes as it moves across the scene. We could animate the camera orientation ourselves, but there is an easier way using a constraint. A constraint operates on an object and can influence things like its position, orientation or scale based on another object's properties.

We will be using a Track To constraint here, which keeps one object pointing at another object.

  1. Select the camera
  2. Switch the Properties panel to the Object Constraints tab using the icon
  3. In the Add Object Constraint menu pick Track To under Tracking

The Track To constraint will keep the object, in this case our camera, oriented at another object all the time. The other object is called the Target object (in this case one of the cubes).

  1. In the constraint settings under Target (the top one!) pick Cube

If you had the 3D View set to view through the active camera (the view will be named Camera Perspective) one of the cubes should now be nicely centered in the view.

  1. Check that when playing the animation the cube indeed stays centered in the camera view.
  2. Orient the 3D view so you can see the camera's orientation in relation to the scene, specifically the targeted cube.

There is a blue dotted line indicating the constraint between the camera and the cube. To understand how the Track To constraint works in this case we need to understand the basic orientation of a Blender camera.

  1. Add a new Camera (Shift-A > Camera)
  2. Select it and clear its rotation with Alt-R.
  3. Zoom in on the new camera so you can see along which axis it is looking. Also note which axis is the Up direction of the camera (i.e. pointing towards the top of the view as seen by this camera).
  4. Select the original camera we wanted to animate and which has the Track To constraint.
  5. Change the 3D view so you can see the whole scene, including the selected camera. Change the Track Axis value of the Track To constraint to different values. Also experiment with different values for the Up setting. Compare these settings against what you concluded earlier about the viewing and up axes of the freshly added camera.
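For reference, the same constraint can be added from Python in a few lines (a sketch, using the object names from this scene):

import bpy

cam = bpy.data.objects["Camera"]
con = cam.constraints.new(type='TRACK_TO')
con.target = bpy.data.objects["Cube"]
con.track_axis = 'TRACK_NEGATIVE_Z'   # a Blender camera looks along its local -Z axis
con.up_axis = 'UP_Y'                  # local +Y points to the top of the camera view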
"},{"location":"basics/animation/introduction/","title":"Introduction","text":"

Animation is a very broad topic and we will only cover a very small part of what is possible in Blender. We'll begin with an introduction to animation and then focus on basic keyframe animation.

"},{"location":"basics/animation/introduction/#summary-of-basic-ui-interaction-and-shortcut-keys","title":"Summary of basic UI interaction and shortcut keys","text":""},{"location":"basics/animation/introduction/#all-3d-view-timeline-graph-editor","title":"All (3D View, Timeline, Graph Editor)","text":"
  • Shift-Left for moving time to the first frame in the animation, Shift-Right for the last frame
  • Left key for 1 frame back, Right for 1 forward
  • Up key for 1 keyframe forward, Down for 1 back
  • Spacebar for toggling animation playback
"},{"location":"basics/animation/introduction/#3d-view","title":"3D view","text":"
  • I in the 3D view for inserting/updating a keyframe for the current frame (pick the type)
  • Alt-I in the 3D view for deleting the keyframe data for the current frame
"},{"location":"basics/animation/introduction/#timeline","title":"Timeline","text":"
  • Changing current frame (either click or drag):
    • LMB on the row of frame numbers at the top
    • OR Shift-RMB within the full area
  • Change zoom with mouse Wheel, zoom extent with Home
  • LMB click or LMB + drag for selecting keyframes (the yellow diamonds)
  • The usual shortcuts for editing keyframes, e.g. A for selecting all keyframes, X for deleting all selected keyframes, G for grabbing and moving, etc
"},{"location":"basics/animation/introduction/#graph-editor","title":"Graph editor","text":"
  • Change current frame with Shift-RMB
  • Change zoom with Ctrl-MMB drag, or mouse Wheel
  • Translate with Shift-MMB (same as in 3D view)
  • Zoom graph extent with Home (same as in 3D view)
  • The usual shortcuts for editing curve control points, e.g. A for selecting all, X for deleting all selected points, G for grabbing and moving, etc

Tip

If one or more curves in the graph editor don't seem to be editable (and they show as dotted lines) then you might have accidentally disabled editing. To fix: with the mouse over the graph editor select all curves with A and press TAB to toggle editability.

"},{"location":"basics/animation/introduction/#further-reading","title":"Further reading","text":"
  • This section in the Blender manual contains many more details on keyframing, particularly with respect to the curves in the Graph Editor.
  • The proper definitions of the colors of keyframed values are described here
"},{"location":"basics/animation/tradeoffs_settings_output/","title":"Trade-offs, settings and output","text":"

Here, we look into trade-offs that you can make in terms of chosen frame rate, animation length, quality, etc.

Second, we will look in detail at the different settings available for an animation, including the type of output (images or video file) and strategies to handle long render times. We also describe how to do command-line rendering.

"},{"location":"basics/animation/tradeoffs_settings_output/#easy-command-line-rendering","title":"Easy command-line rendering","text":"

If you have set up the animation and its settings (e.g. frame rate, start/end frame, output name, etc) as you like in the Blender file then rendering from the command-line usually doesn't involve anything more than running this command:

blender -b file.blend -a

The -b option makes sure Blender renders in the background without opening a window. You only need to add extra options if you want to override values set in the Blender file.
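For example, a variant that overrides the output path, file format and frame range could look as follows (note that Blender processes its arguments in order, so these options must come before -a):

blender -b file.blend -o //frames/frame_#### -F PNG -s 1 -e 100 -a

Here -o sets the output path (// means relative to the .blend file, each # becomes a digit of the frame number), -F the file format, and -s/-e the start and end frames.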

"},{"location":"basics/blender_fundamentals/1_assignment_interaction_selections/","title":"\ud83d\udcbb Interaction, selections, outliner","text":"

Here it's time for a first exercise! Follow the steps given below, which will let you work with Blender yourself and get to know the different methods of 3D scene interaction.

Tip

Summary of 3D view navigation:

  • MMB = rotate view
  • Scrollwheel or Ctrl+MMB = zoom view
  • Shift+MMB = translate view
  • Home = zoom out to show all objects

See the cheat sheet to refresh your memory w.r.t. other view interaction and shortcut keys and mouse actions.

"},{"location":"basics/blender_fundamentals/1_assignment_interaction_selections/#viewpoints","title":"Viewpoints","text":"
  1. Load motorbike.blend

    Tip

    This file will be in the data share under data/basics/blender_basics

  2. In one of the two 3D views (your choice) manipulate the view to the following viewpoints:

    • Alongside the motorbike, amongst the streamlines, looking in the direction of travel.
    • From the rider's point of view, just in front of the helmet, looking ahead.
    • An up-close point of view clearly showing the two streamlines that cross near the rider's helmet on his/her right side, one going under the arm, the other going over it.
  3. There is a single streamline that goes between the two rods of the steering column. Does that streamline terminate on the bike or does it continue past the bike? Try to get really close with the view so you can see where the streamline goes.

"},{"location":"basics/blender_fundamentals/1_assignment_interaction_selections/#individual-selection","title":"Individual selection","text":"
  1. Select all objects using the A key. As you've seen earlier this will introduce orange outlines surrounding selected objects.
  2. Check the outliner, specifically the color of the object names, to see how the current selection is represented.
  3. In the 3D view deselect only the motorbike using Shift-LMB with the mouse cursor at the appropriate position
  4. Again check the outliner status, do you notice a difference in the name for the motorbike object?
  5. Add the motorbike back to the selection by using Shift-LMB over the bike in the 3D view.
  6. Check the orange outline color of the motorbike (or the corresponding entry in the outliner) to verify that it is now the active object. It should be the only object with a light orange color.
  7. Use Shift-LMB with the mouse over the \"floor and walls\" object. What changed in the selection? Specifically, what is now the active object?
  8. Once more use Shift-LMB on the "floor and walls" object. What changed this time in the selection status of the object?
"},{"location":"basics/blender_fundamentals/1_assignment_interaction_selections/#box-selection","title":"Box selection","text":"
  1. Clear the selection with Alt-A (or press A twice quickly).
  2. Use box select (LMB drag) to select all objects in the scene.
  3. Clear the selection with the Alt-A key.
  4. Now try to select ONLY the motorbike using box select. Check the outliner to make sure you're selecting just one object. You can also check the status line at the bottom of the Blender window, specifically the part that reads Objects: #/#, meaning selected / total.
"},{"location":"basics/blender_fundamentals/1_assignment_interaction_selections/#outliner-selection","title":"Outliner selection","text":"
  1. Make sure no objects are currently selected.
  2. Test the following actions in the outliner to get a good idea of what actions it supports and how this influences the visual state of the items in the outliner tree:

    • Left-clicking on an item (possibly holding the Shift or Ctrl key)
    • Using the keys A and Alt-A (note how these are similar in functionality to what they do in the 3D view, but in the context of the outliner items)
    • Right-clicking on an item and choosing Select or Deselect
  3. How does the blue highlight of a line in the outliner relate to the selection status of an object in the 3D view?

"},{"location":"basics/blender_fundamentals/2_assignment_transformations/","title":"\ud83d\udcbb Transformations","text":"

Tip

  • You can clear an object's translation to all zero with Alt-G
  • You can clear an object's rotation to all zero with Alt-R
  • You can reset an object's scale to one with Alt-S
  • You can undo a transformation with Ctrl-Z (or reload the file to reset completely)
  • See section Object Actions of the cheat sheet for more shortcut keys
"},{"location":"basics/blender_fundamentals/2_assignment_transformations/#basic-transformations","title":"Basic transformations","text":"
  1. Load axes.blend
  2. The Axes object in the scene is a 3D object just like any other. Note that the axes object shows the local axes of the object.

  3. Try translating, rotating and scaling the axes object with the different methods shown:

    • The transform widgets (accessible from the toolbox on the upper-left)
    • Using the G, R or S keys
    • Entering values in the properties region in the upper-right of the view, under Transform
  4. Activate one of the transform modes (e.g. G for translation) and experiment with limiting a transformation to an axis with the X, Y or Z keys.

  5. Activate one of the transform modes (e.g. G for translation) and experiment with limiting a transformation to a plane with Shift-X, Shift-Y or Shift-Z.

  6. Reload the axes.blend file to get back the original scene.

  7. Rotate the axes 30 degrees around (global) X.
  8. Now rotate the axes 45 degrees around the local Z axis.
"},{"location":"basics/blender_fundamentals/2_assignment_transformations/#pivot-point-modes","title":"Pivot point modes","text":"
  1. Load transformations.blend
  2. Select the cone, monkey, torus and sphere

  3. Set pivot mode to Median Point (using the Pivot Point pie menu, which opens with the . key, i.e. period), if it isn't already.

  4. Press S to start scaling, then move the mouse to scale the objects apart

  5. Notice that as you scale up the objects increase in size and move apart, but only the torus' center point (the orange dot) moves below the plane. Why?

  6. Cancel the scale operation with Esc or a RMB click

  7. Enable the Only Locations option in the Pivot Point pie menu. When this is enabled it will cause any transformation to be applied to the locations of the objects (shown as orange circles), instead of to the objects themselves.

  8. Repeat the scaling of the four objects. Do you notice how the objects now transform differently?

  9. Change the pivot mode to Individual Origins and disable the Only Locations option. Do the scaling again, notice the difference.

  10. Enable the Only Locations setting. When you try to rotate the objects around Z nothing happens. Why not?

  11. Change the pivot mode to Median Point, leave Only Locations enabled.

  12. Rotate the objects around the Z axis.

  13. Now disable the Only Locations option and rotate the objects once again around the Z axis. Do you notice the subtle difference in transformation?

  14. Experiment some more with different selections of objects and the different Pivot Point modes, until you feel you get the hang of it.

"},{"location":"basics/blender_fundamentals/2_assignment_transformations/#rubiks-cube","title":"Rubik's cube","text":"

Tip

  • You can add a cube object with Shift-A > Mesh > Cube
  • You can duplicate selected objects with Shift-D. This will also activate grab mode after the duplication.
  1. Start with an empty scene (File > New > General)

  2. Model a Rubik's cube: 3x3x3 Cube objects (minus the center cube) on a rectangular grid. Try to get the spacing between the Cube objects the same in all directions.

  3. Now select one face of the Rubik's cube (i.e. 3x3 cubes) and rotate it 30 degrees just like the real thing.

"},{"location":"basics/blender_fundamentals/2_assignment_transformations/#bonus-2001-a-space-odyssey","title":"Bonus: 2001 - A Space Odyssey","text":"
  1. Start with an empty scene (File > New > General)

  2. Remember the scene from 2001: A Space Odyssey, with our primate ancestors looking up at the monolith? Recreate that scene :)

  • Add 4 or more monkey heads, surrounding a thin narrow box for the monolith
  • Make the monkeys look up at the monolith
  • If you want to go crazy add bodies to the monkeys using some scaled spheres
  • Add a sun object + corresponding light somewhere in the sky.
"},{"location":"basics/blender_fundamentals/3_assignment_camera_and_views/","title":"\ud83d\udcbb Cameras and views","text":"
  1. Open cameras.blend

    This scene contains a bunny object, a sun light and two cameras: \"Close-up\" near the bunny's head and \"Overview\" further away.

  2. Select the Overview camera object, by either left-clicking on it in the 3D view or in the Outliner.

  3. Make this camera the active camera with either the outliner (click on the green camera icon right of the name), View > Cameras > Set Active Object as Camera or use Ctrl-Numpad0. Notice that the 3D view changes to the camera's viewpoint.
  4. Rotate the 3D view with MMB to exit the camera view. You are now back in the normal 3D view interaction.
  5. Select the Close-up camera
  6. Switch to camera view by bringing up the View pie menu with ` (backtick, usually below the ~), then pick View Camera.
  7. What camera view are you now seeing, Close-up or Overview?
  8. So one thing to remember is that selecting a camera does not make it the \"active camera\" (even though it can be the active object, confusingly).
  9. Change the active camera to Close-up
  10. Rotate away from the camera view to the normal 3D view
  11. For switching back to the active camera view there's two more methods apart from the pie menu, try them:
    • Using the View menu at the top of the 3D view area: View > Cameras > Active Camera
    • Press Numpad0
  12. Experiment with the different camera controls until you find the ones you're comfortable with
  13. Rotate away from the camera view to a 3D view that shows both cameras.
  14. In the Scene properties tab on the right-hand side of the window (and not the similar icon in the top bar left of Scene) there's a drop-down menu Camera which lists the active camera. Change the active camera using that selection box. Apart from the name listed under the Scene properties do you notice how you can identify the active camera in the 3D view? Hint: it's subtle and unrelated to the yellow/orange color used for highlighting selected objects.
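A side note for later scripting: the active camera is a property of the scene, not of the selection, which is exactly why selecting a camera object does not make it active. In Python this is a one-liner (using a camera name from this exercise):

import bpy

bpy.context.scene.camera = bpy.data.objects["Close-up"]  # make it the active camera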
"},{"location":"basics/blender_fundamentals/3_assignment_camera_and_views/#camera-transformation","title":"Camera transformation","text":"
  1. Make sure the Overview camera object is the only selected object
  2. Make the Overview camera the active camera and then switch to its view
  3. In the camera view use regular object transformations to point the camera at the rabbit's tail. To refresh, in camera view with only the camera selected:
    • Press G to translate, then move the mouse to change the view
    • While still in move mode press MMB (or Z twice) to enter \"truck\" mode: this moves the camera forward/backward along the view axis. Pressing X twice will allow moving the camera sideways.
    • Press R to rotate around the view axis
    • In rotate mode press MMB to \"look around\"
    • LMB to confirm, Esc to cancel
  4. Another useful feature is when you like the current viewpoint in the 3D view and want to match the active camera to this viewpoint. For this you can use Ctrl-Alt-Numpad0 (or with View > Align View > Align Active Camera To View in the header of the 3D view)
"},{"location":"basics/blender_fundamentals/3_assignment_camera_and_views/#quad-view","title":"Quad view","text":"
  1. Switch the 3D View to the so-called Quad View with Ctrl-Alt-Q. You now have orthogonal 2D views along the three axes (Top, Front and Right Orthographic), plus a 3D view (Camera Perspective). Note: the three axis views can only be translated and zoomed, not rotated
  2. Change the upper-right quad to a camera view, if it isn't already
  3. Press N to show the sidebar on the right
  4. On the View tab, under View Lock there's a Lock option called Camera to View. Enable that option. You should now see a dotted red outline around the orange camera rectangle in the Camera Perspective view.
  5. Hide the sidebar again (N), leaving the Lock option enabled
  6. Change the view in the Camera Perspective view using the regular 3D view mouse interaction (MMB to rotate, Shift-MMB to translate, Ctrl-MMB to move forward/backward). Observe what happens to the active camera in the other quadrants when you alter the view.
  7. Use the sidebar again to disable the Lock Camera to View option
"},{"location":"basics/blender_fundamentals/3_assignment_camera_and_views/#fly-mode","title":"Fly mode","text":"
  1. Add a camera to the scene (Shift-A > Camera). It will be placed at the position of the 3D cursor (the small red-white striped circle and axes).
  2. Change the upper-right view to this camera
  3. Activate fly mode with Shift-` (backtick). Use the ASDWXQE keys to move this camera close to the two bunny ears and look between them. You can change the fly speed with the mouse Wheel. In fly mode you can confirm the current view with LMB or press Enter. Press Esc to cancel and go back to the original view.
"},{"location":"basics/blender_fundamentals/avoiding_data_loss/","title":"\u26a0\ufe0f Avoiding data loss","text":"

There are some things to be aware of when working with Blender that might behave a little differently from other programs, or from general expectations, and that can potentially cause you to lose work.

"},{"location":"basics/blender_fundamentals/avoiding_data_loss/#the-file-overwrite-prompt-is-very-subtle","title":"The file overwrite prompt is very subtle","text":"

Suppose we have saved our work to a file scene.blend. We then make some more changes in Blender to create a second version of our scene and save this as scene2.blend. Finally, we make a third version and intend to save this as scene3.blend, but we forget to change the file name in the save dialog and it stays at the current scene2.blend. The Blender way of warning you that you are about to overwrite an existing file is really subtle:

Notice the red color behind the file name? That's the signal that the file name you entered is the same as an existing file in the current directory. If we change the file name to something that doesn't exist yet the color becomes gray again:

The File > Save As workflow (and similar for related file dialogs) is a somewhat double-edged sword:

  • If you're aware of the above signal and intend to quickly overwrite the current file you can simply press Enter once in the dialog, and the file will be saved without any "Overwriting, are you sure?" prompt being shown. So in this respect the UI stays out of your way and avoids an extra confirmation dialog.
  • But if you miss the red prompt or are unaware of its meaning then it's easy to accidentally overwrite existing work.
"},{"location":"basics/blender_fundamentals/avoiding_data_loss/#easy-file-versions","title":"Easy file versions","text":"

A nice way to save successive versions of a file is using the + button right of the file name, as shown in the pictures above. Using the + (and -) you can easily change the version number at the end of a file name, e.g. scene2.blend to scene3.blend. The red overwrite indicator will also update depending on the existence of the chosen file name.

Warning

Using the + button merely increments the number in the file name. It does not guarantee that the file does not exist yet (i.e. no check is made with what's on disk).

"},{"location":"basics/blender_fundamentals/avoiding_data_loss/#unused-data-blocks-in-the-scene-are-not-saved","title":"Unused data-blocks in the scene are not saved","text":"

Suppose you have a 3D scene and have created a material A that you use on some object. You then create a material B and assign it to the same object, causing material A to now be unused in the scene. If you save your scene to file at this point material A will not get saved to the file, as it is not referenced by anything in the scene. This automatic \"garbage collection\" feature of Blender is somewhat controversial, and it is definitely good to be aware of this behaviour.

For most scene elements used in the Basics part of this course garbage-collection-on-save does not really cause concern, except for the case of Materials (as described in the example above). For materials, and other scene elements, you can see if they are unused by checking for a 0 left of their name when they appear in a list:

The quick fix in case you have a material that is currently not used in the scene, but that you definitely want to have saved to file, is to use the \"Fake User\" option by clicking the shield icon (be sure to enable this option for the right material!):

You can verify the material now has a fake user as intended by checking for an F left of its name:

Note that you can use the same Fake User option for some other types of scene elements as well.

We have a more detailed discussion of the garbage collection system in a section in the Python scripting reference. The behaviour described relates to the data-block system that Blender uses internally, which can also be influenced from Python. For normal use the description above should be sufficient.

"},{"location":"basics/blender_fundamentals/avoiding_data_loss/#recovering-lost-work","title":"Recovering lost work","text":"

Murphy's Law usually strikes when you least expect it. Fortunately, there are several layers of defense in case something goes unexpectedly wrong when saving files, or in case Blender crashes. It depends on the situation you're trying to recover from which one of the options below provides the best results, if applicable.

Please check what each of these features does, to make sure you don't accidentally make things worse by using one of the recover options within Blender in the wrong way.

"},{"location":"basics/blender_fundamentals/avoiding_data_loss/#those-blend1-files","title":"Those .blend1 files?","text":"

You might notice that when you overwrite an existing file, say file.blend, another file called file.blend1 will now have appeared next to it in the same directory. This is Blender's method for automatically keeping around the previous version of the file you overwrote: it first moves the existing file.blend to file.blend1, and only then saves the new file.blend.

So if you accidentally overwrite a file you can still get to the previous version (the .blend1 file), as long as you haven't overwritten more than once.

Keeping more than 1 previous version

You can actually have multiple previous versions kept around if you like. The preference setting for this is Save & Load > Blend Files > Save Versions, which defaults to 1. If you increase it, files with extensions .blend2, .blend3 and so on will be kept around.

"},{"location":"basics/blender_fundamentals/avoiding_data_loss/#auto-save","title":"Auto save","text":"

By default, Blender will automatically save your current scene to a file in a temporary directory every few minutes (2 minutes by default). The settings that control this are under Save & Load > Blend Files, specifically the checkbox on Auto Save to enable auto-save and the Timer (Minutes) setting under Auto Save.
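Both safety nets can also be configured from Python, for example in a startup script. A sketch using the Preferences API (the values are just examples):

import bpy

prefs = bpy.context.preferences.filepaths
prefs.save_version = 2                       # keep .blend1 and .blend2 backups
prefs.use_auto_save_temporary_files = True   # enable Auto Save
prefs.auto_save_time = 2                     # Timer (Minutes) between auto-saves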

This auto-save file is stored in your system's temporary directory, and uses the process ID of Blender in the file name, as well as the string _autosave. Here is an example from a Linux system, where /tmp is used and Blender's process ID is 66597:

melis@juggle 22:13:/tmp$ ps aux | grep blender
melis      66597  1.2  5.7 1838680 463920 ?      Sl   21:54   0:14 blender

melis@juggle 22:13:/tmp$ ls 66597*
66597_autosave.blend

See this section of the Blender manual on recovering a session from an auto-save file (you can also copy or load the file manually, of course; there is nothing special about it).

Edit mode data not saved

If you happen to be in edit (or sculpt) mode at the time Blender does an auto-save, then the current updated state of the mesh will not get saved. This is a limitation of the auto-save feature.

"},{"location":"basics/blender_fundamentals/avoiding_data_loss/#last-session-accidental-quit-without-saving","title":"Last session (accidental quit without saving)","text":"

Whenever Blender quits normally (i.e. not a crash) it will save the current session state to a file called quit.blend in your system's temporary directory. You can easily load this file with the File > Recover > Last Session option (or copy it to a different location and load it as any Blender file).

One of the cases where this feature might come in handy is if you quit Blender, have unsaved changes, but accidentally click the Don't Save button in the Save changes before closing? dialog. The quit.blend file in this case will contain those unsaved changes. But be sure to make a copy of it before quitting Blender again, as that will overwrite it.

Info

Note that there currently is no option to disable this Save-on-Quit feature. So for large scenes this will incur a (usually short) delay when exiting.

"},{"location":"basics/blender_fundamentals/avoiding_data_loss/#blender-crash","title":"Blender crash","text":"

In case Blender crashes it usually does not manage to save the current scene to a recovery file. In that case you will hopefully be able to recover any lost work using the files saved through the features described above.

"},{"location":"basics/blender_fundamentals/cameras_and_views/","title":"Cameras and views","text":"

This section shows cameras and how to work with them. In the exercise after this section you get to try a lot of the operations shown, so following the video along isn't strictly needed. If you do want to follow along, the file used is data/blender_basics/cameras.blend.

"},{"location":"basics/blender_fundamentals/first_steps/","title":"First steps in the user interface","text":"

Tip

A lot of new concepts and UI elements will be introduced in the upcoming videos. It probably works best to watch the video(s) on a certain topic and try the operations shown and the corresponding exercise(s) in Blender yourself, before moving on to the next topic.

"},{"location":"basics/blender_fundamentals/first_steps/#starting-blender","title":"Starting Blender","text":"

In general, starting Blender is no different from starting any other GUI application.

However, warning and error messages will be printed to the console window. How the console window is available depends on the operating system you're working on:

  • (All operating systems) If you start Blender from a terminal window, e.g. xterm or Windows Command window, then Blender output will be printed in that window
  • (Windows only) If Blender was started from the Start menu, or using a desktop icon, then you can toggle the associated console window using the Window > Toggle System Console option

See this section in the Blender manual for more details on starting Blender from the command line and details specific for each operating system.

"},{"location":"basics/blender_fundamentals/first_steps/#user-interface-fundamentals","title":"User interface fundamentals","text":"

We will go over fundamentals of the user interface in terms of interaction and areas, specifically the 3D view and Outliner. We also touch on a number of often-performed operations, such as rendering an image and changing the set of selected objects. We also look a bit closer into keyboard shortcuts and menus.

It's probably best to follow along in Blender on your own system while viewing the video. The files used in the video can be found under data/blender_basics.

Slow 3D interaction

If the interaction in the 3D view isn't smooth (as seen in the video) on your PC/laptop something might be wrong in the setup of your system. Please contact us if this appears to be the case.

Accidental 'Edit mode'

If the 3D view (or some of the other areas) suddenly appear to behave strangely, or you now see your mesh with all kinds of dots or lines then you might have accidentally entered the so-called \"Edit Mode\" or any of the other modes available (Tab and Ctrl-Tab are used for this). Check the menu in the upper left of the 3D view, which should read Object Mode:

In this course we will use only Object Mode (and briefly use Vertex Paint mode in one of the exercises). You can use the drop-down menu shown above (or the Ctrl-Tab menu in the 3D view) and pick Object Mode to get back to the correct mode.

Accidental workspace switch

Another thing that might happen is that you accidentally click one of the tabs at the top of the screen, which then completely changes the layout of your user interface. These tabs are used to switch between workspaces, where each workspace allows a different layout to focus on a certain task (e.g. 3D modeling, versus shader editing, versus animation). The default workspace is Layout and you might have to switch back to that one:

"},{"location":"basics/blender_fundamentals/first_steps/#some-user-interface-tips","title":"Some user interface tips","text":"
  • To bring up the relevant section of the official Blender manual for (almost) any user interface element, e.g. button, setting or menu, right-click on that element and click Online Manual. This will start a web browser showing the relevant manual page.
  • You can hover with the mouse over pretty much any UI element to get a tooltip with a short description, including shortcut key(s) if available.
  • The keyboard and mouse shortcuts for object selection, editing, view interaction, etc work mostly the same in all Blender editors. So G to grab, X to delete, LMB to select, Shift-MMB to translate, Wheel to zoom, etc.
  • The mouse controls the current area in focus and any keyboard actions are applied in the active area first.
  • You can maximize a user interface area by pressing Ctrl+Spacebar with the mouse in the area you want to maximize. This can sometimes be useful to temporarily get a larger area to work with. You can use the same shortcut to toggle the area back to its original size, or use the Back to Previous button at the top of the screen.
"},{"location":"basics/blender_fundamentals/first_steps/#changes-to-default-preference-settings","title":"Changes to default preference settings","text":"

Here we suggest some preference settings to change from their default value.

Optional

It's not required to change these defaults, but we find they help us in working with Blender, and so might be useful for you as well

Under Edit > Preferences, in the Interface tab:

  • Under Display disable Splash Screen. This will save you a click to get rid of the splash screen each time you start Blender. If you ever want to look at the splash screen again you can use the Blender logo icon in the top bar of the window and pick Splash Screen.
  • Under Editors > Status Bar enable Scene Statistics, System Memory and Video Memory. This will show extra scene statistics in the status bar. Another way to do this is to right-click on the status bar and enable the same options.
  • Under Editors > Temporary Editors set Render In to Image Editor. This will cause the rendered image to be displayed as a replacement of the 3D view, instead of in a separate window. After rendering press Escape to get back the 3D view that was replaced by the rendered output.
  • In case you find that Blender's user interface elements, such as buttons or menu text, are too small you can scale up the UI with a single setting under Display > Resolution Scale. If you change the value you can see the changes in the UI immediately.

Orbit around selection

Another option which you might consider enabling is Orbit Around Selection. By default this is turned off and in that mode any rotation of the 3D viewport will be around the center of the view, which might cause selected objects to go out of view. When the option is turned on viewport rotation will be around the selected object(s), always keeping them in view. You can find this option on the Navigation tab under Orbit & Pan.

"},{"location":"basics/blender_fundamentals/introduction/","title":"Introduction","text":"

This first part of the course is meant to introduce you to Blender, its user interface and basic features. We'll start with a brief look into some of the background of Blender and challenges in learning it.

"},{"location":"basics/blender_fundamentals/objects_3d_cursor_undo/","title":"Objects, 3D cursor, Undo","text":"

A short section on how to add, duplicate or delete objects. What the 3D cursor is and what role it plays, plus the undo system.

"},{"location":"basics/blender_fundamentals/scene_hierarchy/","title":"Scene hierarchy","text":"

We briefly look at the way a scene is organized and how this interacts with the properties panel.

The above actually isn't the full story, as we only briefly mention collections. In the official Blender manual you can find more detail on collections here, in case you want to know more.

"},{"location":"basics/blender_fundamentals/transformations/","title":"Transformations","text":"

This might be a bit more of a technical subject and deals with the way 3D objects can be transformed in a scene. The transformations exercise will allow you to try most of these operations yourself. But if you want to follow along with the video then the file used is data/blender_basics/three_objects.blend.

"},{"location":"basics/blender_fundamentals/transformations/#summary-of-shortcut-keys","title":"Summary of shortcut keys","text":"
  • G to enter translation mode (\"grab\")
  • S to enter scale mode
  • R to enter rotation mode
  • LMB or Enter to confirm the current transformation, Escape to cancel while still in one of the transformation modes
  • While in a transformation mode press X, Y or Z to constrain the transformation to the X, Y or Z axis, respectively.
  • While in a transformation mode press Shift+X, Shift+Y or Shift+Z to constrain the transformation to the plane perpendicular to the X, Y or Z axis, respectively.
"},{"location":"basics/blender_fundamentals/ui/","title":"User interface configuration","text":"

A short section on how the Blender user interface system works and how to configure it to your liking. This is useful to know as the current UI layout is saved in a Blender file, so files you get from some other source might look very different.

"},{"location":"basics/importing_data/exercise_vertex_colors/","title":"\ud83d\udcbb Vertex colors","text":"

This exercise uses a file exported from the ParaView scientific visualization package, and walks through some of the workflow needed to get such data into Blender.

X3D Importer

Check if you have a menu option to import the X3D format. For this, go to File > Import and check if there is an entry X3D Extensible 3D (.x3d/.wrl).

If you do NOT have the X3D import option then perform the following steps to enable the X3D add-on (otherwise continue with the numbered steps below):

  • Open the preferences window with Edit > Preferences
  • Switch to the Add-ons tab
  • In the search field (with the little spyglass) enter \"X3D\", the list should get reduced to just one entry
  • Enable the checkbox left of \"Import-Export: Web3D X3D/VRML2 format\"
  • Close the preferences window (it saves the settings automatically)
  • Under File > Import there should now be a new entry X3D Extensible 3D (.x3d/.wrl)
  1. Importing data always adds to the current scene. So start with an empty scene, i.e. delete all objects.

  2. Make sure Blender is set to use Cycles as the renderer. For this, switch to the Render tab in the properties area. Check the Render Engine drop-down, it should be set to Cycles.

  3. Import file glyphs.x3d using File > Import > X3D Extensible 3D. In the importer settings (on the right side of the window when selecting the file to import) use Forward: Y Forward, Up: Z Up.

  4. This X3D file holds a scene exported from ParaView. Check out the objects in the scene to get some idea of what it contains.

  5. Delete all the lights in the scene to clear everything up a bit. Add a single sun light instead.

"},{"location":"basics/importing_data/exercise_vertex_colors/#inspecting-the-vertex-colors","title":"Inspecting the vertex colors","text":"

This 3D model has so-called \"vertex colors\". This means that each vertex of the geometry has an associated RGB color, which is a common way to show data values in a (scientific) visualization.

There are a few ways to inspect if, and what, vertex colors a model has. First, there is the so-called Vertex Paint mode. In this mode vertex colors are shown when they are available and can even be edited (\"painted\").

To enable Vertex Paint mode:

  1. Select the 3D arrows in the scene (as the only single selected object)
  2. Open the Mode pie menu with Ctrl-TAB and switch to Vertex Paint. An alternative is to use the menu showing Object Mode in the upper-left of the 3D view header and select Vertex Paint there.
  3. The 3D View should now show the arrow geometry colored by its vertex colors. The colors shown are velocity values from a computational flow simulation, using the well-known rainbow color scale (low-to-high value range: blue \u2192 green \u2192 yellow \u2192 orange \u2192 red)
"},{"location":"basics/importing_data/exercise_vertex_colors/#altering-vertex-colors","title":"Altering vertex colors","text":"

You might have noticed two things have changed in the interface: 1) the cursor is now a circle, and 2) the tool shelf on the left now shows color operations (paint brush = Draw, drop = Blur, ...)

As this is Vertex Paint mode you can actually alter the vertex colors. This works quite similar to a normal paint program, like Photoshop or the GIMP, but in 3D. Although it may not make much sense to change colors that are based on simulation output (like these CFD results) it can still be interesting to clean up or highlight vertex-colored geometry in certain situations.

  1. Experiment with vertex painting: move the cursor over part of the arrow geometry, press and hold LMB and move the mouse. See what happens.
  2. Switch to the Active Tool and Workspace settings tab in the properties area on the right-hand side of the window
  3. You can change the color you're painting with using the colored box directly right of Draw in the bar at the top of the 3D view area. Click the color to bring up the color chooser. You can also change the radius and strength settings to influence the vertex painting.
  4. Change back to Object Mode using the Ctrl-TAB mode menu when you're done playing around. Note that the arrows no longer show the vertex colors.
"},{"location":"basics/importing_data/exercise_vertex_colors/#rendering","title":"Rendering","text":"

The second way to use vertex colors is to apply them during rendering.

  1. If you've screwed up the vertex colors really badly in the previous steps you might want to reimport the model...
  2. Make the 3D arrows in the scene the single selected object
  3. Switch to the Object Data tab in the properties
  4. Check that there is an entry \"Col\" in the list under Color Attributes. A model can have multiple sets of vertex colors, but this file has only one set called \"Col\", which has domain Face Corner and type Byte Color.

Now we will set up a material using the vertex colors stored in the \"Col\" layer.

  1. Go to the Material tab
  2. Select the material called "Material" in the drop-down list left of the New button. This sets the (grey) material "Material" on the arrows geometry.
  3. Press F12 (or use interactive render) to get a rendered view of the current scene.

You'll notice that all the geometry is grey/white, i.e. no vertex colors are used. We'll now alter the material to use vertex colors.

  1. In the settings of the material there is a field called \"Base Color\" with a white area right of it. This setting controls the color of the geometry.
  2. Click the button left of the color area (it has a small yellow circle in it)
  3. Pick Attribute from the left-most column labeled Input. This specifies that the material color should be based on an attribute value.
  4. Base Color is now set to Attribute | Color. Directly below the entry there is a Name field. Enter \"Col\" here, leave Type set to Geometry. This specifies that the attribute to use is called \"Col\" and comes from the mesh geometry (i.e. our vertex colors).
  5. Now render the scene again
  6. The rendered image should now be showing the arrow geometry colored by vertex colors
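The same node setup can be created from Python. A sketch, assuming the material "Material" from the steps above and the color attribute named "Col":

import bpy

mat = bpy.data.materials["Material"]
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links

# Feed the "Col" color attribute into the Principled BSDF's Base Color input
attr = nodes.new("ShaderNodeAttribute")   # defaults to the Geometry attribute type
attr.attribute_name = "Col"
links.new(attr.outputs["Color"], nodes["Principled BSDF"].inputs["Base Color"])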

"},{"location":"basics/importing_data/exercise_your_data/","title":"\ud83d\udcbb Your own data","text":"

Info

If you do not have data that you want to import in Blender then you can skip this part.

  1. Think about your own data

    • What is the goal for importing the data?
    • What visual representation(s) of the data do you aim for?
    • What scene object types do you need for this?
    • What approach would you use to get it into Blender?
    • Challenges?
    • Problems?
  2. Try to import your own data, or a representative subset, using your chosen approach.

"},{"location":"basics/importing_data/introduction/","title":"Introduction","text":"

This chapter will present a lot of information on getting data into Blender through importing. It will describe the overall approach, available file formats and their relative strengths/weaknesses and look closer into handling specific types of data, specifically point data and volumetric data.

Most of this chapter consists of the video presentation below, which covers quite a few subjects. After you are done viewing the video there is a first exercise on vertex colors, which uses data we provide, while the second exercise is more of a guideline for when you want to import your own data.

As mentioned in the presentation the PDF slides for this chapter contain some more reference material on getting data from ParaView, VisIt and VTK.

Point cloud primitive (3.1+)

As shown in the video, one way to render point data is to use instancing for placing a simple primitive like a sphere at each point location. Working with such instanced geometry is somewhat limited, as it introduces a hit on performance and memory usage, both for interactive work in the user interface, as well as rendering in Cycles.

Starting with Blender 3.1, Cycles has dedicated support for rendering large numbers (millions) of points as spheres directly. However, in 3.1 there is currently no way to directly create a point cloud primitive by importing a file, and the only alternative is using Geometry Nodes to generate a point cloud primitive from a vertex-only mesh. But Geometry Nodes are not a topic in this Basics part of the course.
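For completeness, here is a hedged sketch of building such a vertex-only mesh from point data in Python (the coordinates are made up):

import bpy

points = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.5)]  # made-up point data

mesh = bpy.data.meshes.new("points")
mesh.from_pydata(points, [], [])          # vertices only: no edges, no faces
obj = bpy.data.objects.new("points", mesh)
bpy.context.scene.collection.objects.link(obj)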

Availability of importers/exporters (Linux distributions)

When using the official Blender binaries from https://www.blender.org all supported importers and exporters will be included.

But especially when using a Linux distribution's Blender package some features might not be available, usually due to libraries not being enabled when the package was built. For example, currently (May 2022) on Arch Linux the USD import/export support is not available in the Arch Blender package.

If you run into such issues, please download and use the official binaries instead.

"},{"location":"basics/rendering_lighting_materials/composition/","title":"Composition","text":"

Below you'll find a supplementary video on image composition. It is supplementary in the sense that you won't need it to do the exercises, but it might help you with your future Blender renders. The video gives some practical guidelines that could give your final renders the extra edge they need to stand out:

"},{"location":"basics/rendering_lighting_materials/gpu_rendering/","title":"GPU-based rendering","text":"

In general using Cycles with GPU-based rendering is a lot faster than rendering on a multi-core CPU. For example, here are render times on one of our workstations for the scene with the 3 monkey heads used in the video from the last chapter (showing camera settings and depth-of-field):

Type   Device                     Render time*
CPU    Intel Core i5 @ 3.20 GHz   50.16 s
GPU    NVIDIA GTX970              6.59 s

* 960x540 pixels, 128 samples per pixel

On this particular scene, with these settings and on this hardware using GPU rendering is roughly 7.6x faster! However, only by making a comparison on your particular system can you really find out if GPU rendering is a good option for you (for example, you might not have a very powerful GPU in your laptop or workstation).

Apart from performance there are some other aspects to consider with GPU rendering:

  • When doing a GPU render your desktop environment might become less responsive, although this has become less of a problem with recent Blender versions
  • A GPU usually has less memory available, which might cause problems with really large scenes

In case you want to try enabling GPU rendering go to the Preferences window (Edit > Preferences) and then the System tab. The settings available under Cycles Render Devices are somewhat dependent on the hardware in your system but should look a little like this:

GPU rendering in Blender has slightly different support depending on whether you're on Windows, Linux or macOS. Below, we summarize the options you can encounter. The most up-to-date official reference for this information is this page from the Blender manual.

"},{"location":"basics/rendering_lighting_materials/gpu_rendering/#windows-linux","title":"Windows, Linux","text":"

By default None will be active, meaning no GPU acceleration is used for rendering and it all happens on the CPU.

In general, on a PC/Laptop with an NVIDIA GPU the CUDA option is available and to be preferred, although OptiX might work well as an alternative (but will only be available on more recent NVIDIA GPUs).

On Windows systems with an AMD GPU the option HIP might be available and is then definitely worth a try.

"},{"location":"basics/rendering_lighting_materials/gpu_rendering/#macos","title":"macOS","text":"

macOS GPU rendering is still under development

The Blender 3.1 release notes warn that the GPU rendering implementation on macOS is in an early state. Performance optimizations and support for Intel GPUs are under development.

For macOS systems only the Metal option will be available, apart from the default None.

In Blender 3.1 the GPU rendering support in Cycles is based on the Metal API, which is not supported on all macOS systems (also depending on the system version). Only for the following two configurations is GPU rendering support currently available:

  • Apple M1 computers running macOS 12.2 or newer
  • Apple computers with AMD graphics cards running macOS 12.3 or newer

GPU rendering versus acceleration

This section is about GPU rendering in Cycles, which is different from GPU acceleration for the Blender user interface and EEVEE (see below) rendering. So even though your macOS system might not provide GPU rendering in Cycles, it might still work fine for Blender usage with a GPU-accelerated 3D viewport, while using CPU-based rendering.

"},{"location":"basics/rendering_lighting_materials/gpu_rendering/#a-thing-called-eevee","title":"A thing called EEVEE?","text":"

When consulting other Blender materials, specifically on rendering, you may see references to EEVEE. This is another render engine available in Blender, which is different from the Cycles engine we will be using in this course.

Even though EEVEE is meant for fast and highly interactive rendering work, even more so than the Cycles preview render we showed so far, we do not use EEVEE in this course. The reasons for this are:

  • We personally find Cycles to be more intuitive to work with and explain, as it is built around the path tracing algorithm, which is easy to understand while providing a very versatile set of rendering and lighting features. EEVEE's rendering setup is somewhat more complex, as it uses a combination of different techniques that needs more separate controls.
  • Cycles can render both on CPU and GPU, whereas EEVEE can only render on a GPU (more specifically, it needs OpenGL)
  • EEVEE doesn't support headless rendering, i.e. when starting a Blender render from the command-line without showing the user interface. This is especially relevant when rendering long animations on an HPC system, or other cluster environment without a GPU-accelerated display environment.
  • Cycles is more feature-complete, whereas EEVEE has some limitations compared to Cycles, although that situation improves with each Blender release
  • Although Cycles and EEVEE are getting closer in features with every Blender release they are still not fully equivalent. They also use separate controls in the UI for certain features. This would mean having to dedicate extra course material on these differences

If you do like more information on EEVEE then please check this section in the Blender manual.

"},{"location":"basics/rendering_lighting_materials/introduction/","title":"Introduction","text":"

This part of the course is all about aesthetics, the last part of the pipeline. By now you know the basics and are able to import some scientific data into Blender; the final thing that is left is deciding how the final image will look. What will the surface of your 3D model look like, what tangible texture and colors will it have, how will it be illuminated and finally how will the image be composed? All these things go hand in hand and need to be in balance to create an aesthetically pleasing image.

Before you start with the exercises, the following video will give you the theoretical and practical background needed for them. The video contains some Blender walk-throughs; if you want to follow along you can use the walk-through files in the walkthroughs/basics/06-rendering-lighting-and-materials directory.

Cycles X

Due to the 3.0 update of Blender and the introduction of Cycles X some details have changed when it comes to rendering with Blender. The video and exercises have been updated to accommodate this, but some of these changes might have been missed; please inform us if you find one of these discrepancies.

"},{"location":"basics/rendering_lighting_materials/rlm_assignment/","title":"\ud83d\udcbb Rendering, lighting and materials","text":"

Open the rlm_assignment.blend file and you'll see several objects in the scene: a ground plane, a plateau, Suzanne (the monkey head) and 3 knots.

The goal of this assignment is to place some lights, set the camera parameters to your liking, add materials to the objects and render the final image. We'll do this in steps.

Tip

To view your result with realistic lighting and materials use the Shading pie menu, which opens with the Z key:

  • Option Rendered shows realistic lighting and materials, with slower interaction
  • Option Solid shows simple colors and lighting, with faster interaction
"},{"location":"basics/rendering_lighting_materials/rlm_assignment/#lighting-creating-light-sources","title":"Lighting - Creating light sources","text":"

To see what we are doing in Rendered shading (Z-Rendered) we first need to add the lighting.

  1. Add one or two sun lights by either using the 3D view menu in the header (Add > Light > Sun) or use Shift-A > Light > Sun in the 3D view
  2. Try to position and rotate the lights so that they light the objects under a bit of an angle (G and R keys).
  3. Before we change the appearance of the lights we need to switch to Rendered using the Shading pie menu (Z > Rendered)
  4. Now adjust the Color and Strength settings under the Object Data properties tab in the properties panel; perhaps try to give one of the lights a warm yellowish sun-like color and the other a weaker, cold blueish color.
  5. In the same properties panel tab, try to adjust the Angle (or Radius or Size for the other light types) of the sun light and see how it affects the shadows. Small angles (or radii or sizes) create hard shadows, which are ideal for seeing minor details, while large angles (or radii or sizes) create soft shadows, which are better suited to reducing the overall contrast and are less straining on the eye.
  6. Now in the same properties editor tab, try out some different lamp types (Point, Sun, ...) to experiment with the different lighting effects they produce.
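For those curious about scripting (covered in depth in the Advanced part of the course), the same light setup can also be done through the Python API. This is a minimal sketch, run from Blender's built-in Python console or Text editor; the name KeySun and the energy, color and angle values are example choices, not prescribed by the exercise:

```python
import bpy
from math import radians

# Create a sun light data-block and an object that uses it
sun_data = bpy.data.lights.new(name="KeySun", type='SUN')
sun_data.energy = 3.0             # the Strength setting in the UI
sun_data.color = (1.0, 0.9, 0.7)  # a warm, yellowish sun-like color
sun_data.angle = radians(2.0)     # small Angle -> hard shadows

sun_obj = bpy.data.objects.new("KeySun", sun_data)
bpy.context.collection.objects.link(sun_obj)

# Rotate the light so it shines on the objects at an angle (R key equivalent)
sun_obj.rotation_euler = (radians(45), 0.0, radians(30))
```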

Bonus: If setting up the lamps is too cumbersome, you can go to the World tab in the properties editor, click the little globe drop-down menu button at the top and select HDRIWorldLighting. This will enable predefined environment lighting using a 360 image of somebody's living room. Do make sure that you de-activate (in the Outliner) or remove the lamps to see the full effect.

"},{"location":"basics/rendering_lighting_materials/rlm_assignment/#camera-setting-the-starting-point-of-the-light-paths-or-rather-camera-paths","title":"Camera - Setting the starting point of the light paths (or rather camera paths)","text":"

With the lighting set up, we can now see what each of the camera settings does. Or, from the light ray paths perspective: configure the starting point of the light rays.

  1. First you need to be in the camera view to see the effect of the camera settings. Select the View Camera option in the View pie menu (`-button), or use the 3D view menu in the header (View > Viewpoint > Camera). The former is a toggle, so if you are already in the camera view it will switch it off again.
  2. Try changing the camera's focal length. For this, select the Camera object and go to the Lens settings in the Object Data properties tab in the properties panel. There you can find the Focal Length setting; try for example the values 18, 50 and 100 and see what effect this has. Notice that when you set the Focal Length to a lower value you might see clipping (the scene is cut off from a certain distance). This can be changed by setting the Clip Start in the same Lens section to a lower value, e.g. 0.01. Finally, set the focal length to the desired value.
  3. Next we are going to bring the focus to a chosen object in the scene with the depth of field settings. For this, select the camera and scroll down in the Object Data properties tab in the properties panel to the Depth of Field settings. Tick the check-box next to Depth of Field to activate it. Now set the Focus on Object value to the Suzanne object and test different values for the Aperture > F-Stop setting.
  4. When you are done, disable depth of field again; this makes the material editing easier. A sketch after this list shows how to set the same camera values from Python.
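As a sketch of the scripted equivalent, the same camera settings can be changed through the Python API. This assumes the default camera object is named Camera and the monkey object Suzanne, as in the assignment file; the focal length and f-stop values are just examples:

```python
import bpy

cam = bpy.data.objects["Camera"].data  # the camera data-block

# Lens settings
cam.lens = 50          # Focal Length in mm; try 18, 50 and 100
cam.clip_start = 0.01  # lower Clip Start to avoid clipping at short focal lengths

# Depth of Field settings
cam.dof.use_dof = True
cam.dof.focus_object = bpy.data.objects["Suzanne"]  # Focus on Object
cam.dof.aperture_fstop = 2.8                        # Aperture > F-Stop
```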

If the lighting gives the desired effect when looking through the configured camera, you can move on to giving the objects the look you want with materials in the next section.

"},{"location":"basics/rendering_lighting_materials/rlm_assignment/#materials-how-will-the-light-paths-bounce","title":"Materials - How will the light paths bounce?","text":"

To design how the light is reflected or refracted off the objects you are going to give each object a different material.

  1. For each object (including the ground plane and plateau):
    • Select the object and go to the Material tab in the properties editor.
    • In the Material tab click the New button.
    • Then under the Surface section set the Surface parameter to either Diffuse BSDF, Glossy BSDF or Principled BSDF.
  2. Try playing with the material settings Roughness and Color (the latter is called Base Color for the Principled BSDF); a scripted version of these steps is sketched after this list.
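A minimal scripted sketch of these material steps; the material name and the color and roughness values are arbitrary choices, and the material is assigned to whatever object is currently active:

```python
import bpy

obj = bpy.context.active_object

# Create a node-based material; it gets a Principled BSDF node by default
mat = bpy.data.materials.new(name="ExampleMaterial")
mat.use_nodes = True

bsdf = mat.node_tree.nodes["Principled BSDF"]
bsdf.inputs["Base Color"].default_value = (0.8, 0.2, 0.2, 1.0)  # RGBA
bsdf.inputs["Roughness"].default_value = 0.4

# Assign the material to the object (the scripted counterpart of clicking New)
obj.data.materials.append(mat)
```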

Bonus: If roughness and color alone don't give you the look you want with the Principled BSDF, also have a look at the other settings mentioned in the slides: Metallic, Transmission, IOR and Subsurface.

"},{"location":"basics/rendering_lighting_materials/rlm_assignment/#rendering-creating-your-final-image","title":"Rendering - Creating your final image","text":"

Lights, camera, (materials,) set aaaaaaand action!... Now you will set the desired render settings to generate the final image!

  1. Go to the properties editor and set the following settings:
    • Render properties tab:
      • Set Device to GPU Compute. If your system doesn't have a (powerful) GPU, set it to CPU.
      • Sampling section: set Render > Samples to 128
      • Light Paths section: set Clamping > Indirect Light to 1.0
    • Output tab:
      • Format section: set Resolution to 1920x1080, 100%.
  2. If everything is set, press F12. A scripted equivalent of these settings is sketched after this list.
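The render configuration above can also be expressed in Python. A minimal sketch; the output path is a hypothetical example, and using the GPU additionally requires a compute device to be enabled under Edit > Preferences > System:

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'

# Render properties
scene.cycles.device = 'GPU'               # or 'CPU' without a (powerful) GPU
scene.cycles.samples = 128                # Sampling > Render > Samples
scene.cycles.sample_clamp_indirect = 1.0  # Light Paths > Clamping > Indirect Light

# Output properties
scene.render.resolution_x = 1920
scene.render.resolution_y = 1080
scene.render.resolution_percentage = 100

# Render and save, the scripted equivalent of F12 plus Image > Save As...
scene.render.filepath = "/tmp/rlm_render.png"  # hypothetical output location
bpy.ops.render.render(write_still=True)
```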

Now the Image editor will replace the 3D view and your image will gradually be rendered. (In Blender versions before 3.0 the image appeared in pieces called "tiles"; with Cycles X it refines progressively as a whole.)

  1. Finally, when the image looks the way you want, don't forget to save it! In the Image editor go to the Image menu, click Save As..., then choose a location and save the image.
"},{"location":"basics/rendering_lighting_materials/rlm_assignment/#performance-speed-up-those-renders","title":"Performance - Speed up those renders","text":"

Now that we know how to improve the look of the scene and save the final render we will improve the speed of the render.

  1. Write down the render time shown at the top of the Image editor (example: Frame:1 | Time:00:09.84 | Mem:6.09M, Peak: 164.29M).
  2. Close the Image editor if it is still open.
  3. Change the following settings in the Render properties tab:
    • Sampling section: set Render > Samples to 32
    • Sampling section: turn on the denoiser with Render > Denoise
  4. Now press F12 again to render another image.

As you can see when comparing the render times, this render finishes significantly faster than the previous one.

Render quality when using denoise features

One thing to keep in mind is that when you are using the denoise feature you will lose a little detail.

Noise Threshold

Blender 3.0 introduced another feature to reduce render times, called Noise Threshold. Turning it on and giving it a value between 0.1 and 0.001 will terminate the sampling of a pixel early once it reaches the given noise threshold, thereby reducing the render time.
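Both speed-ups can be enabled from a script as well. A minimal sketch, assuming Cycles is the active render engine:

```python
import bpy

scene = bpy.context.scene

scene.cycles.samples = 32          # fewer samples: faster but noisier
scene.cycles.use_denoising = True  # Render > Denoise

# Noise Threshold is exposed as adaptive sampling in the API: sampling of a
# pixel stops early once its estimated noise drops below the threshold
scene.cycles.use_adaptive_sampling = True
scene.cycles.adaptive_threshold = 0.01  # try values between 0.1 and 0.001
```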

"},{"location":"basics/simple_mesh_editing/introduction/","title":"Introduction","text":"

This chapter will introduce the basic mesh editing tools available within Blender. The basic mesh editing will be performed with so-called modifiers, which make it relatively easy to perform large mesh editing operations that can greatly impact the visual representation of your 3D models. Below you'll find a video that gives a theoretical introduction followed by a practical walk-through in Blender. If you want to follow along with the walk-through you can find the Blend files in the walk-through directory walkthroughs/basics/04-simple-mesh-editing.

After you have watched the video about simple mesh editing you are ready for the exercises!

"},{"location":"basics/simple_mesh_editing/sme_assignment/","title":"\ud83d\udcbb Simple mesh editing","text":"

In this exercise you will use some mesh modifiers on an iso-surface of a CT scan of a fish and try to see if you can reveal the insides.

Once you have opened the exercise blend file sme_assignment.blend you'll see the fish iso-surface above a plane.

Info

This exercise uses a somewhat large 3D model, at around 155,000 triangles. On most modern PCs and laptops this should not pose a problem, so it is a good test of whether your system can handle meshes of this size (if not, that may indicate a hardware limitation).

"},{"location":"basics/simple_mesh_editing/sme_assignment/#decimate-reducing-the-triangles","title":"Decimate - Reducing the triangles","text":"

The fish 3D model has, for your convenience, been divided into two parts: the fishskin and the fishbones. Combined, the model has a large number of triangles (155k for the fishskin and 573k for the fishbones). On lower-end devices this can slow everything down to a crawl. To be able to add modifiers or edit the meshes with reasonable interactivity you first need to decimate them. Decimation reduces the number of triangles by iteratively merging adjacent triangles into one.

  1. Select the fishskin by clicking on the fishskin with LMB.
  2. Once selected go the Modifiers tab in the properties editor.
  3. Click Add Modifier and add the Decimate modifier (it's in the Generate column).
  4. Keep the decimation type set to Collapse, set the Ratio to 0.5 and press Enter. The mesh processing will take a couple of seconds, after which the number of triangles is reduced to ~77k, visible in the modifier under Face Count. You can reduce it even further, but that might negatively affect the appearance and shape of the model by creating hard edges on the surface.
  5. Once you are satisfied with the results, make the changes permanent by pressing Apply (under the drop-down menu arrow to the right of Decimate), or by pressing Ctrl-A while hovering over the Decimate modifier. Again, this can take a few seconds.
  6. Now that the fishskin triangles have been reduced, select it and press H to hide it, or click the eye icon in the Outliner. This hides the fishskin and reveals the fishbones.
  7. Perform the same steps on the fishbones and try to reduce the triangle count significantly without affecting the appearance of the model.
  8. Now unhide the fishskin again for the next assignment by clicking the eye icon. A scripted version of the decimation is sketched after this list.
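As a scripting preview, the decimation can also be set up through the Python API. A minimal sketch; the object name fishskin is an assumption based on this exercise, so adjust it to the actual name shown in your Outliner:

```python
import bpy

obj = bpy.data.objects["fishskin"]  # assumed object name

# Add a Decimate modifier in Collapse mode, keeping ~50% of the triangles
mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
mod.decimate_type = 'COLLAPSE'
mod.ratio = 0.5

# Make the change permanent, the scripted equivalent of pressing Apply
bpy.context.view_layer.objects.active = obj
bpy.ops.object.modifier_apply(modifier=mod.name)
```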
"},{"location":"basics/simple_mesh_editing/sme_assignment/#smooth-ironing-the-creases","title":"Smooth - Ironing the creases","text":"

The geometry of the fishskin and the fishbones both look a bit rough because of the iso-surface extraction algorithm. If that is not desired, the rough edges can be smoothed out with the Smooth Modifier.

  1. Select the fishskin model by clicking on the fishskin with LMB.
  2. Go to the Modifiers tab in the properties editor.
  3. Click Add Modifier and add the Smooth modifier (it's in the Deform column).
  4. Keep the Factor at 0.5 but increase the Repeat to 5. Be careful when using the slider: every change re-triggers the modifier, and if you accidentally slide to a high number it will take a while to calculate.

Unfortunately you will notice that the Smooth modifier creates tears along the skin model. This conveniently reveals that the underlying mesh triangles are not fully connected, but consist of separate connected patches. These patches stem from the creation of the model, where the geometry was calculated on multiple processors and each patch was produced by a separate process. This can be fixed in Edit mode, but that is covered in the advanced course.

  1. The Factor is good as it is, but changing the value shows what a drastic effect it has.
  2. Once you are satisfied with the smoothness of the fishskin press Apply and try to do the same with the fishbones. A scripted version of this modifier is sketched after this list.
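The scripted version of the Smooth modifier looks very similar; again the object name is an assumption. Note that the UI's Repeat setting is exposed as iterations in the API:

```python
import bpy

obj = bpy.data.objects["fishskin"]  # assumed object name

mod = obj.modifiers.new(name="Smooth", type='SMOOTH')
mod.factor = 0.5    # Factor
mod.iterations = 5  # Repeat in the UI

bpy.context.view_layer.objects.active = obj
bpy.ops.object.modifier_apply(modifier=mod.name)
```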
"},{"location":"basics/simple_mesh_editing/sme_assignment/#boolean-slicing-the-geometry","title":"Boolean - Slicing the geometry","text":"

If you want to show the inside of the fish within the context of the outside, you can slice through the fishskin model and reveal the insides of the fish using a Boolean modifier.

  1. Before you add the Boolean modifier you first need to reveal the fishskin mesh object again by clicking the eye icon in the Outliner.
  2. Select the fishskin mesh object and go to the Modifiers tab in the properties editor to add a Boolean modifier (it's in the Generate column).

Now that the Boolean modifier is added we still need another 3D mesh object to perform the Boolean operation with. You are now going to prepare that mesh object.

  1. Move the mouse into the 3D view and add a new UV sphere with Shift-A > Mesh > UV Sphere
  2. Scale and translate (S and G keys) the UV sphere so that it overlaps a part of the fish which you want to clip away.
  3. The UV sphere is now shown as a solid surface, which is not desirable when you want to use it for clipping because you want to see through it. You can change the representation of an object in the 3D view using the Object properties under Viewport Display: set Display As to Wire.
  4. Also when you want to look at the results in Rendered mode you need to make the sphere invisible using the Ray Visibility settings under Visibility: disable all check-boxes (Camera, Diffuse, Glossy, Transmission, Volume Scatter and Shadow)

Now that you have prepared the mesh object to perform the Boolean operation with, you can continue setting up the Boolean modifier.

  1. Select the fishskin mesh object and go to the Modifiers tab in the properties editor to reveal the already added Boolean modifier.
  2. Now under Object, select the Sphere mesh object.
  3. Before you start moving the clipping Sphere around, change the Solver to Fast. This is a simpler and better-performing solver and, in our case with the underlying broken patched mesh, also the better option, since it is able to handle this type of geometry.
  4. Now if you select the Sphere object and translate and scale it over the fishskin mesh object, you can clip away any desired part, as the Boolean modifier updates in real time. The sketch after this list shows the sphere preparation and Boolean setup from Python.
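A scripted sketch of the sphere preparation and Boolean setup; the object names fishskin and Sphere are assumptions, and the per-ray visibility properties are the object-level ones used by Blender 3.x:

```python
import bpy

fish = bpy.data.objects["fishskin"]  # assumed object names
sphere = bpy.data.objects["Sphere"]

# Show the clipping sphere as a wireframe and hide it from all render rays
sphere.display_type = 'WIRE'
sphere.visible_camera = False
sphere.visible_diffuse = False
sphere.visible_glossy = False
sphere.visible_transmission = False
sphere.visible_volume_scatter = False
sphere.visible_shadow = False

# Add the Boolean modifier on the fishskin and clip away the sphere's volume
mod = fish.modifiers.new(name="Boolean", type='BOOLEAN')
mod.operation = 'DIFFERENCE'
mod.object = sphere
mod.solver = 'FAST'  # handles the broken patched mesh better here
```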

As you might have noticed, the Boolean modifier does have some problems with this particular mesh, and the placement of the clipping sphere must be precise. This is of course not always the case, but it should be kept in mind when working with the Boolean modifier.

Finally you can view your results with Cycles with Rendered shading (Z > Rendered) for better lighting and materials. Or you can give the camera a better position and make a nice final render.

"},{"location":"news/","title":"News","text":""},{"location":"news/2023/09/19/new-courses-being-planned-for-q4-2023/","title":"New courses being planned for Q4 2023","text":"

We are in the process of finalizing dates for a new set of Basics and Advanced Blender courses at the end of 2023. These will be held online. Watch this news section, or the schedule.

"},{"location":"news/2023/11/20/basics-course-starting-4-december-2023/","title":"Basics course starting 4 December 2023","text":"

A new Basics course will be held starting 4 December 2023. The course is self-paced using our online materials, supported by a kick-off meeting followed by weekly check-in moments. The course runs over a 3-week period and will be held online.

See the schedule for precise dates and times. You can register for the course through this page.

"},{"location":"overview/about/","title":"About us","text":"

We are members of the High-Performance Computing & Visualization (HPCV) group at SURF, and are based in Amsterdam. SURF is a cooperative association of Dutch educational and research institutions in which the members combine their strengths to acquire or develop digital services, and to encourage knowledge sharing through continuous innovation.

Within the HPCV group we support users of the Dutch National compute infrastructure with visualization expertise and software development, on topics such as data visualization, remote visualization, 3D modeling and rendering and use of eXtended Reality (XR) for research and education.

Part of our job is to provide courses on topics related to visualization in HPC. This Blender course was created for the PRACE Training Center and first provided (in-person) in 2018, and has since been repeated at least once a year.

"},{"location":"overview/about/#paul-melis","title":"Paul Melis","text":"

Paul Melis has an MSc in Computer Science from the University of Twente in The Netherlands and worked on topics in scientific visualization and VR at the University of Groningen and University of Amsterdam before joining SURFsara in 2009 (which has since become part of SURF).

At SURF he is involved in several activities related to visualization, including realizing visualization projects for end-users, teaching courses and providing user support for visualization tasks on our HPC systems. As part of the SURF innovation portfolio he is involved in the use of extended reality (XR) for research and education. He likes to use Blender for all things 3D, but also works with ParaView, and sometimes develops a bit of code in Python, C++ or Julia.

"},{"location":"overview/about/#casper-van-leeuwen","title":"Casper van Leeuwen","text":"

Casper has an MSc in Computer Science from Delft University of Technology, where he graduated on the topic of medical visualization. He has been at SURFsara since 2014.

He mainly works on web-based 2D/3D visualization, including Jupyter Notebooks, and loves to work on Blender projects when the goal is to make something look aesthetically pleasing! Besides that, he also knows his way around Unity and Unreal Engine.

"},{"location":"overview/about/#ben-de-vries","title":"Ben de Vries","text":"

Ben de Vries has a PhD in Astrophysics from KU Leuven. He joined SURF in 2019. He focuses on 2D/3D visualization projects using Blender, Unity and general 3D programming.

"},{"location":"overview/conventions/","title":"Text conventions","text":"

The conventions on these pages follow those used in the official Blender documentation as much as possible:

  • Keyboard and mouse actions, menu names, literal text to enter, etc. are shown in monospaced bold, e.g. X or Shift-MMB
  • LMB = left mouse button, MMB = middle mouse button, RMB = right mouse button, Wheel = scrolling the mouse wheel
  • Menu actions are shown as View > Cameras > Set Active Object as Camera, for View menu, Cameras submenu, \"Set Active Object as Camera\" option.
"},{"location":"overview/conventions/#exercises","title":"Exercises","text":"

We highlight exercise sections by prefixing their titles with a \ud83d\udcbb symbol.

"},{"location":"overview/introduction/","title":"Introduction","text":"

This Blender course consists of two parts, each taught separately online over a number of weeks:

  • In the Basics part we assume no prior knowledge of Blender. We will introduce Blender from the ground up, starting with the user interface and basic functionality. We cover the 3D scene, cameras, lights and materials and some basic mesh editing and animation.

    It helps to have some familiarity with basic 3D graphics concepts, such as 3D geometry, transformations and rendering. But if not, you will probably pick those up quite quickly during the course.

  • In the Advanced part of the course, we assume participants already have basic knowledge of Blender, preferably by following our basics course. We assume participants are familiar with the Blender user interface, basic functionality and concepts like the 3D scene, cameras, lights, materials and some basic mesh editing and animation.

    The advanced part goes into detail on the Python API for scripting, node-based materials, mesh editing and animation. The main goal of the Advanced course is for you to realize your own project with Blender, based on data you choose.

"},{"location":"overview/introduction/#context","title":"Context","text":"

This course is aimed at scientists and researchers of all levels. We don't make many assumptions on use cases for Blender, but do assume the context to be an academic setting. So we won't go into creating visual effects for putting a massive CGI tornado in your backyard that scoops up your neighbours. But if you happen to write a tornado simulation for your research we will be more than happy to see how we can use Blender to make attractive visuals of the data.

This doesn't mean that we assume Blender is only applied to existing scientific data. Sometimes certain concepts are best explained by creating a 3D scene, say to produce a nice-looking cover image for your PhD thesis, or to illustrate or visualize a certain concept.

From previous editions of the course we know many participants bring their own data and want to apply Blender to it. We encourage you to do that as well, as it will also help in providing some focus to your use of Blender.

"},{"location":"overview/introduction/#blender-version","title":"Blender version","text":"

Update in progress

We are currently (Q4 2023) in the process of updating all the course material to Blender 3.6

We currently use Blender 3.1 for this course and the materials provided.

Blender as a software package is a fast moving target, usually with lots of shiny new features and bug fixes in each release (and multiple releases per year). This is great, of course, but with each release usually also a lot of small tweaks and improvements are made, especially in the user interface and workflow.

We originally planned to only base this course on the Blender LTS (Long-Term Support) releases, which remain more-or-less unchanged regarding UI and features for roughly 2 years. But there have been some major improvements in certain versions that would only become available in the next LTS release much later. Hence, we chose to update the course more regularly.

Course videos using previous Blender versions

Some of the videos used in the course might still show an earlier Blender version. In those cases we have estimated that the video is still (largely) up-to-date and have chosen not to update the video, as this is quite time-consuming.

Specifically for Linux users that use their Linux distribution's package of Blender

Sometimes the Blender package from a distro gets built with slightly different versions of software libraries than the official Blender distribution. This is known to sometimes cause different behaviour, or even bugs, for example in the handling of video files by the FFmpeg library. In case you find strange issues or bugs with your distro's Blender, you might want to try downloading the official Blender binaries to see if that fixes those issues.

"},{"location":"overview/introduction/#issues-with-course-materials","title":"Issues with course materials","text":"

We try to keep this course up to date to match the specific version mentioned above. But we might have missed small things. If so, please let us know through Github by reporting an issue.

If you don't have a Github account, or would rather not create one, then telling us through Discord is fine as well.

"},{"location":"overview/introduction/#prerequisites","title":"Prerequisites","text":"

You will need:

  • A system (PC or laptop) to work on. This can be a Linux, macOS or Windows system. It is preferred to use a system with a somewhat recent GPU (at most 10 years old) with working OpenGL 4.3 support. See the section "Hardware Requirements" on this page for the official requirements for running Blender.
  • Blender 3.6 installed on the above system. You can download it from here, or you can use your system package manager to install it.

    Warning

    It is in general not recommended to use a wildly different Blender version for this course, due to possible mismatches in the user interface and functionality with the course material. A different patch release, e.g. 3.6.1 instead of 3.6.0, should not cause issues, but a different minor or major release might come with significant changes.

  • Please test the Blender installation before the course starts using the instructions sent by e-mail. This will tell you if Blender is working correctly and can save you (and us) time fixing any system-related issues during the course period.

Recommended:

  • Using a 3-button mouse is preferred, as not all Blender functionality is easily accessible through a 2-button mouse or a laptop track-pad.
"},{"location":"overview/introduction/#feedback","title":"Feedback","text":"

We will ask for feedback in the online sessions, but if you have remarks then please let us know. You can do this either through Github by reporting an issue, or in the Discord sessions.

"},{"location":"overview/schedule/","title":"Schedule","text":"When What Where Purpose Mon 04-12-23 \u2022 10:00 - 11:30 Basics session #1 Online Intro to the course, getting to know each other Mon 11-12-23 \u2022 10:00 - 11:30 Basics session #2 Online Feedback on first week, Q&A Mon 18-12-23 \u2022 10:00 - 11:30 Basics session #3 Online Feedback on course, Q&A, closing"},{"location":"overview/setup/","title":"Course setup","text":"

Course period

Although this course material is available online at any time, we only provide the support mentioned at scheduled course periods throughout the year. Please check the EuroCC Training Agenda when the next Blender course is scheduled.

We use a combination of different media within the course, but the basis is for you to follow the training at your own pace over a period of two weeks. During this period we provide support where needed.

The online material consists of:

  • Videos that introduce and demonstrate new concepts and features within Blender.
  • Slides (also presented as part of the videos) for explanations. These are basically the presentations we would otherwise give in a plenary session.
  • Exercises for you to explore new topics and to train your skills

We have scheduled a few short plenary online sessions in the course period to provide general feedback and/or guidance.

"},{"location":"overview/setup/#support","title":"Support","text":"

During the course period we provide support through our Discord server, see this page. On Discord there's a plenary chat channel, but also the possibility to have a 1-on-1 video chat in cases where we need to look more closely over your shoulder to solve a particular issue.

"},{"location":"overview/setup/#data-files","title":"Data files","text":"

Most of the exercises require you to load a Blender scene file that we provide. These files can be found at https://edu.nl/8n7en.

It is best to download the full content of the share to your local system using the Download button in the upper-right.

This share contains:

  • data - Blender files (and other data) for the assignments, split into basics and advanced parts, with a sub-directory per chapter
  • slides - The slides (in PDF)
  • walkthroughs - Some of the files used in the videos, again split by basics and advanced
  • cheat-sheat-3.1.pdf - A 2-page cheat sheet with often-used operations and their shortcuts
"},{"location":"overview/setup/#time-investment","title":"Time investment","text":"

The precise amount of time needed to follow this course depends largely on how much effort you devote to each topic, your available time, your learning pace, etc. However, the in-person course setup we used in previous years was a full-day course (with quite a high pace).

For the Basics course the time spent on the different subjects and their assignments in that setup is shown below. This might give you some idea of the relative depth of the topics.

| Topic | Time in schedule (previous in-person course) | Videos (this course) |
| --- | --- | --- |
| Introduction | 30 minutes | 5 minutes |
| Blender basics | 120 minutes | 45 minutes |
| Importing data | 30 minutes | 30 minutes |
| Rendering, lighting & materials | 105 minutes | 65 minutes |
| Simple mesh editing | 30 minutes | 20 minutes |
| Basic animation | 45 minutes | 35 minutes |

For the Advanced course it is hard to give a general indication of the expected time investment needed for the course. It depends partially on your own goals and ambitions for the main task: the project of visualizing your own data in the way you see fit.

In terms of topics the Advanced materials and Animation chapters are relatively straightforward and can probably be completed in a day. In contrast, Python scripting in Blender is a very extensive topic and can end up taking a lot of time if you want to work with the more complex parts of the API.

"},{"location":"overview/support/","title":"Support","text":"

Support hours

We will be active on Discord during office hours (CET time zone) and will try to also be on-line outside of those hours. Note that this is all on a best-effort basis.

Detailed interaction and support during the course period is provided through our Discord server. Here you can ask questions by chat, upload an image or (if needed) start a video session or share your screen with one of us.

Depending on the course you're following (basics or advanced) you need to use the category called BASICS BLENDER COURSE or ADVANCED BLENDER COURSE. Within these categories you will find two support channels:

  • A shared text chat channel (e.g. 2023-12-blender-basics) for interacting with the course teachers and other course participants. Here you can ask questions, show your work, or anything else you feel like sharing.
  • A video channel (video channel), in case we want to share something through Discord

For one-on-one contact, including the option for screen sharing, right-click on one of our names as shown in the picture above and pick either the button for voice chat or video chat.

"},{"location":"references/cheat_sheet/","title":"Cheat sheet","text":"

With this course we provide a 2-page cheat sheet that lists basic and often-used operations and their shortcut keys. It also includes a summary of major interface elements.

The cheat sheet can be found here as a double-sided PDF, which can easily be printed.

"},{"location":"references/community/","title":"Community resources","text":"

On blenderartists.org lots of Blender users and artists hang out. There you can ask questions, get feedback, show off your work or browse the vast amount of knowledge, tips and Blender renderings in the forums.

BlenderNation gathers information on different topics and includes video tutorials, blog posts on art created with Blender and a lot more.

The Blender subreddit contains many different posts, ranging from simple questions to artists showing off their amazing work.

Well-known artists and gurus working with Blender are:

  • Jan van den Hemel shares many tips and tricks through Twitter, both on Blender usage as well as making a scene look a certain way. He also publishes these tricks in an e-book.
  • Andrew Price (twitter) aka \"Blender Guru\" provides many cool tutorials on https://www.blenderguru.com/ and his YouTube channel. He is well-known for a multi-part tutorial series on modeling a realistic donut!
  • Gleb Alexandrov (twitter and twitter) aka "Creative Shrimp" has some very creative and inspirational tutorials on his YouTube channel.
  • Ian Hubert (YouTube and twitter), famous for his Lazy tutorials (very efficient 1 minute tutorials), has videos on advanced green screen techniques and VFX in Blender.
  • Simon Thommes (twitter and YouTube) is a materials wizard, he is able to create complex geometry out of one cube or sphere with just the Shader editor.
  • Steve Lund has some great Blender tutorials on his YouTube channel.
  • Zach Reinhardt has some great modeling, texturing and VFX tutorials on his YouTube channel
  • Peter France is the Blender artist at the Corridor Crew which just started his own YouTube channel with some instructive tutorials.
  • YanSculpts does not fit this course material per se, but it goes to show how versatile Blender can be: this artist creates amazing sculptures in Blender and shows the process on his YouTube channel.
  • Josh Gambrell shares a lot of tips and tricks for advanced mesh editing on his YouTube channel (mostly hard-surface modeling).
"},{"location":"references/interface/","title":"User Interface elements","text":"

The default layout of the Blender user interface is shown below. Note that the layout is fully configurable.

Scene statistics

By default the status bar at the bottom only shows the Blender version number. You can add extra statistics, such as the number of 3D objects in the scene and memory usage, in the preferences.

You can either right-click on the status bar to enable the display of extra values, or use the application menu Edit > Preferences, select the Interface tab and, in the Editors > Status Bar section, check all the marks (Scene Statistics, Scene Duration, System Memory, Video Memory, Blender Version). The sketch below shows the same settings changed from Python.
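If you script Blender, the same preferences can be toggled through the Python API. A minimal sketch, assuming the Blender 3.x property names on PreferencesView:

```python
import bpy

view = bpy.context.preferences.view

# Status bar toggles (Edit > Preferences > Interface > Editors > Status Bar)
view.show_statusbar_stats = True           # Scene Statistics
view.show_statusbar_scene_duration = True  # Scene Duration
view.show_statusbar_memory = True          # System Memory
view.show_statusbar_vram = True            # Video Memory
view.show_statusbar_version = True         # Blender Version
```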

"},{"location":"references/interface/#editor-type-menu","title":"Editor type menu","text":"

The yellow highlights indicate the editor types often used in this course.

"},{"location":"references/official/","title":"Official sources","text":"

The official home for Blender is blender.org

"},{"location":"references/official/#manuals","title":"Manuals","text":"

The Blender Reference Manual for version 3.6 can be found here. The documentation on the Python API is here.

Access help from within Blender

You can open the Blender documentation pages from within Blender itself, using the options in the Help menu.

"},{"location":"references/official/#demo-files","title":"Demo files","text":"

Official demo files showing off lots of cool features and scenes can be found here, including the scene files used to render the splash images of different Blender versions.

"},{"location":"references/official/#blender-development-and-news","title":"Blender development and news","text":"

If you are interested in following recent development in Blender then the weekly Blender Today Live sessions on YouTube are a good resource.

Videos on lots of different topics, including videos from the yearly Blender Conference, can be found on the official Blender YouTube channel.

Blender has official accounts on Mastodon and Twitter/X. The hashtag to use for Blender is #b3d (although sometimes also #blender).

"},{"location":"references/official/#mastodon","title":"Mastodon","text":"

On Mastodon the official account is @blender@mastodon.social.

"},{"location":"references/official/#twitterx","title":"Twitter/X","text":"

On Twitter you can follow @Blender for official Blender news or @BlenderDev for more in-depth development information.

"},{"location":"references/scene/","title":"Scene resources (3D models, materials, textures)","text":"

Here we list a number of online resources for 3D models, textures, shaders, etc.

In general, certain 3D models might be free to download, while others might only be available as paid downloads (usually for a small amount). Usually, the nicer the 3D model, the higher the cost. Also, different licenses are used for the models; these describe how you may use a model and what attribution you need to give when using it.

"},{"location":"references/scene/#examples","title":"Examples","text":"
  • Blender provides a set of demo files, either made by artists or to demonstrate new features. They can be found here.
"},{"location":"references/scene/#3d-models","title":"3D Models","text":"
  • Released together with Blender 3.6 an asset bundle with various human base meshes was made available. The assets can be found here.
  • Turbosquid is one of the oldest 3D model websites and provides models on all sorts of topics, some free, some paid.
  • Sketchfab hosts a large collection of 3D models from many different categories. Many 3D models are textured and some are even animated.
  • 3D Model Haven distributes freely usable 3D models, many of them textured. It is not as extensive as other websites, but the upside is that all models can be freely used.
  • CGTrader also hosts many 3D models, some of them free, some paid
  • There's a section on BlenderNation where Blender models are shared. Again, some of these might be free, others will involve some payment.
  • BlenderMarket contains a section with 3D models
  • Quixel's Megascans is a great source for 3D models and textures; the assets can be used for free when attached to an Epic account and used only in an Unreal Engine application. It's great for personal use, but if you publish anything containing a Quixel asset outside of Unreal Engine you have to pay for the asset.
"},{"location":"references/scene/#textures-and-images","title":"Textures and images","text":"
  • Texture Haven provides textures to be used in materials and shaders. All textures available are free.
  • CC0 Textures has many high-quality textures
  • BlenderMarket has a section with shaders, materials and textures.
  • HDRI Haven is similar to Texture Haven, but contains many freely available HDRI 360 images that can be used for realistic environment lighting in Blender
  • Poliigon, whose CEO is the Blender Guru himself, has some great-looking free samples and otherwise high-quality paid textures.
  • textures.com has some high-quality, high-resolution, movie-grade textures under a paid subscription or credit-based payment model.
"},{"location":"references/scene/#blenderkit","title":"BlenderKit","text":"

BlenderKit is an online repository of materials, 3D models and a few other things. It used to come bundled with Blender as an add-on, but since Blender 3.0 this is no longer the case. You need to download and install the add-on yourself, for which instructions can be found here.

When the add-on is installed and enabled it provides some extra elements in the Blender interface for searching for, say, a material or 3D model by name, which can then easily be used in a Blender scene:

Note that many of the assets in BlenderKit are free, but some are only available by buying a subscription.

The add-on has quite a few options and performs certain operations that you would otherwise do manually or maybe not use at all. As such, it can set up the scene in more exotic ways, for example by linking to another Blender file. Also, the materials provided by BlenderKit can use pretty complex shader graphs, involving multiple layers of textures, or advanced node setups.

Warning

When applying a BlenderKit material on your own object the rendering might not look like the material preview in all cases. Especially use of displaced materials involves specific settings for the Cycles renderer and use of subdivision on the object.

Warning

Textures from BlenderKit are by default stored in a separate directory on your system (~/blenderkit_data on Linux). There is an option to pack the textures within the Blender file, making it larger in size but also completely independent of any external files, which is useful if you want to transfer the Blender file to a different system. The option for packing files is File > External Data > Pack All into .blend.
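Packing is also available as an operator from Python, which can be handy when preparing many files for transfer. A minimal sketch; the save path is a hypothetical example:

```python
import bpy

# Pack all external files (such as textures) into the .blend file
bpy.ops.file.pack_all()

# Save the now self-contained file
bpy.ops.wm.save_mainfile(filepath="/tmp/packed_scene.blend")  # hypothetical path
```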

"},{"location":"news/archive/2023/","title":"2023","text":""}]} \ No newline at end of file diff --git a/sitemap.xml b/sitemap.xml index 760bd8e..4000a44 100644 --- a/sitemap.xml +++ b/sitemap.xml @@ -2,357 +2,357 @@ https://surf-visualization.github.io/blender-course/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/privacy/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/advanced/introduction/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/advanced/advanced_materials/advanced_materials_assignment/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/advanced/advanced_materials/introduction/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/advanced/advanced_materials/node-wrangler/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/advanced/advanced_materials/vertex_colors/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/advanced/animation/2_assignment_cars/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/advanced/animation/3_assignment_flipbook/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/advanced/animation/introduction/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/advanced/animation/shape_keys/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/advanced/final_project/final_project/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/advanced/mesh_editing/introduction/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/advanced/mesh_editing/mesh_editing_assignment/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/advanced/python_scripting/1_api_basics/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/advanced/python_scripting/2_accessing_data/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/advanced/python_scripting/3_geometry_colors_and_materials/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/advanced/python_scripting/4_volumetric_data/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/api/10000_foot_view/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/api/bpy_data_and_friends/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/api/custom_properties/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/api/data_block_users_and_gc/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/api/materials/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/api/meshes/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/api/object_transformations/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/api/often_used_values_and_operations/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/api/operators/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/api/parenting/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/api/selections/ - 2023-11-24 + 
2023-11-27 daily https://surf-visualization.github.io/blender-course/basics/animation/everything/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/basics/animation/example_flipbook_animation/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/basics/animation/exercise_manual_camera_orbit/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/basics/animation/exercise_parented_camera_orbit/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/basics/animation/exercise_track_to/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/basics/animation/introduction/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/basics/animation/tradeoffs_settings_output/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/basics/blender_fundamentals/1_assignment_interaction_selections/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/basics/blender_fundamentals/2_assignment_transformations/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/basics/blender_fundamentals/3_assignment_camera_and_views/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/basics/blender_fundamentals/avoiding_data_loss/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/basics/blender_fundamentals/cameras_and_views/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/basics/blender_fundamentals/first_steps/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/basics/blender_fundamentals/introduction/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/basics/blender_fundamentals/objects_3d_cursor_undo/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/basics/blender_fundamentals/scene_hierarchy/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/basics/blender_fundamentals/transformations/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/basics/blender_fundamentals/ui/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/basics/importing_data/exercise_vertex_colors/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/basics/importing_data/exercise_your_data/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/basics/importing_data/introduction/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/basics/rendering_lighting_materials/composition/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/basics/rendering_lighting_materials/gpu_rendering/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/basics/rendering_lighting_materials/introduction/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/basics/rendering_lighting_materials/rlm_assignment/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/basics/simple_mesh_editing/introduction/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/basics/simple_mesh_editing/sme_assignment/ - 2023-11-24 + 2023-11-27 daily 
https://surf-visualization.github.io/blender-course/news/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/news/2023/09/19/new-courses-being-planned-for-q4-2023/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/news/2023/11/20/basics-course-starting-4-december-2023/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/overview/about/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/overview/conventions/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/overview/introduction/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/overview/schedule/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/overview/setup/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/overview/support/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/references/cheat_sheet/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/references/community/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/references/interface/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/references/official/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/references/scene/ - 2023-11-24 + 2023-11-27 daily https://surf-visualization.github.io/blender-course/news/archive/2023/ - 2023-11-24 + 2023-11-27 daily \ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz index 5613b3d..a0941c9 100644 Binary files a/sitemap.xml.gz and b/sitemap.xml.gz differ