I set out to explore how AI could transform the traditional video design process, specifically how tools like generative design and AI-driven animation could unlock new ways of creating.
The goal was to deliver Instagram-friendly reels that demonstrate and push the creative boundaries of what AI-assisted visuals can look like. Each video had its own hurdles, from handling unexpected “glitches” in the code to embracing the happy accidents that arise when human creativity meets machine learning.
This case study explores AI-assisted creative coding inside a 3D workflow using ChatGPT to translate cultural and procedural references into executable Python scripts for Blender. The focus was on building motion studies that are controllable, loopable, and visually intentional, while testing where AI speeds up iteration versus where human creative direction still matters most.
A secondary objective was render efficiency: using frame-interpolation tools to push stepped-rate renders toward smoother playback without re-rendering everything at full frame rate.
I started with either a cultural reference (artist-inspired visual language) or a procedural reference (algorithmic concepts like Perlin noise or fractal geometry). From there, I used ChatGPT to generate a first-pass Python script, imported it into Blender, and iterated through parameters (timing, scale, materials, lighting, camera) until the result felt cohesive and loop-ready.
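Bringing each first pass into Blender was the lightest step: paste it into the Text Editor, or point Blender at the saved file. A minimal sketch of the latter, assuming the generated script was saved locally (the path and filename below are placeholders):

# run inside Blender's Python console or Text Editor
script_path = "/path/to/light_study_v1.py"  # placeholder path to the ChatGPT-generated first pass
exec(compile(open(script_path).read(), script_path, "exec"))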
For render efficiency, I tested interpolation workflows that convert stepped/low-FPS outputs into smoother motion, reducing iteration time while keeping clips presentable for review and delivery.
Constraints first: lock in the loop duration, framing, and rules before aesthetic tuning (see the sketch after this list).
Iteration speed: small script edits gave fast visual feedback inside Blender.
Finish pass: interpolation used selectively to improve motion quality during look development.
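To make the constraints-first step concrete, here is a minimal setup sketch of the kind of values that get locked before any aesthetic tuning; the variable names and the vertical 1080×1920 framing are assumptions for illustration, not taken from the project files:

import bpy

# illustrative constraint block, locked before look development begins
FPS = 25            # delivery frame rate
LOOP_SECONDS = 15   # agreed loop duration
TOTAL_FRAMES = FPS * LOOP_SECONDS

scene = bpy.context.scene
scene.render.fps = FPS
scene.frame_start = 0
scene.frame_end = TOTAL_FRAMES

# assumed vertical framing for Instagram-friendly delivery
scene.render.resolution_x = 1080
scene.render.resolution_y = 1920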
A cultural-reference translation test: can AI interpret a recognizable visual language (minimal form, repetition, light-as-structure) and produce a Blender-ready script that captures the feel through procedural rules, without manual modeling or keyframing?
I prompted for a simple tube-light system with emissive materials and rhythmic motion, then iterated toward loopability and more controlled color transitions.
The goal was a reusable “light study” scaffold that could be scaled up and art-directed with parameters.

import bpy
import math

# Function to create a tube light with emissive material
def create_tube_light(name, location, color):
    bpy.ops.mesh.primitive_cylinder_add(vertices=32, radius=0.1, depth=2, end_fill_type='NGON', location=location, rotation=(math.pi/2, 0, 0))
    tube = bpy.context.object
    tube.name = name

    # Add emission shader
    mat = bpy.data.materials.new(name="EmissionMaterial_{}".format(name))
    mat.use_nodes = True
    mat.node_tree.nodes.remove(mat.node_tree.nodes.get('Principled BSDF'))

    emission = mat.node_tree.nodes.new(type='ShaderNodeEmission')
    emission.inputs["Strength"].default_value = 5
    emission.inputs["Color"].default_value = (*color, 1.0)

    output = mat.node_tree.nodes.get('Material Output')
    mat.node_tree.links.new(output.inputs[0], emission.outputs[0])

    tube.data.materials.append(mat)
    return tube, emission

# Create tube lights
num_tube_lights = 10
spacing = 3
tube_lights = []

# RGB colors for the tube lights
colors = [
    (1, 0, 0), (1, 0.5, 0), (1, 1, 0), (0.5, 1, 0),
    (0, 1, 0), (0, 1, 1), (0, 0.5, 1), (0, 0, 1)
]

# Animation settings
fps = 25
total_seconds = 15
total_frames = fps * total_seconds

# Wave settings
wave_length = total_frames  # A wave completes in 15 seconds
amplitude = 2

for i in range(num_tube_lights):
    x = i * spacing
    color = colors[i % len(colors)]

    tube_light, emission_node = create_tube_light("Tube_{}".format(i), (x, 0, 0), color)
    tube_lights.append((tube_light, emission_node))

# Animate tube lights in a wave pattern
for frame in range(total_frames + 1):
    bpy.context.scene.frame_set(frame)

    for i, (tube_light, emission_node) in enumerate(tube_lights):
        x = i * spacing

        # Move in a wave formation
        z = amplitude * math.sin(2 * math.pi * (frame / wave_length + x / (num_tube_lights * spacing)))
        tube_light.location.z = z
        tube_light.keyframe_insert(data_path="location", index=2, frame=frame)

        # Smoothly change color
        color_phase = 2 * math.pi * (i / num_tube_lights + frame / total_frames)
        r = math.sin(color_phase) * 0.5 + 0.5
        g = math.sin(color_phase + 2 * math.pi / 3) * 0.5 + 0.5
        b = math.sin(color_phase + 4 * math.pi / 3) * 0.5 + 0.5
        emission_node.inputs["Color"].default_value = (r, g, b, 1.0)
        emission_node.id_data.keyframe_insert(data_path='nodes["Emission"].inputs[0].default_value', frame=frame)

A procedural-reference translation test: turning the concept of Perlin noise into an art-directable motion system, with organic variation that stays controlled, loopable, and readable.
I used AI to generate a first-pass script built around noise-driven parameters (offsets, amplitude, speed), then tuned the system to keep motion smooth and compositionally consistent.
The aim was a repeatable emulation of “noise” that can be dialed in.

import bpy
import colorsys
import math
import bmesh
import random
import mathutils

# set frames and fps
bpy.context.scene.render.fps = 30
bpy.context.scene.frame_end = 450

# "noise" scale
NOISE_SCALE = 0.3

# keyframe a material's emission color at the given frame
def animate_material(mat, frame, color):
    emission_node = mat.node_tree.nodes.get('Emission')
    emission_node.inputs[0].default_value = (*color, 1.0)
    mat.node_tree.keyframe_insert(data_path='nodes["Emission"].inputs[0].default_value', frame=frame)

# function to create a circle with points and colors based on harmony
def create_colored_circle(name, points, radius, location, harmonies):
    # create mesh and object
    mesh = bpy.data.meshes.new(name)
    obj = bpy.data.objects.new(name, mesh)

    # link object to scene
    bpy.context.collection.objects.link(obj)

    # create bmesh
    bm = bmesh.new()

    spheres = []
    materials = []
    for i in range(points):
        # calculate position
        angle = 2.0 * math.pi * (i / points)
        pos = [radius * math.cos(angle), radius * math.sin(angle), 0]

        # create vertex
        v = bm.verts.new(pos)

        # create material
        mat = bpy.data.materials.new(name=name + str(i))
        mat.use_nodes = True
        mat.node_tree.nodes.remove(mat.node_tree.nodes.get('Principled BSDF'))

        emission_node = mat.node_tree.nodes.new(type='ShaderNodeEmission')
        emission_node.inputs[1].default_value = 1.0  # Strength

        output_node = mat.node_tree.nodes.get('Material Output')
        mat.node_tree.links.new(emission_node.outputs[0], output_node.inputs[0])

        materials.append(mat)

        # create sphere
        bpy.ops.mesh.primitive_uv_sphere_add(radius=0.05, location=pos)
        sphere = bpy.context.object
        sphere.data.materials.append(mat)
        spheres.append(sphere)

    # animate material colors
    for frame in range(bpy.context.scene.frame_end + 1):
        for i, mat in enumerate(materials):
            t = frame / bpy.context.scene.frame_end
            harmony = harmonies[i % len(harmonies)]
            hue = (t + harmony) % 1.0
            color = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
            animate_material(mat, frame, color)

    # animate sphere positions
    for frame in range(bpy.context.scene.frame_end + 1):
        for i, sphere in enumerate(spheres):
            t = frame / bpy.context.scene.frame_end
            # noise() expects a vector position
            sphere.location.z = mathutils.noise.noise(mathutils.Vector((t + i / points, 0.0, 0.0))) * NOISE_SCALE
            sphere.keyframe_insert(data_path="location", frame=frame)

    # update mesh
    bm.to_mesh(mesh)
    bm.free()

    # move circle to location
    obj.location = location

# create circles
create_colored_circle('circle1', 36, 1.0, [0, 0, 0], [i / 36 for i in range(36)])  # all colors
create_colored_circle('circle2', 20, 0.7, [0, 0, 1], [0, 0.25, 0.5, 0.75])  # double-split complementary
create_colored_circle('circle3', 12, 0.49, [0, 0, 2], [0, 0.33, 0.67])  # square

# refresh scene
bpy.context.view_layer.update()

A geometry-to-procedural test: translating a fractal/recursive concept into a scene script that generates the structure reliably, while keeping the inputs procedural and the output renderable and visually coherent.
I used AI to scaffold the generative logic, then iterated on depth, scale, and spacing to maintain readability. This was about proving that conceptual geometry can become a practical and repeatable motion study pipeline.

import bpy
import mathutils
import math

def create_pyramid(size, location):
    height = size  # Set height equal to the base length

    # Define vertices for an equilateral pyramid
    verts = [
        mathutils.Vector((-size/2, size*math.sqrt(3)/6, 0)),
        mathutils.Vector((size/2, size*math.sqrt(3)/6, 0)),
        mathutils.Vector((0, -size*math.sqrt(3)/3, 0)),
        mathutils.Vector((0, 0, height))
    ]

    edges = []
    faces = [(0, 1, 2), (0, 1, 3), (1, 2, 3), (2, 0, 3)]

    mesh_data = bpy.data.meshes.new("pyramid")
    mesh_data.from_pydata(verts, edges, faces)
    mesh_data.update()

    obj = bpy.data.objects.new("Pyramid", mesh_data)
    bpy.context.collection.objects.link(obj)
    obj.location = location

    return obj

def sierpinski_pyramid(level, size, location):
    if level == 0:
        return [create_pyramid(size, location)]

    half_size = size / 2
    height_offset = half_size
    offset = mathutils.Vector((0, 0, height_offset))

    pyramids = []
    for i in range(3):
        angle = i * math.radians(120)
        rotation_matrix = mathutils.Matrix.Rotation(angle, 4, 'Z')
        new_location = location + rotation_matrix @ mathutils.Vector((half_size, 0, 0))
        pyramids.extend(sierpinski_pyramid(level - 1, half_size, new_location))

    pyramids.extend(sierpinski_pyramid(level - 1, half_size, location + offset))
    return pyramids

# Clear existing mesh objects
bpy.ops.object.select_all(action='DESELECT')
bpy.ops.object.select_by_type(type='MESH')
bpy.ops.object.delete()

# Set the level of the fractal
level = 6  # Increase the fractal level to 6
size = 2.0
location = mathutils.Vector((0, 0, 0))

pyramids = sierpinski_pyramid(level, size, location)

# Set the start and end frame for the animation
start_frame = 10
end_frame = 300

# Animate each pyramid individually
for pyramid in pyramids:
    bpy.context.view_layer.objects.active = pyramid
    pyramid.select_set(True)  # origin_set acts on selected objects
    bpy.ops.object.origin_set(type='ORIGIN_CENTER_OF_MASS', center='BOUNDS')
    pyramid.select_set(False)

    # Animate the whole pyramid
    original_location = pyramid.location.copy()
    pyramid.location.z = 0
    pyramid.keyframe_insert(data_path="location", index=2, frame=start_frame)
    pyramid.location = original_location
    pyramid.keyframe_insert(data_path="location", index=2, frame=end_frame)

    # Animate individual vertices
    mesh = pyramid.data
    for vertex in mesh.vertices:
        original_z = vertex.co.z
        vertex.co.z = 0
        vertex.keyframe_insert(data_path="co", index=2, frame=start_frame)
        vertex.co.z = original_z
        vertex.keyframe_insert(data_path="co", index=2, frame=end_frame)

bpy.context.scene.frame_end = end_frame

A workflow optimization test: improving perceived smoothness and presentation quality without paying the full cost of high-FPS renders during iteration.
I tested interpolation tooling (FlowFrames) to convert lower-frame-rate renders into smoother motion for review and delivery. This is especially useful when iterating on look-dev, allowing for fast feedback first and polish later.
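FlowFrames handled the interpolation itself here, so no code was needed, but the same idea can be scripted. As a rough stand-in (not the tool used on this project), ffmpeg's motion-compensated minterpolate filter can retime a stepped preview to a higher frame rate; the file names and target rate below are placeholders:

import subprocess

# interpolate a low-FPS preview up to 60 fps with ffmpeg's motion-compensated filter
subprocess.run([
    "ffmpeg",
    "-i", "preview_12fps.mp4",                       # placeholder: stepped/low-FPS render
    "-filter:v", "minterpolate=fps=60:mi_mode=mci",  # motion-compensated interpolation
    "preview_60fps.mp4",                             # placeholder: smoother clip for review
], check=True)

The trade-off is the same as with FlowFrames: interpolation artifacts appear around fast, overlapping motion, so it works best as a finish pass rather than a substitute for rendering at the final frame rate.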