Implementing a Minimal 1v1 Camera in UE (akin to FramingTransposer + Composer in Cinemachine)

The camera system built into Unreal Engine provides only very basic functionality. It merely enables the camera to follow a target with a few adjustable parameters such as offset, camera distance, lag, etc. We would like a more powerful camera system, like the Cinemachine toolset in Unity, to facilitate creating more interesting and compelling visual experiences for gameplay. In most 3D adventure games, a 1v1 camera is required when you fight a boss: the camera should keep looking at the boss however you move and cast skills. Unfortunately, a 1v1 camera is not a built-in feature in UE, so we have to implement our own version.

In this post, I will share with you how to implement a minimal 1v1 camera in UE using only blueprints, based on some simple mathematics. If you are more familiar with coding, you can also implement this 1v1 camera with only a few lines of code.

Start with a Simple Case: Mathematical Derivation of Camera Location and Rotation

Constructing the equation

The two most crucial parts of any camera behavior are the location and rotation: the former determines where the camera is, and the latter controls where the camera looks.

Let us first begin with a simple case: assume the follow location is $F$, the look-at location is $A$, and the look-at point is fixed at the center of the screen. We introduce a Follow Screen X parameter (denoted by $s_x \in [-0.5, 0.5]$) controlling the relative x-axis offset of the follow point in screen space. If $s_x = 0$, the follow point will be at the center of the screen; if $s_x = 0.5$, the follow point will be at the rightmost position of the screen; if $s_x = -0.5$, it will be at the leftmost position of the screen. At this stage, we do not take Follow Screen Y into consideration, for simplicity.

To determine the camera location and rotation, consider a sphere centered at the origin (i.e., at the follow point). Assume the camera distance (the distance from the camera to the follow point) is $d$, the pitch angle is $\alpha$, and the yaw angle is $\beta$. We can write down the camera location before applying the follow point offset:

$$C_0 = \big(-d\cos\alpha\cos\beta,\ -d\cos\alpha\sin\beta,\ d\sin\alpha\big)$$

Note that $C_0$ is the raw camera location relative to the origin. The look-at direction, however, is $-C_0 / d$, and the actual camera location is $F + C_0$. To integrate $s_x$, we can assume the camera offset is $O$; hence, the final camera location is $C = F + C_0 + O$, and the look-at direction from the camera to $A$ is $A - C$. Because the pure translation $O$ does not change the orientation of the camera, the camera's look-at direction, as we have stated above, is still $-C_0 / d$. It is obvious that we have the following equation:

$$A - F - O = (1 - t_1)\,C_0$$

where $t_1$ is an unknown coefficient satisfying $t_1 > 1$, as we want the look-at target to be farther than the follow target from our camera. Before we go ahead and solve this equation, we should first determine $O$.

Determining $O$

We can easily express $O$ in terms of $d$ and the field of view $\theta$ of the camera. As shown in the following figure, we have:

$$W = d\tan(\theta / 2)$$

where $W$ is the world-space length of half the screen. Then, the offset amount is $2W|s_x|$ (note that when $|s_x| = 0.5$, the offset amount will be $W$), and the offset direction is opposite to the sign of $s_x$. That is, if $s_x > 0$, the camera will translate left; otherwise it will translate right.

Plugging in $W = d\tan(\theta/2)$, the signed offset magnitude along the camera's right direction will be $-2d\tan(\theta/2)\,s_x$.

A remaining question is: along which unit vector will this offset be applied? The answer is the camera's local right direction. It can be readily computed by taking the cross product between the camera's local unit forward vector $\hat{f} = -C_0/d = (\cos\alpha\cos\beta,\ \cos\alpha\sin\beta,\ -\sin\alpha)$ and the world-space up vector $\hat{u} = (0, 0, 1)$:

$$\hat{f} \times \hat{u} = \big(\cos\alpha\sin\beta,\ -\cos\alpha\cos\beta,\ 0\big)$$

Normalizing the result, and considering that UE's coordinate system is based on the left-hand rule (the standard cross product of forward and world up yields the left direction, so we negate it), the local unit right vector will be $\hat{r} = (-\sin\beta,\ \cos\beta,\ 0)$. Thus, the consequent offset vector is:

$$O = -2d\tan(\theta/2)\,s_x\,\hat{r} = 2d\tan(\theta/2)\,s_x\,\big(\sin\beta,\ -\cos\beta,\ 0\big)$$
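To make the algebra concrete, here is a minimal C++ sketch of this offset computation; the function name and parameters are illustrative, not part of the final implementation (Yaw in radians, FOV in degrees):

    // Minimal sketch of the screen-space X offset derived above.
    FVector ComputeScreenOffsetX(float CameraDistance, float HorizontalFOV,
                                 float FollowScreenX, float Yaw)
    {
        // Half-screen width in world space at the follow point: W = d * tan(theta / 2).
        float W = CameraDistance * FMath::Tan(FMath::DegreesToRadians(HorizontalFOV) / 2.0f);

        // Camera's local right vector for yaw beta: (-sin(beta), cos(beta), 0).
        FVector Right(-FMath::Sin(Yaw), FMath::Cos(Yaw), 0.0f);

        // Offset of magnitude 2 * W * |s_x|, directed opposite to the sign of s_x.
        return -2.0f * W * FollowScreenX * Right;
    }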

Determining $\alpha$ and $\beta$

With $O$ determined, we can now determine $\alpha$ and $\beta$. Expanding $A - F - O = (1 - t_1)\,C_0$ with $D = A - F$, we have:

$$D - O = (1 - t_1)\big(-d\cos\alpha\cos\beta,\ -d\cos\alpha\sin\beta,\ d\sin\alpha\big)$$

Simplifying it with $k = (t_1 - 1)\,d > 0$, we have:

$$D - O = \big(k\cos\alpha\cos\beta,\ k\cos\alpha\sin\beta,\ -k\sin\alpha\big)$$

which leads to three equations:

$$\begin{aligned} D_x - 2d\tan(\theta/2)\,s_x\sin\beta &= k\cos\alpha\cos\beta \quad (1) \\ D_y + 2d\tan(\theta/2)\,s_x\cos\beta &= k\cos\alpha\sin\beta \quad (2) \\ D_z &= -k\sin\alpha \quad (3) \end{aligned}$$

Rewriting (1) and (2), multiplied by $\sin\beta$ and $\cos\beta$ respectively, we have:

$$\begin{aligned} D_x\sin\beta - 2d\tan(\theta/2)\,s_x\sin^2\beta &= k\cos\alpha\cos\beta\sin\beta \\ D_y\cos\beta + 2d\tan(\theta/2)\,s_x\cos^2\beta &= k\cos\alpha\sin\beta\cos\beta \end{aligned}$$

The two right-hand sides are now identical, so subtracting the second line from the first eliminates both $k$ and $\alpha$. Combining (1) and (2) this way, we have:

$$D_x\sin\beta - D_y\cos\beta = 2d\tan(\theta/2)\,s_x$$

To note, when $\sin\beta = 0$ or $\cos\beta = 0$, the above equation also holds. Dividing by $\sqrt{D_x^2 + D_y^2}$ on both sides, we have:

$$\frac{D_x}{\sqrt{D_x^2 + D_y^2}}\sin\beta - \frac{D_y}{\sqrt{D_x^2 + D_y^2}}\cos\beta = \sin(\beta - \phi) = \frac{2d\tan(\theta/2)\,s_x}{\sqrt{D_x^2 + D_y^2}}$$

where $\cos\phi = D_x / \sqrt{D_x^2 + D_y^2}$ and $\sin\phi = D_y / \sqrt{D_x^2 + D_y^2}$.

The values $\cos\phi$ and $\sin\phi$ should be calibrated according to the signs of $D_x$ and $D_y$. It is particularly noteworthy that $\phi$ should satisfy:

  • $D_x > 0$ and $D_y > 0$: $\phi \in (0, \pi/2)$,
  • $D_x < 0$ and $D_y > 0$: $\phi \in (\pi/2, \pi)$,
  • $D_x < 0$ and $D_y < 0$: $\phi \in (-\pi, -\pi/2)$,
  • $D_x > 0$ and $D_y < 0$: $\phi \in (-\pi/2, 0)$.

However, the value $\phi_0 = \arcsin\big(D_y / \sqrt{D_x^2 + D_y^2}\big)$ returned by $\arcsin$ lies within $[-\pi/2, \pi/2]$ and should be further altered according to the observation above. It can be easily concluded that:

$$\phi = \begin{cases} \phi_0 & D_x \ge 0 \\ \pi - \phi_0 & D_x < 0,\ D_y \ge 0 \\ -\pi - \phi_0 & D_x < 0,\ D_y < 0 \end{cases}$$

For $\beta - \phi$, the camera should stay within a quarter turn of the follow-to-lookat direction, so we take the principal branch $\beta - \phi = \arcsin\Big(\frac{2d\tan(\theta/2)\,s_x}{\sqrt{D_x^2 + D_y^2}}\Big)$, whose sign follows the sign of $s_x$. Summing up the corrected values, we reach the true camera yaw $\beta = \phi + \arcsin\Big(\frac{2d\tan(\theta/2)\,s_x}{\sqrt{D_x^2 + D_y^2}}\Big)$.
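In code, the quadrant calibration of $\phi$ collapses into a single atan2 call. A minimal sketch, with illustrative names and angles in radians:

    float ComputeCameraYaw(const FVector& D, float CameraDistance,
                           float HorizontalFOV, float FollowScreenX)
    {
        // phi = atan2(D_y, D_x) performs the quadrant calibration automatically.
        float Phi = FMath::Atan2(D.Y, D.X);

        // sin(beta - phi) = 2 * d * tan(theta / 2) * s_x / |D_xy|.
        float PlanarDistance = FMath::Sqrt(D.X * D.X + D.Y * D.Y);
        float S = 2.0f * CameraDistance
                * FMath::Tan(FMath::DegreesToRadians(HorizontalFOV) / 2.0f)
                * FollowScreenX / PlanarDistance;

        // Assumes |S| <= 1; the adaptive screen offset below guarantees this.
        return Phi + FMath::Asin(FMath::Clamp(S, -1.0f, 1.0f));
    }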

The second problem, which is very palpable, is that $\sin(\beta - \phi)$ can be smaller than $-1$ or larger than $1$ when the denominator $\sqrt{D_x^2 + D_y^2}$, the distance between the follow point and the look-at point in the XY plane, is small. If this value exceeds the bounds and we simply clip it, the resulting yaw and the subsequent pitch will be incorrect, producing odd camera artifacts.

Here are several potential workarounds to deal with this issue:

  • Dynamically adapt $s_x$ to ensure the absolute value of $\sin(\beta - \phi)$ is no larger than $1$.
  • Introduce the concept of a soft zone and apply damping, just as Cinemachine does.

Method two might be the better way because it produces smoother results. We will get to it in later sections of this post; for now we focus on implementing method one, which is much simpler to achieve using blueprint. All we need to do is set a BeginAdaptDistanceX and an EndAdaptDistanceX, and adapt $s_x$ when the follow-lookat XY distance $r$ is within the range $[\text{EndAdaptDistanceX}, \text{BeginAdaptDistanceX}]$. More concretely, when the distance is within the range, the new $s_x$ will be $s_x \cdot \frac{r - \text{EndAdaptDistanceX}}{\text{BeginAdaptDistanceX} - \text{EndAdaptDistanceX}}$, with the scale factor clamped to $[0, 1]$. When both values are set to zero, no scaling is applied; when EndAdaptDistanceX is negative, the scaled $s_x$ keeps a minimum absolute value greater than zero even at $r = 0$. This is very effective and flexible for avoiding the zero division that shows up in the pitch formula below. Directly clamping $\sin(\beta - \phi)$ to $[-1, 1]$ will not bring about satisfactory results.
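A minimal sketch of this adaptive scaling, assuming illustrative parameter names (the code implementation later achieves the same mapping with FMath::GetMappedRangeValueClamped):

    float GetAdaptiveScreenX(float ScreenX, float PlanarDistance,
                             float BeginAdaptDistanceX, float EndAdaptDistanceX)
    {
        // Degenerate range (e.g. both values zero): apply no scaling at all.
        if (BeginAdaptDistanceX <= EndAdaptDistanceX) return ScreenX;

        // Maps distance from [BeginAdaptDistanceX, EndAdaptDistanceX] to [1, 0]:
        // full offset when the targets are far apart, shrinking as they approach.
        // A negative EndAdaptDistanceX keeps the scale strictly positive at r = 0.
        float Scale = (PlanarDistance - EndAdaptDistanceX)
                    / (BeginAdaptDistanceX - EndAdaptDistanceX);
        return ScreenX * FMath::Clamp(Scale, 0.0f, 1.0f);
    }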

Note that we leave out the case $D_x = D_y = 0$, since most of the time the player character and the enemy will not be on the same vertical axis.

Once we have $\beta$, we can easily find out $\alpha$. By combining $\cos\beta \times (1) + \sin\beta \times (2)$, in which the offset terms cancel, with (3), we have:

$$\tan\alpha = \frac{-D_z}{D_x\cos\beta + D_y\sin\beta}$$

You may ask: what will happen if the denominator $D_x\cos\beta + D_y\sin\beta = \sqrt{D_x^2 + D_y^2}\,\cos(\beta - \phi)$ is close to zero? This will not be a problem, as we have already dynamically adapted $s_x$ with respect to the follow-lookat distance: as long as the values of BeginAdaptDistanceX and EndAdaptDistanceX are properly set, $|\sin(\beta - \phi)|$ stays away from $1$, and hence $\cos(\beta - \phi)$ stays away from $0$.

What are $\alpha$ and $\beta$ when $s_x = 0$? From the yaw equation, we know $\beta = \phi$; then, from the pitch equation and using the fact that $D_x\cos\phi + D_y\sin\phi = \sqrt{D_x^2 + D_y^2}$, we have $\tan\alpha = -D_z / \sqrt{D_x^2 + D_y^2}$: the camera simply looks straight at the target.

Final camera location and rotation

To summarize, we first compute $\sin(\beta - \phi)$ and $\phi_0$ with the adaptively scaled $s_x$:

$$\sin(\beta - \phi) = \frac{2d\tan(\theta/2)\,s_x}{\sqrt{D_x^2 + D_y^2}}, \qquad \phi_0 = \arcsin\frac{D_y}{\sqrt{D_x^2 + D_y^2}}$$

Then we calibrate $\phi$ according to the signs of $D_x$ and $D_y$:

$$\phi = \begin{cases} \phi_0 & D_x \ge 0 \\ \pi - \phi_0 & D_x < 0,\ D_y \ge 0 \\ -\pi - \phi_0 & D_x < 0,\ D_y < 0 \end{cases}$$

The addition is the yaw we want: $\beta = \phi + \arcsin\big(\sin(\beta - \phi)\big)$. To remedy the issue of overflow when $\sqrt{D_x^2 + D_y^2}$ is small, we impose a $[\text{EndAdaptDistanceX}, \text{BeginAdaptDistanceX}]$ range in which the raw screen offset is dynamically adjusted, and thereby achieve smooth camera motion.

The camera pitch can be readily computed:

$$\alpha = \arctan\frac{-D_z}{D_x\cos\beta + D_y\sin\beta}$$

Eventually, we can compute the camera location and rotation. The location is $F + C_0 + O$, and the rotation can be set to look at $A$.
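Putting the whole derivation together, here is a minimal, hypothetical all-in-one solve in C++; the blueprint shown below does the same thing node by node, and all names here are illustrative:

    // F = follow position, A = look-at position, d = camera distance,
    // ScreenX = adaptively scaled Follow Screen X, FOV in degrees.
    void SolveCamera(const FVector& F, const FVector& A, float d, float HorizontalFOV,
                     float ScreenX, FVector& OutLocation, FRotator& OutRotation)
    {
        FVector D = A - F;
        float PlanarDistance = FMath::Sqrt(D.X * D.X + D.Y * D.Y);
        float TanHalfFOV = FMath::Tan(FMath::DegreesToRadians(HorizontalFOV) / 2.0f);

        // Yaw: beta = phi + asin(2 * d * tan(theta / 2) * s_x / |D_xy|).
        float Phi = FMath::Atan2(D.Y, D.X);
        float Beta = Phi + FMath::Asin(
            FMath::Clamp(2.0f * d * TanHalfFOV * ScreenX / PlanarDistance, -1.0f, 1.0f));

        // Pitch: alpha = atan(-D_z / (D_x * cos(beta) + D_y * sin(beta)));
        // the denominator stays positive once the adaptive scaling is in effect.
        float Alpha = FMath::Atan2(-D.Z, D.X * FMath::Cos(Beta) + D.Y * FMath::Sin(Beta));

        // Location = F + C0 + O.
        FVector C0(-d * FMath::Cos(Alpha) * FMath::Cos(Beta),
                   -d * FMath::Cos(Alpha) * FMath::Sin(Beta),
                    d * FMath::Sin(Alpha));
        FVector O = 2.0f * d * TanHalfFOV * ScreenX
                  * FVector(FMath::Sin(Beta), -FMath::Cos(Beta), 0.0f);
        OutLocation = F + C0 + O;

        // Rotation: simply look at A.
        OutRotation = UKismetMathLibrary::FindLookAtRotation(OutLocation, A);
    }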

Here is a video showcasing this camera with different parameter values.

Here is the blueprint I made to implement this camera. It is a little disorganized for now; I will make it more readable and extensible in the next sections.

In the main graph, we get the camera yaw and pitch, compute the camera location / rotation, and set them each frame.

The GetCameraYawAndPitch function starts by applying the follow position and look-at position offsets. Note that the follow position offset is based on the follow target's local coordinates.

Then, we store temporary variables used in the later steps, such as the screen offset term $2d\tan(\theta/2)\,s_x$ and the planar distance $\sqrt{D_x^2 + D_y^2}$.

Last, we compute yaw and pitch, and return them.

The GetT1 function computes and returns $t_1$. Inside it, we dynamically scale $s_x$ according to the current follow-lookat distance (in the XY plane).

The GetCameraLocation function sums up the three components $F$, $C_0$ and $O$.

The GetCameraRotation function forces the camera to orient itself toward the look-at target.

Finer Control Over The Screen Space

To get finer control over the follow point and look-at point in screen space, we would like to introduce three more parameters: Follow Screen Y, Lookat Screen X and Lookat Screen Y, respectively denoting the Y-axis screen position of the follow point, the X-axis screen position of the look-at point, and the Y-axis screen position of the look-at point. We first talk about Follow Screen Y.

Determining $O_v$

Assume $O_v$ is the vertical offset applied to the camera, and $s_y$ is the value of Follow Screen Y. When $s_y = -0.5$, the follow point lies at the bottom edge of the screen, and at the top edge when $s_y = 0.5$. Following what we did for $s_x$, we can easily express $O_v$ as:

$$O_v = -\frac{2d\tan(\theta/2)}{\rho}\,s_y\,\hat{u}_c$$

where $\hat{u}_c = (\sin\alpha\cos\beta,\ \sin\alpha\sin\beta,\ \cos\alpha)$ is the camera's local up vector, obtained from the cross product of the camera's local forward vector and the camera's local right vector (note again that UE uses the left-hand rule), and $\rho$ is the camera's aspect ratio (usually 16:9). The division by $\rho$ appears because UE's field of view is horizontal, so the half-screen height is $W / \rho$.

Determining the new $\beta$ and $\alpha$

Going back to $A - F - O = (1 - t_1)\,C_0$ and adding $O_v$, we have:

$$A - F - O - O_v = (1 - t_1)\,C_0$$

Then, letting $Q = \frac{2d\tan(\theta/2)}{\rho}\,s_y$, we have:

$$\begin{aligned} D_x - 2d\tan(\theta/2)\,s_x\sin\beta + Q\sin\alpha\cos\beta &= k\cos\alpha\cos\beta \quad (1') \\ D_y + 2d\tan(\theta/2)\,s_x\cos\beta + Q\sin\alpha\sin\beta &= k\cos\alpha\sin\beta \quad (2') \\ D_z + Q\cos\alpha &= -k\sin\alpha \quad (3') \end{aligned}$$

Interestingly, the yaw equation resulting from combining (1') and (2') remains the same as before, because the new $Q\sin\alpha$ terms cancel in the $\sin\beta \times (1') - \cos\beta \times (2')$ combination. So we do not need to change the way we compute $\beta$.

For $\alpha$, it's a little tricky. First, we rewrite (3') as $k\sin\alpha + Q\cos\alpha = -D_z$. Then we compute $\cos\beta \times (1') + \sin\beta \times (2')$, in which the $s_x$ terms cancel, giving $k\cos\alpha - Q\sin\alpha = m$, and we eliminate $k$ between the two:

$$m\sin\alpha + D_z\cos\alpha = -Q$$

(we let $m = D_x\cos\beta + D_y\sin\beta$). We can use the same technique we used for computing $\beta$ to solve $\alpha$. That will be:

$$\sin(\alpha + \psi) = \frac{-Q}{\sqrt{m^2 + D_z^2}}, \qquad \cos\psi = \frac{m}{\sqrt{m^2 + D_z^2}},\ \ \sin\psi = \frac{D_z}{\sqrt{m^2 + D_z^2}}$$

When $s_y = 0$, with the identity $\arcsin(-x) = -\arcsin x$, we have $\alpha = -\psi$, which recovers the earlier result $\tan\alpha = -D_z / m$. The only thing left is to determine the real signs of the two terms. Through experiment, we know both signs are negative, which means the final $\alpha$ should be:

$$\alpha = -\arcsin\frac{D_z}{\sqrt{m^2 + D_z^2}} - \arcsin\frac{Q}{\sqrt{m^2 + D_z^2}}$$
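A minimal sketch of this pitch solve under the assumptions above (illustrative names; Beta in radians, FOV in degrees):

    float ComputeCameraPitch(const FVector& D, float Beta, float d,
                             float HorizontalFOV, float ScreenY, float AspectRatio)
    {
        // m = D_x * cos(beta) + D_y * sin(beta).
        float M = D.X * FMath::Cos(Beta) + D.Y * FMath::Sin(Beta);

        // Q = 2 * d * tan(theta / 2) * s_y / aspect ratio.
        float Q = 2.0f * d * FMath::Tan(FMath::DegreesToRadians(HorizontalFOV) / 2.0f)
                * ScreenY / AspectRatio;

        float Hypot = FMath::Sqrt(M * M + D.Z * D.Z);

        // alpha = -asin(D_z / hypot) - asin(Q / hypot); both signs are negative.
        return -FMath::Asin(FMath::Clamp(D.Z / Hypot, -1.0f, 1.0f))
               -FMath::Asin(FMath::Clamp(Q / Hypot, -1.0f, 1.0f));
    }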

Besides, we also introduce BeginAdaptDistanceY and EndAdaptDistanceY, analogous to what we did for ScreenX, to dynamically scale $s_y$ when the follow position and the look-at position get close. In this way, we achieve smooth camera movement for both ScreenX and ScreenY.

But what about the look-at position...?

We introduce new quantities for the look-at point: $\Delta\beta$, a yaw angle based on the camera's local space and Lookat Screen X, and $\Delta\alpha$, a pitch angle based on the camera's local space and Lookat Screen Y. We adjust $\Delta\beta$ and $\Delta\alpha$ (i.e., rotate the camera in its own local coordinates) to accommodate look-at position manipulation in screen space.

Nonetheless, when biasing the look-at point in screen space, we encounter a problem. If we allow the camera to rotate in its own local coordinates, the values of $\alpha$ and $\beta$ computed above become wrong, because the local orientation of the camera significantly influences the follow position in screen space. Now that the determination of the follow offset and the determination of the camera's local orientation are entangled, establishing and solving the equation becomes very difficult, particularly if we want an explicit solution.

(*: There might be a nice explicit solution, but for now I haven't managed to work it out. Perhaps someday in the future I will take another shot.)

What we are going to do to mitigate this issue is to increment the camera position and orientation rather than hard-set them at their "correct" values. This is exactly what Cinemachine does for camera motion.

Emulate Cinemachine Using Incremental Motion and Damping

Our solution is to emulate Cinemachine in Unity by incrementally changing the camera position and rotation. This paradigm also enjoys the benefit of making damping easy to add. To increment camera motion, we only need to calculate the desired position and rotation, and interpolate between the current state and the desired state.

Let us go through the process by showing the blueprints.

High-level workflow and the Initialize function

The high-level steps are pretty simple: we first determine and set the camera rotation, then set the camera position. FirstFrame is a boolean variable used to indicate whether the current tick is the first frame of execution. If it is, no damping is applied.

The Initialize function integrates the follow offset and look-at offset to get the real follow position and look-at position.

Set camera rotation

The Set Camera Rotation function gets the delta rotation (after damping), and then rotates the camera accordingly.

Similarly, the Set Camera Position function gets the delta position (after damping), and then shifts the camera within its local reference frame, without changing the camera orientation.

The first part of the Get Delta Rotation function examines whether the camera is too close to the look-at target. If it is, the camera will not update its rotation.

Then, the second part of Get Delta Rotation computes the difference between the current rotation and the desired rotation, taking the given look-at screen offset into account.

Last, the third part of Get Delta Rotation optionally applies damping to the difference rotation and returns the damped result.

Going inside the Damp Rotation function, we find it separately damps each of the rotation components (roll, pitch and yaw), all using the Damper function.

The Damper function is a simple exponential decay operator that leaves only a negligible residual of its input after Damp Time. It can be formulated as:

$$\mathrm{Damper}(x, \Delta t) = x\,\big(1 - \varepsilon^{\Delta t / T}\big)$$

where $\varepsilon$ is the negligible residual, say $0.01$, and $T$ is the expected damp time.
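This is the same damper Cinemachine uses. A minimal sketch (my library's actual signature differs, but the formula is this one):

    // Returns the portion of Input that should be consumed this frame so that
    // only a residual fraction (e.g. 0.01) of it remains after DampTime seconds.
    float Damper(float Input, float DeltaTime, float DampTime, float Residual = 0.01f)
    {
        if (DampTime <= 0.0f || DeltaTime < 0.0f) return Input;
        return Input * (1.0f - FMath::Pow(Residual, DeltaTime / DampTime));
    }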

Set camera position

The Get Delta Position function has a similar workflow. It first computes the scaled follow screen offset, which we introduced in the first section (see here).

Then, it converts the follow position from world space to the camera's local space, which involves a little mathematics. Put simply, assume the camera's local forward vector is $\hat{f}$, its right vector is $\hat{r}$, its up vector is $\hat{u}$, and the follow position relative to the camera in world space is $p$. The local coordinates of $p$ would be:

$$p_{\mathrm{local}} = \big(\hat{f}\cdot p,\ \hat{r}\cdot p,\ \hat{u}\cdot p\big)$$

Local-space coordinates facilitate computing the difference between the current camera position and the desired position. The follow screen offset is also incorporated here.

Last, we damp the difference position and return the result. The Damp Position function manipulates all three Raw Delta Position components, instead of two as in Damp Rotation (where roll is always zero).

Result

OK, let us enjoy our achievements! We can freely play with the various parameters and see how the camera responds. You may notice that setting up only the follow damping leads to camera jitter, not drastic but still perceptible. A possible reason for this phenomenon is the unstable tick rate on my PC. We can, of course, increase the frame rate, but a more robust solution is to modify our damping algorithm. Rather than directly using DeltaTime as the damping step size, we can split DeltaTime into several sub-steps, simulate damping for each sub-step on top of the previous one, and finally obtain a much smoother damping result. This is exactly what Cinemachine does when DeltaTime is unstable.

Another potential enhancement is to add the concept of a Soft Zone, which defines a rectangular area in screen space within which the follow / look-at point is allowed to move around, while in the rest of the screen the follow / look-at point will never show up. In other words, the follow / look-at position is hard-restricted to the soft zone. This provides more flexible screen-space control over our points of interest.

(*: I've already added the soft zone and the improved damping algorithm to both the blueprint and code implementations. Feel free to use and modify them however you like.)

Complementary note

The improved damping algorithm is not difficult to implement. Suppose we want to split DeltaTime $\Delta t$ into $n$ equal sub-parts, each of which is $\Delta t / n$. The per-step decay factor is then $\lambda = \varepsilon^{\Delta t / (nT)}$. The original delta amount is $x$, and each split segment is $x / n$. The simulation progressively damps each segment using the decay factor $\lambda$.

In the first iteration, the residual is $\frac{x}{n}\lambda$; in other words, the actor traverses $\frac{x}{n}(1 - \lambda)$. In the second iteration, the residual will be:

$$\Big(\frac{x}{n}\lambda + \frac{x}{n}\Big)\lambda = \frac{x}{n}\big(\lambda^2 + \lambda\big)$$

This process continues until the last, $n$-th iteration, where the final residual will be:

$$\frac{x}{n}\big(\lambda^n + \lambda^{n-1} + \cdots + \lambda\big) = \frac{x}{n}\cdot\frac{\lambda\,(1 - \lambda^n)}{1 - \lambda}$$

Compared with the original residual without multi-step simulation, $x\lambda^n = x\,\varepsilon^{\Delta t / T}$, the simulated result will almost always be larger than the non-simulated counterpart, implying that the actor moves less within the duration $\Delta t$. This makes the actor behave more smoothly under DeltaTime variability.
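A minimal sketch of this multi-step damper, assuming a fixed sub-step count n (Cinemachine instead derives the step count from a fixed sub-step length):

    // Damps Input over DeltaTime using n sub-steps: each iteration adds one
    // fresh segment Input / n and decays the accumulated residual by Lambda.
    float DamperWithSubsteps(float Input, float DeltaTime, float DampTime,
                             int32 n = 4, float Residual = 0.01f)
    {
        if (DampTime <= 0.0f || DeltaTime <= 0.0f || n <= 0) return Input;

        float Lambda = FMath::Pow(Residual, DeltaTime / (n * DampTime));
        float Remaining = 0.0f;
        for (int32 i = 0; i < n; ++i)
            Remaining = (Remaining + Input / n) * Lambda;

        // Consume everything except the final residual.
        return Input - Remaining;
    }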

Code

Our last step is to implement this Cinemachine-like 1v1 camera system in code. It is not hard, but since we want a practical, robust and extensible camera system, we would like to organize the code in a more systematic way. Cinemachine makes a great example (thank you, Cinemachine).

Implementing FramingFollow

FramingFollow is akin to the Framing Transposer in Cinemachine. Here is my simple-to-understand implementation:

void UFramingFollow::UpdateComponent(float DeltaTime)
{
    Super::UpdateComponent(DeltaTime);

    if (FollowTarget != nullptr)
    {
        FVector FollowPosition = GetRealFollowPosition();

        /** Get real screen offset. */
        FVector AimPosition = FVector(0, 0, 0);
        FVector2f RealScreenOffset = FVector2f(0, 0);
        if (GetOwningSettingComponent()->GetAimComponent() != nullptr && GetOwningSettingComponent()->GetAimComponent()->GetAimTarget() != nullptr)
        {
            AimPosition = GetOwningSettingComponent()->GetAimComponent()->GetRealAimPosition();
            RealScreenOffset = GetAdaptiveScreenOffset(FollowPosition, AimPosition);
        }
        else RealScreenOffset = ScreenOffset;

        /** Transform from world space to local space. */
        FVector LocalSpaceFollowPosition = GetLocalSpacePosition(FollowPosition);

        /** Temporary (before damping) delta position. */
        FVector TempDeltaPosition = FVector(0, 0, 0);

        /** First move the camera along the local space X axis. */
        SetForwardDelta(LocalSpaceFollowPosition, TempDeltaPosition);

        /** Then move the camera along the local space YZ plane. */
        float W = SetYZPlaneDelta(LocalSpaceFollowPosition, TempDeltaPosition, RealScreenOffset);

        /** Get damped delta position. */
        FVector DampedDeltaPosition = DampDeltaPosition(LocalSpaceFollowPosition, TempDeltaPosition, DeltaTime, RealScreenOffset, W);

        /** Apply damped delta position. */
        GetOwningActor()->AddActorLocalOffset(DampedDeltaPosition);
    }
}

FVector UFramingFollow::GetRealFollowPosition()
{
    FVector ActorLocation = FollowTarget->GetActorLocation();
    FRotator ActorRotation = FollowTarget->GetActorRotation();
    FVector LocalOffset = UKismetMathLibrary::GreaterGreater_VectorRotator(FollowOffset, ActorRotation);

    return ActorLocation + LocalOffset;
}

FVector2f UFramingFollow::GetAdaptiveScreenOffset(const FVector& FollowPosition, const FVector& AimPosition)
{
    FVector Diff = FollowPosition - AimPosition;
    float ProjectedDistance = FMath::Sqrt(FMath::Square(Diff.X) + FMath::Square(Diff.Y));

    FVector2f OutRange = FVector2f(1.0f, 0.0f);
    FVector2f RealScreenOffset;
    RealScreenOffset.X = ScreenOffset.X * FMath::GetMappedRangeValueClamped(AdaptiveScreenOffsetDistanceX, OutRange, ProjectedDistance);
    RealScreenOffset.Y = ScreenOffset.Y * FMath::GetMappedRangeValueClamped(AdaptiveScreenOffsetDistanceY, OutRange, ProjectedDistance);

    return RealScreenOffset;
}

FVector UFramingFollow::GetLocalSpacePosition(const FVector& FollowPosition)
{
    FVector Diff = FollowPosition - GetOwningActor()->GetActorLocation();

    FVector ForwardVector = GetOwningActor()->GetActorForwardVector();
    FVector RightVector = GetOwningActor()->GetActorRightVector();
    FVector UpVector = GetOwningActor()->GetActorUpVector();

    FVector LocalSpaceFollowPosition =
        UKismetMathLibrary::MakeVector(ForwardVector.X, RightVector.X, UpVector.X) * Diff.X +
        UKismetMathLibrary::MakeVector(ForwardVector.Y, RightVector.Y, UpVector.Y) * Diff.Y +
        UKismetMathLibrary::MakeVector(ForwardVector.Z, RightVector.Z, UpVector.Z) * Diff.Z;

    return LocalSpaceFollowPosition;
}

void UFramingFollow::SetForwardDelta(const FVector& LocalSpaceFollowPosition, FVector& TempDeltaPosition)
{
    TempDeltaPosition.X = LocalSpaceFollowPosition.X - CameraDistance;
}

float UFramingFollow::SetYZPlaneDelta(const FVector& LocalSpaceFollowPosition, FVector& TempDeltaPosition, const FVector2f& RealScreenOffset)
{
    float W = UKismetMathLibrary::DegTan(GetOwningCamera()->FieldOfView / 2.0f) * CameraDistance * 2.0f;
    float ExpectedPositionY = W * RealScreenOffset.X;
    float ExpectedPositionZ = W / GetOwningCamera()->AspectRatio * RealScreenOffset.Y;

    TempDeltaPosition.Y = LocalSpaceFollowPosition.Y - ExpectedPositionY;
    TempDeltaPosition.Z = LocalSpaceFollowPosition.Z - ExpectedPositionZ;

    return W;
}

FVector UFramingFollow::DampDeltaPosition(const FVector& LocalSpaceFollowPosition, const FVector& TempDeltaPosition, float DeltaTime, const FVector2f& RealScreenOffset, float& W)
{
    FVector DampedDeltaPosition = FVector(0, 0, 0);
    UMECameraLibrary::DamperVectorWithDifferentDampTime(DampMethod, DeltaTime, TempDeltaPosition, FollowDamping, DampedDeltaPosition, DampResidual);
    EnsureWithinBounds(LocalSpaceFollowPosition, DampedDeltaPosition, RealScreenOffset, W);

    return DampedDeltaPosition;
}

void UFramingFollow::EnsureWithinBounds(const FVector& LocalSpaceFollowPosition, FVector& DampedDeltaPosition, const FVector2f& RealScreenOffset, float& W)
{
    float LeftBound = (RealScreenOffset.X + ScreenOffsetWidth.X) * W;
    float RightBound = (RealScreenOffset.X + ScreenOffsetWidth.Y) * W;
    float BottomBound = (RealScreenOffset.Y + ScreenOffsetHeight.X) * W / GetOwningCamera()->AspectRatio;
    float TopBound = (RealScreenOffset.Y + ScreenOffsetHeight.Y) * W / GetOwningCamera()->AspectRatio;

    FVector ResultLocalSpacePosition = LocalSpaceFollowPosition - DampedDeltaPosition;
    if (ResultLocalSpacePosition.Y < LeftBound) DampedDeltaPosition.Y += ResultLocalSpacePosition.Y - LeftBound;
    if (ResultLocalSpacePosition.Y > RightBound) DampedDeltaPosition.Y += ResultLocalSpacePosition.Y - RightBound;
    if (ResultLocalSpacePosition.Z < BottomBound) DampedDeltaPosition.Z += ResultLocalSpacePosition.Z - BottomBound;
    if (ResultLocalSpacePosition.Z > TopBound) DampedDeltaPosition.Z += ResultLocalSpacePosition.Z - TopBound;
}
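A quick cross-check against the derivation: in SetYZPlaneDelta, W is the full screen width $2d\tan(\theta/2)$ at the camera distance, so the expected local-space Y coordinate of the follow target, W * RealScreenOffset.X, is exactly the screen offset term $2d\tan(\theta/2)\,s_x$ from the first section, and dividing by the aspect ratio gives its vertical counterpart.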

Implementing TargetingAim

TargetingAim serves the same function as the Composer in Cinemachine. It only sets the camera rotation, keeping the aim target at a fixed position on the screen. Here is the implementation:

void UTargetingAim::UpdateComponent(float DeltaTime)
{
    Super::UpdateComponent(DeltaTime);

    if (AimTarget != nullptr)
    {
        /** Get the *real* aim position, based on actor's local space. */
        FVector AimPosition = GetRealAimPosition();

        /** If camera is too close to aim target, return. */
        if (CheckIfTooClose(AimPosition)) return;

        /** Temporary delta rotation before damping. */
        FRotator TempDeltaRotation = FRotator(0, 0, 0);

        /** Set delta rotation. */
        SetDeltaRotation(AimPosition, TempDeltaRotation);

        /** Get damped delta rotation. */
        FRotator DampedDeltaRotation = DampDeltaRotation(TempDeltaRotation, DeltaTime, AimPosition);

        /** Apply damped delta rotation. */
        GetOwningActor()->AddActorLocalRotation(FRotator(DampedDeltaRotation.Pitch, 0, 0));
        GetOwningActor()->AddActorWorldRotation(FRotator(0, DampedDeltaRotation.Yaw, 0));
    }
}

bool UTargetingAim::CheckIfTooClose(const FVector& AimPosition)
{
    float Distance = UKismetMathLibrary::Vector_Distance(GetOwningActor()->GetActorLocation(), AimPosition);
    return UKismetMathLibrary::NearlyEqual_FloatFloat(Distance, 0, 0.001);
}

void UTargetingAim::SetDeltaRotation(const FVector& AimPosition, FRotator& TempDeltaRotation)
{
    FRotator CenteredDeltaRotation = UKismetMathLibrary::NormalizedDeltaRotator(UKismetMathLibrary::FindLookAtRotation(GetOwningActor()->GetActorLocation(), AimPosition), GetOwningActor()->GetActorRotation());
    TempDeltaRotation.Yaw = CenteredDeltaRotation.Yaw - ScreenOffset.X * GetOwningCamera()->FieldOfView;
    TempDeltaRotation.Pitch = CenteredDeltaRotation.Pitch - ScreenOffset.Y * 2.0f * UKismetMathLibrary::DegAtan(UKismetMathLibrary::DegTan(GetOwningCamera()->FieldOfView / 2) / GetOwningCamera()->AspectRatio);
    TempDeltaRotation.Roll = 0;
}

FRotator UTargetingAim::DampDeltaRotation(const FRotator& TempDeltaRotation, float DeltaTime, const FVector& AimPosition)
{
    FRotator DampedDeltaRotation = FRotator(0, 0, 0);
    UMECameraLibrary::DamperRotatorWithDifferentDampTime(DampMethod, DeltaTime, TempDeltaRotation, AimDamping, DampedDeltaRotation, DampResidual);
    EnsureWithinBounds(DampedDeltaRotation, AimPosition);

    return DampedDeltaRotation;
}

void UTargetingAim::EnsureWithinBounds(FRotator& DampedDeltaRotation, const FVector& AimPosition)
{
    double VFieldOfView = 2.0f * UKismetMathLibrary::DegAtan(UKismetMathLibrary::DegTan(GetOwningCamera()->FieldOfView / 2) / GetOwningCamera()->AspectRatio);
    double LeftBound = (ScreenOffset.X + ScreenOffsetWidth.X) * GetOwningCamera()->FieldOfView;
    double RightBound = (ScreenOffset.X + ScreenOffsetWidth.Y) * GetOwningCamera()->FieldOfView;
    double BottomBound = (ScreenOffset.Y + ScreenOffsetHeight.X) * VFieldOfView;
    double TopBound = (ScreenOffset.Y + ScreenOffsetHeight.Y) * VFieldOfView;

    FQuat DesiredQuat = GetOwningActor()->GetActorRotation().Quaternion();
    DesiredQuat = FQuat(FRotator(0, DampedDeltaRotation.Yaw, 0)) * DesiredQuat * FQuat(FRotator(DampedDeltaRotation.Pitch, 0, 0));
    FRotator DesiredRotation = DesiredQuat.Rotator();

    FRotator ResultRotationDiff = UKismetMathLibrary::NormalizedDeltaRotator(UKismetMathLibrary::FindLookAtRotation(GetOwningActor()->GetActorLocation(), AimPosition), DesiredRotation);
    if (ResultRotationDiff.Yaw < LeftBound) DampedDeltaRotation.Yaw += ResultRotationDiff.Yaw - LeftBound;
    if (ResultRotationDiff.Yaw > RightBound) DampedDeltaRotation.Yaw += ResultRotationDiff.Yaw - RightBound;
    if (ResultRotationDiff.Pitch < BottomBound) DampedDeltaRotation.Pitch += ResultRotationDiff.Pitch - BottomBound;
    if (ResultRotationDiff.Pitch > TopBound) DampedDeltaRotation.Pitch += ResultRotationDiff.Pitch - TopBound;
}
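One design choice worth noting: SetDeltaRotation maps screen offsets linearly to angles (ScreenOffset.X * FieldOfView for yaw, and the analogous vertical field of view for pitch). The exact mapping would be $\arctan\big(2 s_x \tan(\theta/2)\big)$; the linear form agrees with it at the screen center and edges, and is a cheap, close approximation in between.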

Use in Unreal Engine

Using it in UE is really simple. All you need to do is:

  • Create a new blueprint class inheriting from MECameraBase;
  • Set up the parameters in the CameraSettingsComponent component, e.g., set the FollowComponent to FramingFollow and the AimComponent to TargetingAim;
  • Use the CallCamera node in blueprint to instantiate an actor of the blueprint class you just created.