This framework was developed to eliminate repetitive tasks and to let you add the functionality we end up reimplementing in every project, without writing code. How? Let's say there's a task where every UI Button throughout the game should now play 3 different sounds (Click, Highlighted, and Selected), but you're busy with another task. How can a non-programmer take charge of adding this functionality without writing a single line of code or knowing how your audio system works internally? This is where this framework comes in.

Non-programmers can add or edit game functionality without needing a programmer, making it perfect for designers who want to test ideas, artists who want to easily change their UI art style, or even solo programmers who want to quickly add functionality that others can modify later. The more you use and extend this framework to meet your project's needs, the more functionality you'll have available to reuse in your next project, making development even faster and easier. I've used it in both real production projects and personal projects, so it's been battle-tested, and I can say with 99% confidence that it will improve your project's iteration time.

Should you use it? I would say yes, but sparingly. Be careful not to build all your game's functionality inside the inspector with action components; complex gameplay and systems will still require you to write code. Remember, it's called the "CommonFramework" because it focuses on common logic that can be applied to gameplay, UI, and more.
This project was developed with Unity 6000.0.23f1, but it should work just fine in any version that supports the dependencies; the rest of the framework is built on top of Unity's built-in functionality.
Also, this framework is still under development. It isn't complete yet, but it contains the core base that you can already use. I'm still doing some internal refactoring of some classes, exporting some action packages to the extensions, and developing the demo project, but you can take a look to get an overall idea of how it works in practice.
This project contains some dependencies that were not originally developed by me:
AwesomeAttributes: https://assetstore.unity.com/packages/tools/gui/awesome-attributes-296859?srsltid=AfmBOooUD9O9gGfEYgp5yIywGFp9x00S_gnREDlbLpsrfuva5JKoqSWr
EasyButtons Attribute: https://github.com/madsbangh/EasyButtons
MackySoft.SerializeReferenceExtensions: https://github.com/mackysoft/Unity-SerializeReferenceExtensions
UniTask: https://github.com/Cysharp/UniTask
Extenject: https://github.com/Mathijs-Bakker/Extenject
Thanks to these plugins, I was able to develop this project. All other parts of the framework itself were developed by me and are under the project's license: CC0 (public domain)
All these dependencies are already included in the project.
I'm still writing the project documentation. It's extensive, and I'm trying to cover the most important features first, but it still lacks some important parts. I highly recommend taking a look at the demo project; even though it's incomplete, it will help you understand how everything works.
When creating UIs, you can create Styles, Bindings, Data Interpreters, Effects, and UI Baking.
When managing canvases in a scene, a common workflow is to create a dedicated UI scene and load it additively. While this works, and I still think it's good practice, there are cases where UIs are world-space and attached to a GameObject, or dynamic UIs created from your gameplay scene. In those cases, you would have to create multiple references to the same UIs and manually manage their creation on the appropriate objects. To solve this problem, there's a component called "UI Creator": a UI container that holds information about which UIs to create and when to create them. Let's take a look:
Definitions Container: Here you can define which UI prefabs will be created and what their default active state should be
Default UI Active State: Defines whether or not your UI prefabs are created as disabled or enabled GameObjects. You can override this option in the UI Definition
Parent Transform: Defines where the prefabs will be placed as children. You can leave this as null, and they will be created at the scene root
Creation Type: Defines when the UIs should be created. There are 3 options:
- Runtime: Instantiates the prefabs in the Awake method
- Baked: Instantiates the UIs while inside the editor, serializing them within the Object/Scene. This may increase performance during gameplay at the cost of memory usage
- Manually: The UIs will not be created until you explicitly call it
Bake Settings:
- Readonly Bake: UI object flags will be set to "NotEditable", preventing any changes in the inspector
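To make the settings above concrete, here is a minimal sketch of how a UI Creator-style component could be structured. All class and field names here are hypothetical illustrations, not the framework's actual API (only Baked mode is omitted, since editor-time serialization is framework-specific):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch — the real "UI Creator" component's API may differ.
public enum UICreationType { Runtime, Baked, Manually }

[System.Serializable]
public class UIDefinitionSketch
{
    public GameObject Prefab;
    public bool OverrideActiveState;   // override the container's default state
    public bool ActiveState;
}

public class UICreatorSketch : MonoBehaviour
{
    [SerializeField] private List<UIDefinitionSketch> definitions = new();
    [SerializeField] private bool defaultUIActiveState = true;
    [SerializeField] private Transform parentTransform;   // null => scene root
    [SerializeField] private UICreationType creationType = UICreationType.Runtime;

    private void Awake()
    {
        // Runtime mode: instantiate the prefabs in Awake.
        if (creationType == UICreationType.Runtime)
            CreateUIs();
    }

    // Call this yourself when Creation Type is set to Manually.
    public void CreateUIs()
    {
        foreach (var definition in definitions)
        {
            var instance = Instantiate(definition.Prefab, parentTransform);
            bool active = definition.OverrideActiveState
                ? definition.ActiveState
                : defaultUIActiveState;
            instance.SetActive(active);
        }
    }
}
```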
Styles are data containers used to tell your components how they should behave and be displayed visually. Current components that support styles are: Text & Button
Example of Button Style:
You can create as many styles as you want, and once you apply a style to your component's parent prefab, all UIs derived from that component will change accordingly
You can also create a specific style for a component by setting "Use Custom Style" to true in the component inspector:
With this option enabled, the component ignores the currently set style data and uses a style exclusive to that component, not shared with other instances
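As a rough illustration of the style concept, here is a hedged sketch: a style as a ScriptableObject asset, plus a component that applies either the shared style or a per-instance custom one. The names and fields are hypothetical; the framework's real Button Style will have its own data layout:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical style asset — the framework's actual style classes may differ.
[CreateAssetMenu(menuName = "UI/Button Style Sketch")]
public class ButtonStyleSketch : ScriptableObject
{
    public Color NormalColor = Color.white;
    public Color HighlightedColor = Color.yellow;
    public Sprite BackgroundSprite;
}

public class StyledButtonSketch : MonoBehaviour
{
    [SerializeField] private ButtonStyleSketch sharedStyle;   // shared across instances
    [SerializeField] private bool useCustomStyle;             // "Use Custom Style" toggle
    [SerializeField] private ButtonStyleSketch customStyle;   // exclusive to this component

    private void OnEnable() => ApplyStyle(useCustomStyle ? customStyle : sharedStyle);

    private void ApplyStyle(ButtonStyleSketch style)
    {
        var button = GetComponent<Button>();
        var colors = button.colors;
        colors.normalColor = style.NormalColor;
        colors.highlightedColor = style.HighlightedColor;
        button.colors = colors;

        if (style.BackgroundSprite != null)
            button.image.sprite = style.BackgroundSprite;
    }
}
```

Because styles live in assets rather than on prefabs, swapping the look of every derived UI means editing (or reassigning) one asset.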
Actions are classes that can be used to execute encapsulated logic. They can belong to any category, such as Application, Audio, Physics, etc.
Example of an action: Quit Application
Once the action is executed, it will perform the logic to exit the game:
Action implementations are usually pretty simple and focused, but they can become more complex as you add more customizable options to the same logic
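The Quit Application action could look something like the sketch below. The base class name here is hypothetical (the source only names the component base, 'CommonActionComponent'); the editor/build split is a common Unity pattern, since `Application.Quit` does nothing in the editor:

```csharp
using UnityEngine;

// Hypothetical action base — the framework's real base class may differ.
[System.Serializable]
public abstract class GameActionSketch
{
    public abstract void Execute();
}

// An "Application" category action: exits the game when executed.
[System.Serializable]
public class QuitApplicationActionSketch : GameActionSketch
{
    public override void Execute()
    {
#if UNITY_EDITOR
        // Application.Quit is ignored in the editor, so stop play mode instead.
        UnityEditor.EditorApplication.isPlaying = false;
#else
        Application.Quit();
#endif
    }
}
```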
If actions are used to execute logic, what is used to trigger them? For that, there's a set of components called 'Action Components', which hold a list of actions to perform and define when they will be performed. They all inherit from the base class 'CommonActionComponent', which contains the list of actions that should be performed and a flag for whether the component should be removed after execution.
But how do they decide when to perform these actions? It’s not exactly a decision (unless you choose to create one with your own logic, which we'll cover later). Instead, it’s usually a callback or a response to some event.
Example: Execute Actions On Start
All this component does is take the list of actions you've created and execute them in the Start method:
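A minimal sketch of that relationship is shown below: a base component holding a polymorphic action list, and a derived component that fires it in Start. The names are hypothetical stand-ins for 'CommonActionComponent' and 'Execute Actions On Start'; `[SerializeReference]` is one way to get polymorphic action lists in the inspector (the project includes MackySoft.SerializeReferenceExtensions for exactly this kind of picker UI):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical interface — the framework's real action type may differ.
public interface IGameActionSketch
{
    void Execute();
}

// Sketch of the 'CommonActionComponent' idea: a list of actions plus
// a flag to remove the component after it has run.
public abstract class ActionComponentSketch : MonoBehaviour
{
    [SerializeReference] protected List<IGameActionSketch> actions = new();
    [SerializeField] protected bool removeAfterExecution;

    protected void ExecuteActions()
    {
        foreach (var action in actions)
            action.Execute();

        if (removeAfterExecution)
            Destroy(this); // removes only this component, not the GameObject
    }
}

// Sketch of 'Execute Actions On Start': run everything in Unity's Start message.
public class ExecuteActionsOnStartSketch : ActionComponentSketch
{
    private void Start() => ExecuteActions();
}
```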
When is this useful? Let's say that you want your bullet to spawn a particle system once it spawns, and when it hits something and gets destroyed. We can use action components that respond to these events (Enable, Destroy) and then execute the action of spawning the particle.
1 - First, we add the corresponding callback components:
2 - Then, we add the action that we want to execute in both components. In this case: 'Instantiate Particle System'
3 - Lastly, we configure the parameters to correctly spawn the particle system:
And you're done! Now your bullet will spawn the particles on the corresponding events.
There's an Action Component for each Unity Message (OnCollisionEnter, OnBecameInvisible, Update, etc.), but what about custom ones? To create a new one, you can inherit from 'CommonActionComponent' and implement your own logic to determine when to execute the actions. The project already includes some custom ones, such as 'ExecuteActionsDelayed'.
Now, let's say you're making a horror game and want to create an ambience where the light keeps toggling between enabled and disabled. (You could also use animation for that, but I'll use the component for the example.)
You can add the ExecuteActionsDelayed component and include the corresponding action to toggle the light's active state:
In the project demo, this component is used to spawn obstacles from an object pool.
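For the flickering-light example, a custom delayed component could be sketched like this. Everything here is a hypothetical illustration of the pattern; the real 'ExecuteActionsDelayed' will differ (for instance, it might run its configured action list via UniTask rather than a coroutine):

```csharp
using System.Collections;
using UnityEngine;

// Hypothetical sketch — not the framework's actual 'ExecuteActionsDelayed'.
public class ExecuteActionsDelayedSketch : MonoBehaviour
{
    [SerializeField] private float delaySeconds = 1f;
    [SerializeField] private bool loop = true;     // keep repeating, e.g. a flickering light
    [SerializeField] private GameObject target;    // the light to toggle, for this example

    private void OnEnable() => StartCoroutine(Run());

    private IEnumerator Run()
    {
        do
        {
            yield return new WaitForSeconds(delaySeconds);

            // Stand-in for executing the configured action list;
            // here we just toggle the target's active state.
            target.SetActive(!target.activeSelf);
        }
        while (loop);
    }
}
```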