Having trouble with the component based architecture


    • Having trouble with the component based architecture

      Hello there,
The first thing I want to say: the book (I got the 4th edition) is really great. I've had it for two weeks now and am currently reading chapter 10. I marked everything I want to find quickly again by sticking little notes on the pages and writing a short note on each.
Furthermore, I can say the book is written simply enough for a foreigner to understand (I'm German), but not at such a low level that it would be boring. Really great work.

So now I was curious about the source code, because I have had some trouble in the past, especially with component-based architecture.
I tried to write a Pong clone (I am using Java and Slick2D/LWJGL) with a component-based approach. So I had some components, like position, transformation, movement, physics, render, etc.
And for every component which needed a system, there was a system: one system for moving an object around, one system for calculating physics, etc.
The components themselves only contained data and had no real functions other than getters/setters and constructors.
The systems iterated through every actor, asking whether it had the components the system needed and then working on those components.

      For example:
The physics system should make the ball reflect off walls and paddles.
Furthermore, the physics system should make the paddles stop moving when touching a wall.
So the first problem was to distinguish between the ball and the paddles.
Then the system had to distinguish between a collision of the ball with a paddle and of the ball with a wall, because they needed different calculations.
So somehow the system had to know which actor was a paddle and which was a wall.
But in my opinion this cannot be a good solution, as a CBS should be modular, and distinguishing between object types this way cannot really be modular.
I also had some headaches for the following reason: I wanted to implement some powerups, for example one which made the ball reflect off the wall and the paddles in some strange way, such as coming back really fast or reflecting at a deliberately wrong angle.
But this just made the physics system messier, because now I had one big if-statement which checked whether this powerup was active, and if so, there was a totally different physics calculation which looked nearly the same as the one without the powerup but with some other values.

So now I opened the code of the book and looked for the code of the CBS and... it didn't help. I see that there is a different approach: the code uses no systems to operate on the components; instead, the components do everything themselves.
In my opinion this changes nothing, because, for example, the physics component would have the same problem as my physics system: is it attached to the ball or a paddle? And how should it behave in each case?
Is my approach wrong?
Should there be a standard physics system for movement and then two different physics systems/components, one for the paddle and one for the ball?

I also have problems like this: if there are components which add additional abilities to an actor, then the AI should know about these components. How would you tell the AI which components its actor has and how to use them? This is surely no problem if there are only standard components like moving and shooting.
But the CBS approach is said to be very modular, and the components should just be modules which add new functionality to an actor or remove it when they are removed. So, for example, one AI-controlled actor can only walk, but another AI-controlled actor has a 'fly' component. How would you tell the AI, and how could the AI use this?
Just iterating through a huge list of possible components, checking which are there, and then starting some AI routine depending on the world state and the attached components does not seem very modular to me.

I would be happy to see some answers to this problem, as it is a big one for me. In general, it is a big problem for me to add functionality to a game (not only with the CBS approach but also with classical inheritance) without ending up in a huge mess with strange and big dependencies.

      Greetings,
      M0rgenstern

PS: Sorry if my English is not so great.
    • RE: Having trouble with the component based architecture

      Originally posted by M0rgenstern
      Hello there,
The first thing I want to say: the book (I got the 4th edition) is really great. I've had it for two weeks now and am currently reading chapter 10. I marked everything I want to find quickly again by sticking little notes on the pages and writing a short note on each.
Furthermore, I can say the book is written simply enough for a foreigner to understand (I'm German), but not at such a low level that it would be boring. Really great work.

      Thanks!


So now I was curious about the source code, because I have had some trouble in the past, especially with component-based architecture.
I tried to write a Pong clone (I am using Java and Slick2D/LWJGL) with a component-based approach. So I had some components, like position, transformation, movement, physics, render, etc.
And for every component which needed a system, there was a system: one system for moving an object around, one system for calculating physics, etc.
The components themselves only contained data and had no real functions other than getters/setters and constructors.
The systems iterated through every actor, asking whether it had the components the system needed and then working on those components.

      For example:
The physics system should make the ball reflect off walls and paddles.
Furthermore, the physics system should make the paddles stop moving when touching a wall.
So the first problem was to distinguish between the ball and the paddles.
Then the system had to distinguish between a collision of the ball with a paddle and of the ball with a wall, because they needed different calculations.
So somehow the system had to know which actor was a paddle and which was a wall.
But in my opinion this cannot be a good solution, as a CBS should be modular, and distinguishing between object types this way cannot really be modular.
I also had some headaches for the following reason: I wanted to implement some powerups, for example one which made the ball reflect off the wall and the paddles in some strange way, such as coming back really fast or reflecting at a deliberately wrong angle.
But this just made the physics system messier, because now I had one big if-statement which checked whether this powerup was active, and if so, there was a totally different physics calculation which looked nearly the same as the one without the powerup but with some other values.

So now I opened the code of the book and looked for the code of the CBS and... it didn't help. I see that there is a different approach: the code uses no systems to operate on the components; instead, the components do everything themselves.
In my opinion this changes nothing, because, for example, the physics component would have the same problem as my physics system: is it attached to the ball or a paddle? And how should it behave in each case?
Is my approach wrong?
Should there be a standard physics system for movement and then two different physics systems/components, one for the paddle and one for the ball?

There are three major questions here that I will attempt to rephrase. The first is about whether you should have all the functionality inside the component or whether you should create systems that consume the data on components and process the actor. The second is about components that are similar and their behavior. The third is about cross-component communication.

      Should the functionality be inside components or inside systems? I think it's a little of both, though most of the functionality should probably be in the component. I think it's a mistake to say that components only contain data, but I also don't think it's correct to always put all the functionality inside the component. For example, in my engine, the renderable component doesn't actually know how to draw the actor. All it knows is how to build a render command, which the render system takes and submits to the render queue. This queue is processed on another thread and is completely decoupled from the actor. Once this information is taken from the component, the two no longer have any interaction. That means I can destroy the object without having to worry about updating the render thread.

      Another way to handle this would be to have the renderable component contain only data and have the rendering system build up that command object. This isn't good, because then the render system has to have a bunch of complexity it doesn't need. You'll end up having a big switch/case block for all the different types of renderables you have. A better way is to have multiple types of renderables, each conforming to a simple interface. That way, the render system just asks for the render command. It doesn't have to care how that command is built.
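A quick sketch of what that might look like (the names here are illustrative, not the engine's actual classes): each renderable subtype builds its own command, so the render system never needs a switch/case on renderable type.

```cpp
#include <memory>
#include <string>
#include <utility>
#include <vector>

// Illustrative names only. Each renderable subtype knows how to build its own
// command, so the render system never branches on the renderable's type.
struct RenderCommand { std::string payload; };

class IRenderable {
public:
    virtual ~IRenderable() = default;
    virtual RenderCommand BuildRenderCommand() const = 0;
};

class SpriteRenderable : public IRenderable {
    std::string m_textureName;
public:
    explicit SpriteRenderable(std::string tex) : m_textureName(std::move(tex)) {}
    RenderCommand BuildRenderCommand() const override { return {"sprite:" + m_textureName}; }
};

class MeshRenderable : public IRenderable {
    std::string m_meshName;
public:
    explicit MeshRenderable(std::string mesh) : m_meshName(std::move(mesh)) {}
    RenderCommand BuildRenderCommand() const override { return {"mesh:" + m_meshName}; }
};

// The render system just asks; it doesn't care how each command is built.
std::vector<RenderCommand> GatherCommands(const std::vector<std::shared_ptr<IRenderable>>& renderables) {
    std::vector<RenderCommand> commands;
    for (const auto& pRenderable : renderables)
        commands.push_back(pRenderable->BuildRenderCommand());
    return commands;
}
```

Adding a new kind of renderable is then just a new subclass; GatherCommands never changes.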

      This brings me to the second question. The way I prefer building component systems is by using interfaces. I'll create a purely abstract base class that inherits from ActorComponent and defines the interface for that type of component. There are multiple subclasses of this interface, each defining a full-fledged component that implements the interface. For example, I have at least three different transform components depending on what I need it to do which all inherit from the base TransformComponent interface. I have the same for renderables, AI components, and anything else I want. When I ask an actor for a component, it's always by type. So when I ask an actor for its transform component, I get a TransformComponent pointer. Polymorphism takes care of the rest. :)

      One concept I have in my component system is the ability to register ComponentSystem objects. ComponentSystem is an interface that looks like this:

Source Code

class ComponentSystem : public Process
{
public:
    enum ComponentSystemType
    {
        PASSIVE,           // never ticked; this component system will not be attached to a process manager
        ACTIVE_SYSTEM,     // active system-level component systems are always updated and never paused or accelerated
        ACTIVE_GAMEPLAY,   // active gameplay component systems are subject to pausing and time compression
    };

    virtual ~ComponentSystem(void) {}

    // Initializes the component system.
    virtual bool Init(void) { return true; }

    // Adds a component to the system.
    virtual void AddComponent(EntityComponentPtr pComponent) = 0;

    // Returns the type of component system this is, which determines which process manager this
    // system should be attached to (if any).  Sprite rendering is an example of a system-level
    // component system while animation would be a gameplay system.
    virtual ComponentSystemType GetType(void) const { return PASSIVE; }

protected:
    virtual void OnUpdate(unsigned long deltaMs) { UNUSED_PARAM(deltaMs); }
};


      Notice the AddComponent() function. This is called whenever an actor is created with that component and passes in a weak ref to that component. That way, I can have a list of all the components this system cares about without having to loop through every object in my game. This is really important for performance reasons. Since they're weak refs, I can prune the list every iteration if an actor is destroyed.

      These systems are usually VERY simple. I tend to do most of the work in the component because it's nice and self-contained. It also lets me take advantage of the fact that the actual component can do whatever it wants, it just needs the interface.

      As an example, here's my AnimationSystem:

Source Code

class AnimationSystem : public ComponentSystem
{
    typedef WeakPtr<AnimationComponent> AnimationComponentPtr;
    typedef std::list< AnimationComponentPtr, MainThreadStlAllocator<AnimationComponentPtr> > AnimationComponentList;
    AnimationComponentList m_animationComponents;

public:
    // ComponentSystem interface
    virtual void AddComponent(EntityComponentPtr pComponent);
    virtual ComponentSystemType GetType(void) const { return ACTIVE_GAMEPLAY; }

protected:
    // Process interface
    virtual void OnUpdate(unsigned long deltaMs);
};

Source Code

void AnimationSystem::AddComponent(EntityComponentPtr pComponent)
{
    if (pComponent.IsValid())
    {
        AnimationComponentPtr pAnimComp;
        pAnimComp.AssignCast(pComponent);
        m_animationComponents.push_back(pAnimComp);
    }
}

void AnimationSystem::OnUpdate(unsigned long deltaMs)
{
    AnimationComponentList::iterator it = m_animationComponents.begin();
    while (it != m_animationComponents.end())
    {
        AnimationComponentList::iterator workIt = it;
        ++it;

        AnimationComponentPtr pAnimComp = (*workIt);

        // if this component has become invalid, remove it from the list
        if (!pAnimComp.IsValid())
        {
            m_animationComponents.erase(workIt);
            continue;
        }

        pAnimComp->Update(deltaMs);
    }
}


      (more in next post)
That's it. All it does is loop through the list and tell each component to update itself. It doesn't care what kind of animation it is; the component can take care of that.

This isn't to say that all functionality should be in the component all the time. On The Sims 4, I wrote an AutonomyComponent class that deals with a lot of the AI for a Sim. None of the scoring is in there; that's all done by the manager that consumes these objects. The component mostly deals with scheduling an AI update. If different Sims had very different AI, I would have built it differently. For example, if I were making an RPG, I probably would put the core AI scoring in the component because I'd want it to change based on the concrete component that was attached.

      Your final question is about cross-component communication. This is tricky. There are basically two ways to handle this, both with their ups and downs. The first is to just grab the component you need and do whatever is required. Most games I've worked on do this. It's certainly the easiest, but you end up getting coupling between components. You'll have one component dependent on two others, which are dependent on another two, and so on. Soon, every game object will require nine components just to build. If you do use this method, you have to make sure you guard against not having the right components attached and that you keep the coupling to a minimum.
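A minimal sketch of this direct-access style, including the guard against a missing component (all names here are made up for illustration):

```cpp
#include <cstdio>
#include <memory>

// Illustrative names. The important part is the guard: never assume the
// sibling component is attached.
struct TransformComponent {
    float x = 0.0f;
};

struct Actor {
    std::shared_ptr<TransformComponent> pTransform;  // may be null
};

// An AI behavior that directly pokes a sibling component.
bool MoveRight(Actor& actor, float amount) {
    if (!actor.pTransform) {
        std::fprintf(stderr, "MoveRight: actor has no transform component\n");
        return false;
    }
    actor.pTransform->x += amount;
    return true;
}
```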

      The second method is to use an event model where the actor becomes a mini event system. If your AI component wants to move the actor, it sends an event through the actor, which in turns pushes it down to the transform component. This is nice because it decouples components from each other and allows you to have very different components handle similar messages. The down side is that messages can have weird side effects. For example, what happens if your AI component wants to move right but your physics component is telling you to move left? You end up having to deal with priorities, which can be a real pain (sometimes you want AI to override physics, sometimes not).
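Here's a tiny sketch of the actor-as-mini-event-system idea (hypothetical names; a real implementation would use typed event objects rather than strings):

```cpp
#include <functional>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Hypothetical sketch: the actor routes events to whichever components
// subscribed, so the sender never touches the receiver directly.
class EventActor {
    std::map<std::string, std::vector<std::function<void(float)>>> m_listeners;
public:
    // A component registers interest in an event by name.
    void Subscribe(const std::string& eventName, std::function<void(float)> handler) {
        m_listeners[eventName].push_back(std::move(handler));
    }
    // Any component can send an event; the actor fans it out to all listeners.
    void Send(const std::string& eventName, float arg) {
        for (auto& handler : m_listeners[eventName])
            handler(arg);
    }
};
```

The AI and physics components never see each other here, which is exactly why conflicting messages (and priorities) become the new problem.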

      I generally use the first method, as do most systems I've worked with in my career. As long as you're careful to minimize dependencies, it works out just fine.


I also have problems like this: if there are components which add additional abilities to an actor, then the AI should know about these components. How would you tell the AI which components its actor has and how to use them? This is surely no problem if there are only standard components like moving and shooting.
But the CBS approach is said to be very modular, and the components should just be modules which add new functionality to an actor or remove it when they are removed. So, for example, one AI-controlled actor can only walk, but another AI-controlled actor has a 'fly' component. How would you tell the AI, and how could the AI use this?
Just iterating through a huge list of possible components, checking which are there, and then starting some AI routine depending on the world state and the attached components does not seem very modular to me.

I would be happy to see some answers to this problem, as it is a big one for me. In general, it is a big problem for me to add functionality to a game (not only with the CBS approach but also with classical inheritance) without ending up in a huge mess with strange and big dependencies.

You definitely wouldn't iterate through every component; that wouldn't make sense. You would update the specific components necessary:
      1) Update the renderable component with the new model.
      2) Update the animation component with the new animation set.
      3) Update the AI component to take into account the fact that it can fly.
      4) Update the statistic component to grant a +2 to agility and a -50% fire resistance (wings are flammable).

Does this break layering or encapsulation? Not a bit. The potion code does have to know about the interfaces for those four components, but those components have no idea what the potion is. The renderable component probably already has a way to update the model, while the animation component probably has a way to change the animation set. These would be used all over (changing armor or riding a horse, for instance). You wouldn't write a function called ApplyFlyPotion(); you would just call ChangeModel() and ChangeAnimSet(). The same goes for the AI and the statistic component. You would call a function that allows the AI to be less constrained; that's it. As for the statistic component, you're just adding a new effect, which is something I guarantee you'll need.
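As a rough illustration of that idea (all class and function names here are invented, and in a real engine the fly-specific values would be tuned in the component definition XML rather than hard-coded):

```cpp
#include <string>

// Invented names for illustration. The components expose general-purpose
// functions; only the potion component knows the fly-specific values.
struct RenderableComponent {
    std::string model = "Boy";
    void ChangeModel(const std::string& newModel) { model = newModel; }
};

struct AnimationComponent {
    std::string animSet = "Walking";
    void ChangeAnimSet(const std::string& newSet) { animSet = newSet; }
};

struct StatsComponent {
    int agility = 10;
    void ApplyModifier(int agilityDelta) { agility += agilityDelta; }
};

struct HeroActor {
    RenderableComponent renderable;
    AnimationComponent animation;
    StatsComponent stats;
};

// The potion component knows what changes; the components know nothing of it.
struct FlyPotionComponent {
    std::string model = "Boy_With_Wings";   // tuned data, not special-case logic
    std::string animSet = "Flying";
    int agilityBonus = 2;

    void ApplyTo(HeroActor& hero) const {
        hero.renderable.ChangeModel(model);
        hero.animation.ChangeAnimSet(animSet);
        hero.stats.ApplyModifier(agilityBonus);
    }
};
```

Deleting the potion deletes FlyPotionComponent and nothing else; no component changes.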

      If you delete or change the potion, you only have to update the potion. None of the components care at all, which is the point.

      Hope that helps.

      -Rez
    • Wow.

      That was a great answer. It really helped me to understand these things finally. Really. I read through many tutorials for these stuff but after all of them I was not really sure if I understood everything.

      Thank you very much.

      But I did not understand something in your last paragraph:
      Let's say the player collects the potion, so the potion which enables flying is applied to the actor.
      You wrote one should call ChangeModel() ChangeAnimation() etc.
      Clearly you would give these methods a parameter which tells which animation and which model has to be set as the new model. So for example:
      ChangeModel(Boy_With_Wings)
      ChangeAnimation(Flying)
      But: Who calls these methods? As the potion itself knows what it does (enables flying) the potion actually should as well know what changes. So the potion has to know which animation/model is the new and how the statistics are changed. And therefor the potion would need to call the methods of the components.
      Is this right or does it go another way?

      Greetings,
      M0rgenstern
      And thank you again.
    • I would personally say that the potion would know how to notify the actor it should be flying, while the actor itself will have a component which receives the notification, and knows which changes need to be made on its actor. This way, several different types of actors can have different models/animations to be switched, while the potion code remains the same.
      PC - Custom Built
      CPU: 3rd Gen. Intel i7 3770 3.4Ghz
      GPU: ATI Radeon HD 7959 3GB
      RAM: 16GB

      Laptop - Alienware M17x
      CPU: 3rd Gen. Intel i7 - Ivy Bridge
      GPU: NVIDIA GeForce GTX 660M - 2GB GDDR5
      RAM: 8GB Dual Channel DDR3 @ 1600mhz
    • I think I have some related questions that have been bothering me.

      First up, great book! I love it and refer to it often. I put a lot of time into my overall architecture and that's the reason the following bothers me. And, I apologize in advance for my ignorance...

I have a 2D game where sprites are actors, and I have a sprite animation system that listens for all sprite-animation-component created and destroyed events and handles the updates, which is fine.

      I read somewhere that animations should be handled in the game view because sprite animation has nothing to do with game logic, which makes sense. But, I think this means that the animation system should be initialized and attached to the human view's process manager. The problem with this is that it contains and handles animation components that reside within an entity.

      Should components only reside in actors and their systems only reside in the gamelogic's process manager even when something is only related to a view? Should components like this exist outside of the actor in scene node for example?

(In my design I also have similar confusion relating to AI and rendering.)
The component architecture can really be applied anywhere; I have used it for actors, users (views), etc. It all depends on what you are doing. For instance, physics should be simulated in its own system class and interfaced through components to the respective rigid body, collision hull, and joint classes.

When I did sprite animation, I essentially had a sprite definition class which held the information for the animation; however, each animation instance was a component which interfaced into the animation definition. The animation definition could contain:

      - The texture to use
      - A list of animations
      - Depending on how you do it, individual frames, or interpolation data for each animation on where to start, end, how many frames etc.

The animation instance, however, would contain:

      - A default animation to use (maybe even a static image)
      - Which animation is currently playing
      - The current time of the animation
      - etc

With this you could essentially:

      - Bind the texture to be used
      - Retrieve the texture coordinates from the animation definition based on the animation instances current time
      - Build your vertex buffer and draw the quad

      This will of course work for sprite blitting as well, but would involve using a list of sprites or something and changing texture coordinates for sprite indices.
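A bare-bones sketch of that definition/instance split (struct and field names are illustrative):

```cpp
#include <string>
#include <vector>

// Illustrative split between shared definition data and per-actor instance
// state, as described above.
struct AnimationDefinition {
    std::string textureName;        // the texture (sprite sheet) to use
    std::vector<int> frameIndices;  // which sheet cells make up the animation
    unsigned long msPerFrame;       // timing data
};

struct AnimationInstance {
    const AnimationDefinition* pDef;   // shared, read-only definition
    unsigned long currentTimeMs = 0;   // per-instance playback state

    void Update(unsigned long deltaMs) { currentTimeMs += deltaMs; }

    // Map the current time onto a frame index, looping at the end.
    int GetCurrentFrame() const {
        unsigned long frame = currentTimeMs / pDef->msPerFrame;
        return pDef->frameIndices[frame % pDef->frameIndices.size()];
    }
};
```

Many instances can point at one definition, so each animated actor only carries its own clock.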
    • Cheers for the swift reply!

My animation works almost exactly as you describe. What I mean is that the entity is composed of components, one of those being an animation component, and the entity resides within some data structure in the game logic. But animation is game-view related, and thus the system that controls the component should be attached to the view's process manager. So what I have is a component system in the game view that updates an animation component in the game logic, and the entity then has an animation component with data that it doesn't care about. Really, the only thing that cares about animation is the renderer, which needs to know what frame to render.

I can easily put all the component systems in my game logic and all will work fine, but I feel like I am missing some core understanding.

I'm also not fond of how I handle cases where a system needs multiple components, for example when physics needs a physics component and a position component. When updating an entity, it uses a call like this:
      g_gameLogic->GetEntity(id)->GetComponent(type);


    • That is almost exactly how I access my actors as well (although mine is a bit more complex due to multiple actor sets, etc).

As far as where everything should be, either in the logic or the view layer, I am really unsure; it is kind of a deep question. Should the game logic understand that a player is in mid-throw of a grenade? I think so, personally. It should know that the player is currently throwing a grenade and that it is 50% of the way there, but how the view interprets that is entirely arbitrary. It could be an AI receiving the event, when the player is within its view cone, that he is ready to launch a grenade right at it, so the AI could react and dive away; a render system, however, could take that same info and use it to display the correct animation.
    • Originally posted by mholley519
      I would personally say that the potion would know how to notify the actor it should be flying, while the actor itself will have a component which receives the notification, and knows which changes need to be made on its actor. This way, several different types of actors can have different models/animations to be switched, while the potion code remains the same.


      Well, it depends on the game design. If the player was the only one who could collect and use potions of various types, it would be different. Remember that you're always trying to figure out how you can make changes with the least amount of effort. You reduce coupling between systems that need to change.

In my example, I would probably have the potion directly set those things on the player in its Potion component. In the component definition XML, you'd tune the appropriate model, animation set, etc., and they would be applied to the player. This would most likely be an attachment rather than a complete model swap. Removing a potion from the game just requires you to delete the potion component, and adding a new potion requires writing a new component. Everything is nice and self-contained. It also gives junior engineers a nice little sandbox to create all sorts of new pickups.

      If the design were to have potions collectible by everyone, my technique might change, but probably not. Again, it's really just an attachment that needs to animate as well as some AI and stat changes. I would probably still just do everything in the potion component.

      Now, if there was a concept of flying that multiple things would apply, then I'd create a Flyer component that did all of the modifications on initialization and the potion (and other stuff) would add that component dynamically at run-time.
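A possible sketch of that dynamic Flyer component (names invented; a real engine would use its own actor and component classes):

```cpp
#include <memory>
#include <string>
#include <utility>
#include <vector>

// Illustrative sketch: the Flyer component applies every fly-related change
// when it is initialized, so a potion (or anything else) can simply attach
// it at run-time. All names are invented.
struct SimpleActor;

struct Component {
    virtual ~Component() = default;
    virtual void Init(SimpleActor&) {}
};

struct SimpleActor {
    std::string model = "Boy";
    bool canFly = false;
    std::vector<std::unique_ptr<Component>> components;

    // Attaching a component initializes it against its new owner.
    void AddComponent(std::unique_ptr<Component> pComponent) {
        pComponent->Init(*this);
        components.push_back(std::move(pComponent));
    }
};

struct FlyerComponent : Component {
    void Init(SimpleActor& owner) override {
        owner.model = "Boy_With_Wings";
        owner.canFly = true;
    }
};
```

Anything that grants flight just does `actor.AddComponent(std::make_unique<FlyerComponent>())`; the fly logic lives in exactly one place.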

      What you want to avoid is the one component that applies all effects. Then you get into the dreaded switch/case statement, which you really don't want.

      -Rez

    • Should components only reside in actors and their systems only reside in the gamelogic's process manager even when something is only related to a view? Should components like this exist outside of the actor in scene node for example?


      Regarding the question of where things should live (view vs logic), I actually think it's best to get rid of those concepts, or at least draw a different line.

For my game, the difference between view and logic really comes down to the threading. All of my rendering happens on another thread. All of my actors and their components live on the logic thread, including the rendering components. You see, my renderable component doesn't actually render anything. All it does is manage the graphical information and deal with various graphical properties. It also knows how to build up a render command, which the rendering system asks for. This system lives on the logic thread as well and is just another ComponentSystem like I explained above. Every frame, it loops through all of the renderable components it knows about, asks for the render command, then builds a list of things to render. During my thread synchronization phase, the render command list swaps over to the render thread and is rendered next frame (my render thread is always rendering one frame behind).
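A single-threaded sketch of that hand-off (names invented; real code would wrap the swap in the thread synchronization phase):

```cpp
#include <string>
#include <utility>
#include <vector>

// Single-threaded illustration of the command-list hand-off. The render side
// always sees last frame's list while the logic side fills the next one.
struct RenderCommand { std::string what; };

class RenderCommandQueue {
    std::vector<RenderCommand> m_logicList;   // filled by the logic side
    std::vector<RenderCommand> m_renderList;  // consumed by the render side
public:
    // The render system submits commands gathered from renderable components.
    void Submit(RenderCommand cmd) { m_logicList.push_back(std::move(cmd)); }

    // At the sync point the buffers swap; rendering runs one frame behind.
    void SwapBuffers() {
        std::swap(m_logicList, m_renderList);
        m_logicList.clear();
    }

    const std::vector<RenderCommand>& GetRenderList() const { return m_renderList; }
};
```

Because the render side only ever reads its own list, an actor can be destroyed on the logic side without touching the frame in flight.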

      So in the strictest sense, the render thread is my view. You could also consider the render system that contains all the renderables to be part of the view as well. My renderable component is like a bridge to the view since the logic system pokes at it, as is my Scene interface.


      I'm also not fond of how i handle where a system needs multiple components. For example if physics needs a physics component and a position component. So when updating an entity it uses a call like this:
      g_gameLogic->GetEntity(id)->GetComponent(type);

If you're inside the physics system, you shouldn't have to do this at all. You should strive to already have the data you need. In my example above, note how the animation system has a list of all the animation components it cares about. This greatly reduces your set of things to care about.

      For example, in my physics system, I would register the system with the physics component. Then, in AddComponent(), I would store the physics component as well as the transform component for that actor, which might look something like this:

Source Code

void PhysicsSystem::AddComponent(EntityComponentPtr pComponent)
{
    if (pComponent.IsValid())
    {
        // cast the physics component
        PhysicsComponentPtr pPhysicsComp;
        pPhysicsComp.AssignCast(pComponent);

        // get and cast the transform component
        TransformComponentPtr pTransformComp;
        pTransformComp.AssignCast(pComponent->GetOwner()->GetComponent(TransformComponent::ID));
        if (!pTransformComp.IsValid())
        {
            _ERROR("No transform component for " + ToStr(pComponent->GetOwner()->GetId()));
            return;
        }

        // add to the list
        m_components.push_back(std::make_pair(pPhysicsComp, pTransformComp));
    }
}


      That way, you have the data you need right there and aren't going back to the logic system and the entity. Your physics system probably doesn't even have to know about your game logic at all.

      Of course, this is not always possible. I find that I end up using your pattern of digging through the game logic for a particular entity when I'm dealing with Lua/C++ glue. In my system, Lua entities do have access to the C++ entity, but it doesn't know anything about smart pointers and passing around naked pointers to smart-pointered objects is bad. So in those cases, I'll pass the entity id. This is common when trying to spin up a process from Lua to act on an entity. Even then, I'm only looking up the entity and pulling the components I need in the OnInit() function, then discarding the rest.


As far as where everything should be, either in the logic or the view layer, I am really unsure; it is kind of a deep question. Should the game logic understand that a player is in mid-throw of a grenade? I think so, personally. It should know that the player is currently throwing a grenade and that it is 50% of the way there, but how the view interprets that is entirely arbitrary. It could be an AI receiving the event, when the player is within its view cone, that he is ready to launch a grenade right at it, so the AI could react and dive away; a render system, however, could take that same info and use it to display the correct animation.

      Again, this depends on implementation and what you're after. Personally, I think the animation system is another one of those things that can live in both worlds. On The Sims 4, it actually does and we even split the team. MoTech (motion technology) is a team that deals with the view side of things. We also have 2 - 3 animation programmers on the gameplay team who deal with the gameplay (logic) side of things, and all the object engineers have to hook into that system in one way or another.

      For the grenade problem, you often represent those things as animations. What happens if the grenade animation is 10 seconds but we decide we want it to take 5 seconds to throw? Do we accelerate the animation? That might be fine for throwing grenades, but not for everything. What about making a meal?

      On The Sims, we tell a Sim to run an animation and it gets back to us when it's done. The length of an interaction is directly dependent on the length of the animations. They really have to be, or else it'll look really bad and the animators will come after us with pitchforks.

      My own personal engine works similarly, though since it's a 2D engine that uses sprites, I have direct control over the length of each frame of animation in the tuning definition file. My animation system lives 100% on the logic side. All it does is progress the currently running animation and switch to a new frame when necessary. The renderable asks the animation component for the current frame index when sending stuff off to the view. But it's still the same thing; I run an animation and wait for it to complete, usually inside of a Process.
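A minimal sketch of such a logic-side animation component, assuming per-frame durations come from a tuning file (the class name and interface here are illustrative guesses, not the engine's real API). It only advances timing; a renderable queries GetCurrentFrame() when building its render command:

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Lives entirely on the logic side; knows nothing about rendering.
class AnimationComponent {
public:
    explicit AnimationComponent(std::vector<float> frameDurations)
        : m_durations(std::move(frameDurations)) {}

    void Update(float dt) {
        if (IsComplete()) return;
        m_elapsed += dt;
        // Consume elapsed time, advancing one frame at a time.
        while (m_frame < m_durations.size() && m_elapsed >= m_durations[m_frame]) {
            m_elapsed -= m_durations[m_frame];
            ++m_frame;
        }
    }

    bool IsComplete() const { return m_frame >= m_durations.size(); }

    // The renderable calls this when sending stuff off to the view.
    std::size_t GetCurrentFrame() const {
        return IsComplete() ? m_durations.size() - 1 : m_frame;
    }

private:
    std::vector<float> m_durations;   // seconds per frame, from the tuning file
    std::size_t m_frame = 0;
    float m_elapsed = 0.0f;
};
```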

      Of course, all bets are off with looping animations.

      -Rez
    • Originally posted by mholley519
      The component architecture can really be applied anywhere; I have used it for actors, users (views), etc. It all depends on what you are doing. For instance, physics should be simulated in its own system class and interfaced through components to the respective rigid body, collision hull, and joint classes.


      I forgot to comment on this. Matt is completely correct here; the component architecture is just another pattern that can be applied to anything. Its purpose is to allow you to compose a final object from a set of pieces and treat that final object as a single thing. For example, we used a set of components to handle the player model in RatRace. She was actually composed of four separate meshes, each with their own texture. That's how we handled clothing changes.

      I've also used it in AI quite a bit where a character is given a number of considerations that it cares about. You might implement the flying AI that way, where you're adding a flying component to the AI. On The Sims 4, I have components called Autonomy Modifiers that allow you to change the AI in various ways.

      Components aren't just limited to actors. :)

      -Rez
      I ran into an issue with implementing a physics abstraction system because of the existing class hierarchies and the way I wanted my physics system to work. In Bullet, there is a btRigidBody and a btCollisionObject; you need at least one of them to map a collision shape into the world. If you want objects without simulation (i.e. trigger volumes), then you have to use btCollisionObject.

      I am not a master of abstraction techniques, and I ran into a ton of issues trying to directly map Bullet's hierarchy onto an abstracted one, due to conflicts between inheritance (virtual inheritance didn't solve this either). I decided that forcing it to work that way didn't help anything, so instead I made the physics objects themselves component based: they could have rigid bodies, collision objects, collision shapes, etc., each adding a piece behind the scenes to Bullet. I have no clue whether this sort of thing is a good solution to abstraction issues like that, but it worked for me :D.
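A rough sketch of the composition-over-inheritance idea the poster describes. To keep it self-contained, the Bullet types are stood in by placeholder structs (CollisionShapePart, RigidBodyPart); in a real engine these would wrap btCollisionShape/btRigidBody and register them with the Bullet world behind the scenes:

```cpp
#include <cassert>
#include <memory>

struct CollisionShapePart { float radius = 1.0f; };  // stand-in for a btCollisionShape wrapper
struct RigidBodyPart      { float mass = 0.0f; };    // stand-in for a btRigidBody wrapper

// Instead of mirroring Bullet's class hierarchy with inheritance,
// the engine-side physics object is composed of optional parts.
class PhysicsObject {
public:
    void AddShape(float radius) {
        m_shape = std::make_unique<CollisionShapePart>(CollisionShapePart{radius});
    }
    void AddRigidBody(float mass) {
        m_body = std::make_unique<RigidBodyPart>(RigidBodyPart{mass});
    }

    // A trigger volume has a shape but no rigid body (no simulation).
    bool IsTrigger() const { return m_shape != nullptr && m_body == nullptr; }
    bool IsSimulated() const { return m_body != nullptr; }

private:
    std::unique_ptr<CollisionShapePart> m_shape;
    std::unique_ptr<RigidBodyPart> m_body;
};
```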
      PC - Custom Built
      CPU: 3rd Gen. Intel i7 3770 3.4Ghz
      GPU: ATI Radeon HD 7959 3GB
      RAM: 16GB

      Laptop - Alienware M17x
      CPU: 3rd Gen. Intel i7 - Ivy Bridge
      GPU: NVIDIA GeForce GTX 660M - 2GB GDDR5
      RAM: 8GB Dual Channel DDR3 @ 1600mhz
      Wow, thanks Rez. Those are massive, in-depth responses! And exactly what I need to hear!

      I would register the system with the physics component. Then, in AddComponent(), I would store the physics component as well as the transform component for that actor

      I did have my systems act like this at some point and rewrote them away from it, for some long-since-forgotten reason... It helps to read suggestions and examples, as it is a little validation (even if I do have to refactor).

      A thought: a rare case (I can't even really think of an example) could be where you have two systems, each handling a different component but requiring the other's component as well. There would be no order in which you could instantiate them, because the other component wouldn't exist yet. I guess this also depends on how one raises the created events.

      You see, my renderable component doesn't actually render anything. All it does is manages the graphical information and deal with various graphical properties

      Mine is similar, except it does ask the animation component which frame in the sprite sheet to render; or maybe that happens through an animationFrameChanged event... but again, good to read.

      All this reminds me why I started thinking about where the animation should reside. I was thinking of a case where an actor dies in the game: an event is sent out and the view starts a 'dying' animation, but then the game logic cleans up that actor. Then I would have a problem where systems in the view are accessing components in the game logic and the actor is gone.

      You might implement the flying AI that way, where you're adding a flying component to the AI.

      I hadn't thought about extending component architecture in any other way, but this example has definitely enlightened me.

      I appreciate all the insight! I am learning a great deal from both the book and the forums! And I find that there really isn't enough time in the day to do all the coding I wish I could.
    • Originally posted by quin
      A thought: a rare case (I can't even really think of an example) could be where you have two systems, each handling a different component but requiring the other's component as well. There would be no order in which you could instantiate them, because the other component wouldn't exist yet. I guess this also depends on how one raises the created events.

      This shouldn't matter. You only notify the registered systems when the entity has been fully created and initialized.
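A small sketch of that rule, with purely illustrative names: the factory builds the whole entity first and only then fires the created notification, so no system ever sees a half-built entity:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <string>
#include <utility>
#include <vector>

// Toy entity: just a bag of component names for illustration.
struct Entity { std::vector<std::string> components; };

class EntityFactory {
public:
    using CreatedCallback = std::function<void(const Entity&)>;

    void RegisterSystem(CreatedCallback cb) { m_callbacks.push_back(std::move(cb)); }

    Entity Create(const std::vector<std::string>& componentNames) {
        Entity e;
        for (const auto& name : componentNames)
            e.components.push_back(name);   // build the whole entity first...
        for (const auto& cb : m_callbacks)
            cb(e);                          // ...then notify the systems
        return e;
    }

private:
    std::vector<CreatedCallback> m_callbacks;
};
```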


      Mine is similar, except it does ask the animation component which frame in the sprite sheet to render; or maybe that happens through an animationFrameChanged event... but again, good to read.

      Mine does the same thing. I have a specific type of renderable that knows to ask the animation system for the frame index. It then applies the correct frame texture to the render command.


      All this reminds me why I started thinking about where the animation should reside. I was thinking of a case where an actor dies in the game: an event is sent out and the view starts a 'dying' animation, but then the game logic cleans up that actor. Then I would have a problem where systems in the view are accessing components in the game logic and the actor is gone.

      When an entity "dies", it doesn't actually get destroyed. It's still perfectly alive; it's just in a different state. Some games will swap entities. Ultima VII did this: when you killed an entity, it was replaced with a container object and the entity itself was teleported to a special place.

      What you really want is a state machine. The entity isn't destroyed until there's truly nothing left.
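A minimal sketch of that state machine, with illustrative names: the logic keeps the entity alive through a Dying state while the view plays the death animation, and only frees it once the view reports completion:

```cpp
#include <cassert>

enum class LifeState { Alive, Dying, Dead };

class EntityLifecycle {
public:
    // Game logic decides the entity has been killed.
    void Kill() {
        if (m_state == LifeState::Alive)
            m_state = LifeState::Dying;   // view may now start the death animation
    }

    // Called (e.g. via an event) when the view's death animation finishes.
    void OnDeathAnimationComplete() {
        if (m_state == LifeState::Dying)
            m_state = LifeState::Dead;    // now it's safe to free the components
    }

    bool CanBeDestroyed() const { return m_state == LifeState::Dead; }
    LifeState GetState() const { return m_state; }

private:
    LifeState m_state = LifeState::Alive;
};
```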


      I appreciate all the insight! I am learning a great deal from both the book and the forums! And I find that there really isn't enough time in the day to do all the coding I wish I could.

      No prob. :) And remember, you should be programming a hell of a lot more than you are reading about programming. In my class, I tell them that for every 3-hour lecture, they need to do at LEAST 10 hours of programming. You should do at least 10 hours of programming for each chapter in the book. There's no other way to really learn it. The only reason I'm an expert on actor/component systems is because I've written four or five in my career and used one on most games I've worked on.

      -Rez
      Well, this thread is a great insight into how things should work in a game architecture.
      I see I did many things wrong in the past (like having every actor know how to draw itself).

      Additionally, I printed this part and hung it on my wall for motivation:
      And remember, you should be programming a hell of a lot more than you are reading about programming. In my class, I tell them that for every 3-hour lecture, they need to do at LEAST 10 hours of programming. You should do at least 10 hours of programming for each chapter in the book. There's no other way to really learn it. The only reason I'm an expert on actor/component systems is because I've written four or five in my career and used one on most games I've worked on.

      Because I tend to plan and read more than I program.

      Greetings

    • Because I tend to plan and read more than I program.

      That's the #1 mistake I see people make.

      -Rez
    • So what do you think about planning the code?
      How much time should you spend for planning the class hierarchy?
      And how much should you plan? Do you need to do the complete planning before starting to write code?
      Because that's something I read very often: to have the complete planning done before writing the first line of code.
      But my experience is that problems come up while programming that you weren't aware of when planning. So you have to re-plan some parts after you start writing code.

      Greetings
    • Originally posted by M0rgenstern
      So what do you think about planning the code?
      How much time should you spend for planning the class hierarchy?
      And how much should you plan? Do you need to do the complete planning before starting to write code?
      Because that's something I read very often: to have the complete planning done before writing the first line of code.
      But my experience is that problems come up while programming that you weren't aware of when planning. So you have to re-plan some parts after you start writing code.

      Greetings


      This is a hard question to answer because it's different for everyone and for every feature. If you told me to write another actor/component system, I wouldn't write out any diagrams, I would just start typing code because I already know how I'd want to solve it. Most features I deal with on The Sims these days are similar; I just start writing code.

      In the words of Helmuth von Moltke, "No plan survives contact with the enemy." Now, that doesn't mean that plans aren't good, but it does mean that plans need to be agile and able to change very easily. The same is true for code.

      In my opinion, there are three reasons to plan your code:

      1) So you can be sure you understand it.
      2) So you can iterate on your architecture decisions.
      3) So other people can understand your architecture.

      In order to ensure that you truly understand the system you're about to build, you should write out some class diagrams or flow charts or whatever your favorite method of planning is so that you can be sure you truly understand the problem. This is important for large systems, like rendering or AI. I wrote a very detailed sequence diagram for how rendering and logic processing works in my engine.

      This brings me to the second point, because the reality is that I drew about 20 of those diagrams of varying details until I finally settled on one idea, which is what I've implemented and has worked for the last four years or so. It's a hell of a lot easier to iterate on paper than it is to iterate in code. It's important to use the right tool for the job. For example, when I wrote the scoring formulas for The Sims 4 AI, I used an online graphing calculator called Desmos:
      desmos.com/calculator

      (As a slight aside, this tool is amazing. I use it for pretty much everything and it allows me to iterate on scoring functions really fast. Excel is also a great tool.)

      Honestly, I think point #2 is the most important. The real value of a plan is to be able to iterate on your core idea very quickly without having to write code. That's the whole idea of this sort of thing: take the part of your design that is the most risky or the least known and get it up and running quickly so you can test out your ideas.

      Oh, and no matter what you choose, you will likely end up doing something different. The AI code for The Sims 4 has been completely rewritten half a dozen times. The architecture has stayed exactly the same though, which means I only had to rewrite the scoring code. Good architecture is worth its weight in gold.

      My third point is really only important if you're working with other people or want some kind of history. This is more of a technical design doc, which will become out of date very quickly (usually before the system is finished).

      -Rez