Optimal resource usage for USG firewalls using GeoIP and policy control?

Zyxel_USG_User  Posts: 42  Freshman Member

I have defined many IPv4 entities based on subnets, ranges, and geography (GeoIP).

Some of these entities are gathered into address groups, while others are used individually.

Policy control rules are defined for some individual entities and for the groups, and they mostly contain the same settings.

Are there limitations on building and processing groups in which lots of subnets, ranges, and geographies come together?

From an ease-of-use and management perspective, having tens of individual rules that can be activated or deactivated seems to give a better overview than using groups for the same purpose. Browsing through groups can be tricky, especially when one only sees a bunch of short names.

My question to you is: which approach is easier on resources, e.g. memory, CPU, processing speed, etc.?

1. Should I focus on gathering the subnets, ranges, and geographies into groups, and therefore have fewer rules in policy control?

2. Or does it not matter whether I build policies for groups, ranges, and countries individually?

3. Or is it better to have fewer individual rules (around 50), group the subnets, ranges, and geographies into a few groups, and then have only a few rules for these groups?

Are there any limitations one way or another?

All Replies

  • PeterUK  Posts: 3,605  Guru Member

    I would say it's whatever works for you. You're only limited by how many rules can be added, and the processing resources should work out about the same: if you have 10 subnets in one group and bind that to one policy control rule, versus 10 subnets with 10 policy control rules, either way it's much the same processing. At least that's what I think; how you would test such a thing is another matter. See the rough sketch below.
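
    To make that concrete, here is a minimal Python sketch of first-match lookup. It is not Zyxel's actual engine, and the subnets and test address are invented; it only shows that "one rule bound to a group of 10 subnets" and "10 rules with one subnet each" boil down to the same per-packet matching work.

    ```python
    # Minimal sketch (not Zyxel's actual engine): under first-match evaluation,
    # "1 rule bound to a group of 10 subnets" and "10 rules with 1 subnet each"
    # both reduce to testing the packet against the same subnets until one hits.
    # The subnets and the test address below are invented for illustration.
    import ipaddress

    SUBNETS = [ipaddress.ip_network(f"10.0.{i}.0/24") for i in range(10)]

    def first_match(src, subnets):
        """Return (matched?, number of subnet checks performed)."""
        for checks, net in enumerate(subnets, start=1):
            if src in net:          # hit: the rest of the list is skipped
                return True, checks
        return False, len(subnets)

    pkt = ipaddress.ip_address("10.0.7.25")

    # Layout A: one policy rule whose source is a group holding all 10 subnets.
    # Layout B: ten policy rules, one subnet each, evaluated top-down.
    # Either way the engine walks the same subnet list for this packet:
    print(first_match(pkt, SUBNETS))   # -> (True, 8) in both layouts
    ```

    Either way the per-packet matching work is roughly the same; what changes is mainly how many rule entries you have to create and manage, and that is bounded by the maximum number of rules and address objects your model allows.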

  • mMontana  Posts: 1,421  Guru Member

    Quoting your question:

    "which approach is easier on resources, e.g. memory, CPU, processing speed, etc.?"

    AFAIK the answer is "a short list of complex rules in an optimized order", but it's not an "all-winning" approach.

    Processing any rule is a time-consuming task, dumbly repeated for almost every single packet stream. When a rule is hit, the remaining list is skipped, so the "top streams" are processed faster if they hit the top rules (see the sketch just below).
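
    To illustrate the ordering effect with a toy example: the rule names and traffic shares below are made up, and this is not how the USG actually accounts for its work, but it shows why the busiest rules belong near the top.

    ```python
    # Minimal sketch: with first-match evaluation, the average number of rules
    # checked per stream depends on where the busy rules sit in the list.
    # The rule names and traffic shares are invented for illustration.

    TRAFFIC_SHARE = {            # fraction of streams each rule ends up handling
        "lan-to-wan":   0.70,
        "vpn":          0.20,
        "geo-block":    0.09,
        "default-deny": 0.01,
    }

    def average_rules_checked(rule_order):
        """Expected number of rules evaluated before the first hit."""
        return sum(position * TRAFFIC_SHARE[rule]
                   for position, rule in enumerate(rule_order, start=1))

    busy_first = ["lan-to-wan", "vpn", "geo-block", "default-deny"]
    busy_last  = ["default-deny", "geo-block", "vpn", "lan-to-wan"]

    print(average_rules_checked(busy_first))  # ~1.41 rules per stream
    print(average_rules_checked(busy_last))   # ~3.59 rules per stream
    ```

    The same four rules cost more than twice as many checks per stream on average when the busiest rule sits at the bottom, which is exactly why "top streams hitting the top rules" pays off.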

    Refining the firewall rule list is a time-consuming task, and being really, really thorough in:

    - writing down the goals
    - writing down the changes (with a forecast of the result)
    - letting the rules sit for a good amount of time (sometimes 1 day is enough, most of the time it is not)
    - tracking the performance and the system resources used (the tracking has to be in place before the process!)
    - evaluating the results
    - understanding the causes (both for successes and failures)
    - restarting from "writing down the goals"

    is painstaking and will lead to issues… but with time, mistakes, experience, and a rewiring of your thinking, it will usually lead to optimizations.

    Remember the mantra BBB (backup before begin): keep a backup outside the firewall, with notes and changelogs. Having a "less optimized but working" configuration is always a godsend when in trouble.

  • Zyxel_USG_User  Posts: 42  Freshman Member

    Brilliant thoughts, thank you for answering. I do not have a resource issue currently; it is sometimes simply, as in this case, the desire to understand more about how things are processed.

    If "condition met ⇒ discard everything after the matching rule", then I guess it is largely irrelevant what is placed where. Knowing the architecture deeply could only help, at least theoretically, in this quest for optimisation.

    As for the many individual rules: they make it easier to deactivate/activate regions when traveling to them, and they give the better overview.

    Having the entities bound into groups, resulting in fewer rules, makes the setup more prone to error: taking the wrong entity out of a group, then forgetting to add it back after the travel.

    I guess when unknown variables may or may not influence the outcome of resource optimisation, proximity to human error comes into play; basically, usability/handling wins over resource optimisation, especially when there is no resource problem.