Before the game begins you can choose to be a chaotic personality or a lawful personality.
If you choose the former, you can do whatever you like regardless of diplomacy: declare wars at the drop of a hat, reject tech trades you don't like even though a computer AI in your position would accept them. Basically, what GalCiv 2 is now. On the negative side, because the computer AI knows you are chaotic, your own diplomacy techs don't influence their behavior when you are dealing with them. Their techs, however, would be fully functional when dealing with 'lawful' personalities.
I mean, fair's fair, right? If the computer AI doesn't gain from diplomacy techs against you, you shouldn't gain the advantages of those techs against them either.
A lawful personality would take away choices from the human player. If the AI offers a trade and it's fair, you can't decline. If you want to go to war, you might not be allowed to if it's too 'crazy'.
Your solution more or less takes what I was saying to its logical conclusion in a way that permits the player to retain a significant degree of choice over the kind of game he plays, but it feels unsatisfying for a game that prides itself so much on its AI. After thinking about it for a while, what I'd really like to see is something that deals with the second problem:
2) Human player does not take into account diplomacy when accepting deals.
I'm not too worried about #2.
It's easy not to be, because it doesn't seem to be game-breaking, but it's the big reason that tech trading has been such an issue, to the point that there's a No Tech Trade button in 1.1. If you fix this problem and force the human player to accept deals, you might start seeing what it's like to be the AI facing a human. Imagine that the AI had high natural diplomacy while your racial bonuses were focused on population, morale, and econ. It focuses on diplomacy tech, gets Diplomatic Translators, and won't trade them away. All of a sudden it starts offering you the kind of trades that you used to offer it. Suddenly you're giving Photon Torpedoes and cash for Habitat Improvement, and the AI is siphoning your money and technology away. This is a messed-up situation, but I didn't quite nail the solution in my earlier post; I think I'm closer now.
The problem is with the AI, and I think it's not that tough of a fix. Time-intensive for the developers? Maybe. But definitely doable, maybe even for 1.2. The problem is that the AI evaluates value, but not need. If I want an iPod, I value that iPod at $300, or whatever the price is these days. There's a market price for goods and services, and IF I WANT SOMETHING, I'll pay it. I don't want an iPod, so even though I know what it's worth, I won't necessarily pay $300 for it just because the offer comes along.
The AI needs a secondary layer of value built in so that it can evaluate whether it wants/needs something. The very first check the AI should make before accepting a deal is whether it wants/needs the thing it's getting; if yes, proceed. The second check should be a comparison check of whether the thing it's giving up is something it needs more than what it's getting. The final check should be whether the deal is for fair value, which is the only check it currently does. Note that those last two could be combined, with the AI discounting or marking up value based on need, but I think the flat evaluation is better; just because I need a car to get to work doesn't mean I'd trade my home, which I need to live in, for 5 Aston Martin Vanquishes. I'd rather work for the money, then buy the car myself, then have BOTH things I need.
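To pin down the order of those checks, here's a rough sketch in Python-ish pseudocode. Every name in it (evaluate_offer, need_score, market_value) is something I made up for illustration, not anything from the actual game code:

    # Minimal sketch of the three checks, in order. All names here are
    # invented for illustration; this is NOT Stardock's actual code.
    def evaluate_offer(ai, offer):
        # Check 1: do we want/need what we're getting at all?
        if need_score(ai, offer.received) <= 0:
            return False
        # Check 2: do we need what we're giving up more than what we'd get?
        if need_score(ai, offer.given) > need_score(ai, offer.received):
            return False   # don't trade the house for the Vanquishes
        # Check 3: the flat fair-value comparison the AI already does today.
        return market_value(offer.received) >= market_value(offer.given)

The need checks run first on purpose: a deal can fail on want/need before fair value ever gets consulted, just like the house-for-Vanquishes example.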
On paper, that sounds like a tall order, but the AI already makes the hard judgments underlying need/want all the time. The AI evaluates what it wants to research in a given situation, whom it wants to attack, what it wants to build, what boosts it wants from trade goods, etc. It shouldn't be terribly difficult (I imagine, but what the hell do I know) to feed the values the AI checks to decide those issues into the logic that decides what it's willing to trade for. There would need to be a few extra values added, and some new decisions to be made (for example, what percentage of its money is the AI willing to spend to buy new tech?), but given the quality of the AI that Stardock has put out thus far, I can't imagine that this sort of thing is beyond their abilities.
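To show what I mean by feeding those values in, here's another made-up sketch in the same style: the AI's existing research and build priorities double as need scores, and that percentage-of-money question becomes a single tunable number. All names and numbers are invented:

    # Hypothetical sketch: derive "need" from numbers the AI already
    # computes for research and build decisions, plus one new tunable.
    TECH_BUDGET_FRACTION = 0.25  # e.g., spend at most 25% of cash on bought tech

    def need_score(ai, item):
        if item.kind == "tech":
            # Reuse the weight the research planner already assigns this tech.
            return ai.research_priority(item)
        if item.kind == "cash":
            # Cash is needed more urgently when the treasury is low.
            return 1.0 / max(ai.treasury, 1)
        # Improvements, ships, etc. could map onto build priorities the same way.
        return ai.build_priority(item)

    def within_tech_budget(ai, cash_cost):
        # One of the "new decisions" mentioned above: how much of its money
        # the AI is willing to spend buying tech outright.
        return cash_cost <= ai.treasury * TECH_BUDGET_FRACTION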
As far as AI wars of aggression go, I'd like to see more variance and backstabbing based on alignment, with less emphasis by the AI on relationship status. If I'm playing as evil, I'm clearly not to be trusted, so our "historic friendship" shouldn't keep the AIs from watching their backs. The whole point of the alignment system, in my mind, should not just be to give interesting bonuses, but to give the AI some clues as to who can be trusted and who can't, as well as to give the AI some "moral" flexibility.
Edit: in fact, going back to Richrf's notion regarding players choosing their level of freedom: choosing alignment should do exactly that. Good folk should never be able to attack people who are not already their enemies or with whom they have low relations. Neutral folk shouldn't be able to attack people whose relations with them are better than neutral. Evil folk should be able to attack anyone (except maybe allies). The counterpoint to this sacrifice of freedom would be that Good civs would get boosts to diplomacy and even bigger boosts to relations with other civs than they do now. The extent of the bonuses would, naturally, have to be determined by playtesting.
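For the sake of concreteness, the gating might look something like this; same caveat as my earlier sketches, the names and thresholds are invented and would be tuned in exactly that playtesting:

    # Sketch of alignment-gated war declarations. The thresholds are
    # placeholders for the playtesting mentioned above.
    LOW_RELATIONS = -25   # below this, even Good civs may attack
    NEUTRAL = 0

    def can_declare_war(alignment, target):
        if alignment == "good":
            # Only existing enemies, or civs at low relations, are fair game.
            return target.is_enemy or target.relation < LOW_RELATIONS
        if alignment == "neutral":
            # No attacking anyone whose relations are better than neutral.
            return target.relation <= NEUTRAL
        if alignment == "evil":
            # Anyone goes, except (maybe) allies.
            return not target.is_ally
        return False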