Python for Power Systems

A blog for power systems engineers to learn Python.

Turn Back Time Using Python and PSSE

Roads? Where we’re going, we don’t need roads.

Doc Brown, Back to the Future Part II

I have a recurring problem: I spend time carefully adding a new load or generator to my case, only to have it blow up when I try to solve. It turns out I wasn’t so careful after all; I didn’t check that the voltage levels were similar, or my new bus’ voltage angle was left at the default 0 degrees while the connecting bus was at 15 degrees.

There are a lot of reasons why adding new equipment to an existing PSSE case or creating a new switching configuration might fail. If we are lucky, our script stops and we can fix the problem, though I have seen some scripts that keep marching on, producing nonsense answers right until the end.

I’d like to take you through a new model for altering a PSSE saved case in your Python scripts that will allow you to literally* turn back time when the case blows up and continue with the program.

  • *disclaimer: not literally, I don’t have a flux capacitor.

Ok give me two examples so I know what you are talking about

I have a list of 30 network augmentations that need to be made. Each is a proposed new generating asset or replacement of existing plant. I want to try to add all of them to my saved case, and write in a log file the ones that failed. I’ll manually apply those failed ones later - or reject their connection application (hehe).

Another example is writing my own QV curve generator: I want to add a fictitious generating plant and play with the Q output and voltage set point to measure the system response. Afterwards I want to remove the fictitious plant without having completely ruined the integrity of my case forever. It is well known that PSSE can be a difficult master to re-tune after we do something stupid like add a synchronous condenser, set the voltage set point to 1.1 per unit and re-solve allowing tap stepping.

So how would I benefit by turning back time in those cases?

OK, with our 30 augmentations, say there was one dud with subtly insane data. When we add that dud, PSSE blows up! Start again? No, we don’t need to: just turn back time to the last good augmentation and continue with the rest.

The QV curve benefits are quite obvious, and they translate to many activities outside of QV curve generation. Imagine if it were an everyday, simple task to turn back time and get back your last known good solved case. I’d pay good money for that, and I know you would too. Instead, read on and you can write your own time machine for free.

Some theory about database transactions

This entire method of turning back time to the last known good system state was inspired by database theory of all things. Databases have something called a “transaction” which essentially means:

Either everything works according to plan, or we rollback and nothing gets done at all

There is no half-way: either it all works, or we use a giant undo button that removes the series of steps that led to failure. Of course, PSSE has no undo button, but together you and I will build one soon enough. Keep reading.

This translates to the PSSE world in the following way:

  • We group a series of instructions like the ones required to add a new generator into a single function.
  • That function is run inside a transaction
  • We attempt to solve at the end or during the transaction
  • We write a check to see if the transaction was successful (e.g. no blow up, voltages healthy etc.)
  • If successful we continue (and laugh quietly at our success)
  • If not successful, we log our failure, rollback our changes and move on (still laughing quietly)

Enough, show me the code

Ok, here it is. I have called the function transaction, but you can call it something fancy like de_lorean or time_machine.
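Here is a minimal sketch of the idea: snapshot the case to a throwaway .sav file, run the function, check the solution, and reload the snapshot if anything goes wrong. The snapshot file name and the solved-case check are assumptions you should adapt to your own cases.

import psspy

def transaction(func, snapshot='last_known_good.sav'):
    """Run func inside a 'transaction': snapshot the case first, roll back on failure."""
    psspy.save(snapshot)               # remember the last known good state
    try:
        func()
        psspy.fnsl()                   # attempt to solve
        if psspy.solved() != 0:        # non-zero means the case did not converge
            raise RuntimeError('case did not solve')
    except Exception:
        psspy.case(snapshot)           # blew up? turn back time to the snapshot
        return False
    return True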

Here is a short example in the wild

The example is quite minimal. We create one function which adds a generator, and we expect that one to succeed. The second function changes the swing bus to a type 2, and we expect that one to fail.
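A sketch of how that usage might hang together (the psspy calls inside each function are left as comments, since the details depend on your case):

def add_generator():
    # add the new plant and machine data here with the usual psspy calls
    # (we expect this one to succeed)
    pass

def break_the_swing_bus():
    # deliberately change the swing bus to a type 2 bus here
    # (we expect this one to fail and be rolled back)
    pass

for step in (add_generator, break_the_swing_bus):
    if not transaction(step):
        print 'rolled back %s' % step.__name__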

Have a play around with the example as a starting point and see what else you can get the transaction to do.

Designing an Easier PSSE Subsystem Data Retrieval API

I want to take you through a small function we have written that combines the entire PSSE subsystem data retrieval API into a single, easier to use function.

What does the original subsystem data retrieval API look like?

Have you ever looked at the PSSE subsystem data retrieval API (Chapter 8 in the PSSE API guide)? With it you can get information about branches and buses (and other elements like machines) for an entire subsystem. So how would we go about getting a list of all of the bus numbers in subsystem 2?

ierr, busnumbers = psspy.abusint(sid=2, string="NUMBER")

Things become tricky when we want the bus names and the bus numbers though.

ierr, busnumbers = psspy.abusint(sid=2, string="NUMBER")
ierr, busnames = psspy.abuschar(sid=2, string="NAME")

We had to know that the NAME attribute uses a different API call from the NUMBER attribute. If we wanted to get the bus per unit voltage we would need another API call again, and if we wanted the total in-service fixed bus shunt in MW and Mvar a fourth API call would be required.

four function calls is three too many
ierr, busnumbers = psspy.abusint(sid=2, string="NUMBER")
ierr, busnames = psspy.abuschar(sid=2, string="NAME")
ierr, busvoltages = psspy.abusreal(sid=2, string="PU")
ierr, bus_shunts = psspy.abuscplx(sid=2, string="SHUNTACT")

Each of the return values from the API is a nested list. If you wanted to get the name and pu voltage for bus number 340:

getting name and pu voltage for bus number 340
bus_index = busnumbers[0].index(340)
voltage = busvoltages[0][bus_index]
name = busnames[0][bus_index]

The resulting code can be extremely difficult to read, and quite verbose.

A wrapper around the old API

Using the new subsystem_info function is easy. Let’s get the bus numbers, names, pu voltages and actual shunt values for subsystem id 2:

>>> businfo = subsystem_info('bus', ['NUMBER', 'NAME', 'PU', 'SHUNTACT'], sid=2)
>>> print businfo
[(205, 'CATDOG', 1.01, complex(0.4, 0)),
 (203, 'CATDOG2', 0.99, complex(0, 0)),
 ... ]

All of the information we were looking for is organised neatly into rows, rather than separate columns. Here is how we made that function.

How does this work?

The new function relies on some helpful design choices in the original PSSE subsystem data retrieval API.

Each of the functions is named using a regular pattern:

abusint,
abuscplx,
amachint,
aloadint

Each name is the prefix a, followed by the element type and then the API data type. For example, a + bus + int gives abusint.

There is a lookup function called abustypes (we’ll call it types) which will return a character string representing each of the API data types. For example:

>>> psspy.abustypes("NUMBER")
"I"

We ask the types function about each of the attributes the user has requested. So a query like ["NUMBER", "NAME", "PU"] might return ["I", "C", "R"], being int, char and real respectively.

Use a dictionary to store functions

Ok, so we can find a character "I", "R", "C" that represents the API type. Translating that character into the correct retrieval function to use is the clever part.

There is a Python dictionary to look up the corresponding API call for the attribute requested. So asking for "NUMBER" which returns "I" from the types function will retrieve the psspy.abusint function from the dictionary. Using a dictionary to look up a function like this is called the ‘dispatch table pattern’ (example 4.16 in the Python Cookbook if you have a copy)
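A minimal sketch of such a dispatch table for the bus family of calls (the 'X' key for complex attributes is an assumption; use whatever character abustypes reports in your PSSE version):

import psspy

# the character reported by the types function picks the matching retrieval call
bus_api_calls = {
    'I': psspy.abusint,
    'R': psspy.abusreal,
    'C': psspy.abuschar,
    'X': psspy.abuscplx,   # assumed key for complex attributes
}

retrieve = bus_api_calls['I']                     # "NUMBER" is an integer attribute
ierr, values = retrieve(sid=2, string='NUMBER')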

Grouping related calls to the API together

The difficult part is grouping and returning the API calls in rows and in the order they were requested. The itertools groupby function is used to group related API calls together so if we requested ["NUMBER", "TYPE", "NAME"] we might get ["I", "I", "C"] from abustypes.

The groupby will group the two consecutive “I” API calls together so we can make one function call:

abusint(string=["NUMBER", "TYPE"])

instead of two function calls:

abusint(string="NUMBER") # and
abusint(string="TYPE")
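Here is that grouping step on its own, as a small sketch:

from itertools import groupby

attributes = ['NUMBER', 'TYPE', 'NAME']
api_types = ['I', 'I', 'C']                # as reported by abustypes

for api_type, group in groupby(zip(api_types, attributes), key=lambda pair: pair[0]):
    strings = [attribute for _, attribute in group]
    print api_type, strings
# I ['NUMBER', 'TYPE']   -> one abusint call
# C ['NAME']             -> one abuschar call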

Transpose columns to rows

Finally, we use the built-in zip function to transpose a list of columns into a list of rows:

>>> zip(*[[1,2,3], [4,5,6]])
[(1,4), (2, 5), (3,6)]
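Putting the dispatch table, groupby and zip together, a condensed sketch of the wrapper might look like this (only the bus family is shown; the 'X' complex key and the (ierr, types) return from abustypes are assumptions to check against your PSSE version):

import itertools
import psspy

API_CALLS = {
    'bus': {'I': psspy.abusint, 'R': psspy.abusreal,
            'C': psspy.abuschar, 'X': psspy.abuscplx},
}

def subsystem_info(element, attributes, sid=-1):
    """Return one row per element containing the requested attributes, in order."""
    ierr, api_types = psspy.abustypes(attributes)
    columns = []
    for api_type, group in itertools.groupby(zip(api_types, attributes),
                                             key=lambda pair: pair[0]):
        strings = [attribute for _, attribute in group]
        ierr, values = API_CALLS[element][api_type](sid=sid, string=strings)
        columns.extend(values)        # one column of data per attribute
    return zip(*columns)              # transpose the columns into rows

A full version would also carry dispatch tables for machines, loads, branches and the other element types.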

Run PSSE From Python and Not the Other Way Around

What the Slashie means is you consider me the best actor slash model and not the other way around.

Fabio, Zoolander

How do you run your Python scripts on PSSE? Do you open the PSSE program and run from the "Run Program Automation File..." menu? I know some of my colleagues have created macro buttons that sit on their customised toolbar. The macro button is linked to a Python file, and when you click the button that Python file runs.

Both of these exemplify the case where PSSE is run first, and then your Python script is executed. PSSE is the boss and your Python script is the worker that is told when to run.

I’d like to take you through another way to run your Python scripts. The examples I’ll show you will enable your Python script to be the boss and for it to tell PSSE to run.

Why would I care who is the boss?

Many times, PSSE is just one small cog in your overall program. You need it to do all the heavy computations for your latest energy congestion forecast, but you still want to send the results by email to your boss in Singapore, or publish your results on the interwebs (yes, you can do that; I’ll explain how in a later post). Then consider testing, debugging, or calling your script from another program.

There are lots of cases where it absolutely makes sense to have Python firmly seated on the driver’s side with two hands on the steering wheel and its foot pushing the accelerator through the floor. PSSE is a great tool and running your programs from inside it is fine for some, but sometimes you need to let your Python scripts run free.

Show me the code

Ok, you asked and I’m not one to hold out on you. Here is some code that is suggested in the PSSE manual to get an External Interpreter Environment. I’ll take you through what it does, so you can see through the magic. I will suggest methods that will make including this in your own work a piece of cake.
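A sketch along those lines (the PSSBIN path assumes a default PSSE 32 install; adjust it for your version):

import os
import sys

PSSBIN = r'C:\Program Files\PTI\PSSE32\PSSBIN'   # where the PSSE libraries live
sys.path.append(PSSBIN)
os.environ['PATH'] = PSSBIN + ';' + os.environ['PATH']
os.chdir(r'c:\work_dir')     # the manual's snippet changes directory too; see the warning below

import psspy
import redirect

redirect.psse2py()           # send PSSE's reports to the Python console
psspy.psseinit(10000)        # initialise PSSE with a bus limit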

First, Python is its own language, complete with a full library of useful tools. The only way Python knows how to find those useful tools is to look in every single directory listed in its path variable. Python does not know about PSSE at all. You need to tell it where to look to find the PSSE library files (psspy, redirect et al.).

The first parts of the program, beginning with sys.path.append and os.environ['PATH'], add to the Python and system path variables. Only now does Python know where to look to find psspy. If you don’t believe me, try running the line import psspy from your Python shell (not inside PSSE). You should get an ImportError. Now tell Python where it can find psspy using the path lines from the example above and try import psspy again. No error!

There are a couple of other important lines in the code before we are done; the rest are not essential.

If you want the output that PSSE would normally print while it is running to show up on the command line for your Python script, use this:

import redirect
redirect.psse2py()

Those lines are a matter of taste; your program won’t break without them.

import psspy

This was the whole point of the exercise: to get the psspy library. That is where the interaction with PSSE occurs. So here is the minimum that you need:
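As a sketch of that minimum, assuming the same PSSBIN path as above:

import os
import sys

PSSBIN = r'C:\Program Files\PTI\PSSE32\PSSBIN'   # adjust for your install
sys.path.append(PSSBIN)
os.environ['PATH'] = PSSBIN + ';' + os.environ['PATH']

import psspy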

A word of warning about os.chdir

The best place for these commands is near the top of your Python script file, or in another file in the same directory as your Python script which you import. No one expects a library they are importing to change the working directory, especially not to c:\work_dir, so don’t use os.chdir.

Putting it all together

Here is a basic script that will open up the saved case of your choice and run a fully coupled Newton-Raphson iterative solve. It is just a starting-point skeleton file to give you an idea of how all of this fits together:
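A sketch of that skeleton, assuming the same path setup as before and the savnw.sav example case that ships with PSSE:

import os
import sys

PSSBIN = r'C:\Program Files\PTI\PSSE32\PSSBIN'   # adjust for your install
sys.path.append(PSSBIN)
os.environ['PATH'] = PSSBIN + ';' + os.environ['PATH']

import psspy
import redirect

redirect.psse2py()             # PSSE output appears in this console
psspy.psseinit(10000)          # initialise PSSE
psspy.case('savnw.sav')        # open the saved case of your choice
psspy.fnsl()                   # fully coupled Newton-Raphson solve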

Comments

Very helpful! thank you

Janath

this is brilliant!

KristjanP

Learn Python for PSSE

The whit team have developed a new training course specifically designed for power systems engineers to get a better handle on the Python language.
During the two or three day course you will learn advanced Python usage, with examples and lessons featuring the PSSE package.
We run a maximum of 5 courses a year. Most events are held in Melbourne, Sydney and Brisbane.
Check out our dedicated training site for Python PSSE training.

Update:
For a brief introduction to getting started with Python for PSSE, check out our post on how to run PSSE from Python and not the other way around. Check out our other PSSE and Python blog posts for inside information, tips and secrets that have never before been published.

whit now hosts a Python for PSSE Question and Answer forum. It is free and does not require you to sign up to ask, read or answer questions. We welcome you to come along and have a look around.

Patching Ancestor Revisions in Mercurial

Yesterday a bug was discovered in one of our production sites. The production site runs many Mercurial revisions behind the current preproduction head. We needed a way to fix the bug for the production site without requiring it to update any further than that revision.

This post explains how we accomplished this task, which for some is a daily routine.

Mercurial graphlog extension

When running Mercurial from a terminal, the bundled graphlog extension is our favourite way to see an ASCII drawing of our commit history. It really is a great extension to the default log command.

First, we updated to the tagged production head. Second, we applied the bug fix (and in the process created a new head and branch). Third, we merged with the next descendant from the old production head. This was us merging back into the ‘mainline’.

At this point you will still be left with two heads. To remove that final head, we merged our changes with the master branch tip.

Now to get that bug fix onto the production server, we need to update the code (hg up) to the revision where we merged back into the ‘mainline’. This revision contains only the bug fix changes.

Letter Pressed Business Cards Have Arrived

The whit. business cards were designed by our user experience lead Hima.

The letter press process required three passes over the design: one for each of the two colours and one for the blind stamp.

Blind stamping is where the letter press plate has no ink and simply makes an indentation in the paper.

a letter pressed whit. business card

Menu With a Difference

ngenworks’ menu runs on an increasing angle throughout their site.

This approach works well for sites that do not have a secondary menu, and is an example of how to incorporate angled corporate branding into a design.

The site uses the ff-tisa-web-pro font, which is a popular serif font.

BackType Makes Sense of Billions of Tweets

BackType (edit: it seems they have since pivoted) uses a good visual hierarchy to draw attention to the important data.

A large row of numbers at the top provides important, easy to digest stats like number of followers.

Each of the charts, while useful in a web browser, can only show so much information. The BackType team has included a link to download the data for every chart, so customers can post-process it using any tool they want.

Finally, making decisions when faced with a lot of information is difficult, so BackType have included some ‘recommendations’ at the bottom of the screen based on their analysis of the data.

Add a Foreign Key to forms.Form

How do you get that standard ForeignKey select box with the “+” add another icon next to it on your form? This post will take you through the steps that I took to get the ForeignKey working like it does in the Django admin.

How does the Django admin create a ForeignKey select field

Django creates a forms.ModelChoiceField with queryset and to_field_name arguments as the default form field for a ForeignKey. To see for yourself, have a look at django/db/models/fields/related.py, under the ForeignKey class’s formfield method.

Now we know to use the ModelChoiceField. But what about those arguments queryset and to_field_name, and how does the “+” add icon appear?

The queryset argument should be a queryset or manager for the model your ForeignKey is to. For the following example:

from django.db import models

class SpaceMen(models.Model):
    company = models.ForeignKey('SpaceCompany')

To build a forms.Form with a ForeignKey-style field to SpaceCompany, we might use:

queryset = SpaceCompany._default_manager
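A bare-bones sketch of the form might then look like this (the import path for SpaceCompany is hypothetical):

from django import forms
from spaceapp.models import SpaceCompany    # hypothetical app name

class SpaceMenForm(forms.Form):
    # ModelChoiceField renders a select box populated from the queryset,
    # mirroring what Django builds by default for a ForeignKey
    company = forms.ModelChoiceField(queryset=SpaceCompany._default_manager.all())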

I don’t know what to_field_name does, so I’ve left it out of this discussion. Don’t worry though: to_field_name will appear later on.

How Django admin adds that “+” add icon

Django actually replaces the default select widget on the ModelChoiceField with its own django.contrib.admin.widgets.RelatedFieldWidgetWrapper. This swap occurs in django.contrib.admin.options, near the top of the file, in the formfield_for_dbfield method of the BaseModelAdmin class. This code is interesting for two reasons:

  • It checks if the current user has permission to add a new SpaceCompany
  • The RelatedFieldWidgetWrapper requires knowledge of the admin_site

But where can I make this widget swap occur? and how do I get an instance of admin_site?

This will largely depend on how and why you are using the forms.Form in the first place. I was using the forms.Form as one of a FormWizard’s forms, so in render_template I was able to create a hook that checked the form for any ModelChoice fields, did the switch, and checked for permissions. I had a copy of the admin_site on my FormWizard thanks to this article on FormWizard in the Django admin. If you are not using the ModelChoiceField in the admin, then you probably don’t need that “+” add another icon, because it is created using the reverse of admin_site.name amongst other things (more details in django.contrib.admin.widgets, class RelatedFieldWidgetWrapper).

Python Based Link Checker

I recently needed a link checker to create a CSV-formatted list of all links (especially hosted PDFs) on a client site.

There is a tool called webcheck by Arthur de Jong which does a great job of checking all of the links on a website and creating a pretty HTML report.

This got me most of the way there. I could see that the output included a page dedicated to a list of every URL encountered during the search, which looked like what I wanted, but it was formatted as HTML.

I wrote a small file which uses webcheck’s own code to read in its stored .dat file and write all of the links to a CSV file with the format:

path, extension, internal, errors

Where path is the URL, extension is the URL ending (for example .pdf, .html, ...), internal is a boolean (True or False) indicating whether the link is an internal link, and errors is the error for that link (for example 404), if any.
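The webcheck internals aren’t reproduced here, but the CSV-writing half of such a script could look like this sketch (the shape of the link records is a hypothetical stand-in for webcheck’s own objects):

import csv

def write_link_report(links, out_path='links.csv'):
    """links: an iterable of (url, is_internal, error) tuples."""
    with open(out_path, 'wb') as report:        # 'wb' suits the csv module on Python 2
        writer = csv.writer(report)
        writer.writerow(['path', 'extension', 'internal', 'errors'])
        for url, is_internal, error in links:
            filename = url.rsplit('/', 1)[-1]
            extension = '.' + filename.rsplit('.', 1)[-1] if '.' in filename else ''
            writer.writerow([url, extension, is_internal, error or ''])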