Groovy ranges and lists

Some days ago, a friend of mine sent me this post. A few hours after reading it, I found my own “wat!”

Let's talk about Groovy.

Groovy has ranges.

[code language="groovy"]
groovy> def a = [1..5]
groovy> def b = [1,2,3,4,5]
groovy> a == b

Result: false
[/code]

Wat?¡¡¡

[code language="groovy"]
groovy> def a = [1..5]
groovy> def b = [1,2,3,4,5]
groovy> println a.class
groovy> println b.class

class java.util.ArrayList
class java.util.ArrayList
[/code]

Now, a bit of light on the problem:

[code language="groovy"]
groovy> def a = [1..5]
groovy> def b = [1,2,3,4,5]
groovy> println a[0].class
groovy> println b[0].class

class groovy.lang.IntRange
class java.lang.Integer
[/code]

So the thing is that “a” is a list containing a single IntRange. To make the comparison work you have to get rid of the brackets, which is a bit weird to me.

[code language="groovy"]
groovy> def a = 1..5
groovy> def b = [1,2,3,4,5]
groovy> a == b

Result: true
[/code]

I ran into this while trying to use a range in a case clause.
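As a side note, Python 3 has a similar gotcha with its own lazy range type (a hedged comparison of my own, not Groovy behavior):

```python
# Python 3's range is its own lazy sequence type, not a list,
# so comparing it directly to a list is also False:
a = range(1, 6)
b = [1, 2, 3, 4, 5]

print(a == b)        # False: a range never equals a list
print(list(a) == b)  # True once materialized
```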

Grails from a Django developer point of view

A Django developer

I have been writing Django apps for the last six years. It was my first and only development environment and I got really comfortable with it. Writing Django apps is part of my DNA now, I shared some code with some open source projects, etc. Then one day I got a job opportunity as a Grails developer. As I needed to get out of my comfort zone, I decided to accept.

So I started writing Grails apps three weeks ago. Things are pretty different at all levels. In this post I will try to explain the good things and the bad things from my point of view.

Grails is groovy

And Groovy has some very interesting goodies. Probably the thing I like most is the so-called “safe navigation”. With this feature you can write really legible and compact code.

[code language="python"]
foo = None
if bar and bar.baz and bar.baz.qux and bar.baz.qux.nerf:
    foo = bar.baz.qux.nerf
[/code]

becomes

[code language=”groovy”]
foo = bar?.baz?.qux?.nerf
[/code]
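Python has no built-in equivalent, but a small helper can approximate safe navigation (a sketch of mine; the helper name is made up):

```python
def safe_get(obj, *attrs):
    """Walk an attribute chain, returning None as soon as a link is missing."""
    for attr in attrs:
        if obj is None:
            return None
        obj = getattr(obj, attr, None)
    return obj

class Bar(object):
    pass

bar = Bar()
bar.baz = None
print(safe_get(bar, "baz", "qux", "nerf"))  # None, no AttributeError raised
```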

Another super-cool feature is the default constructor. It accepts a hash map to set the values of the instance members after instantiation, which is pretty handy. Let's say you have a class like this

[code language="groovy"]
class Foo {
    String bar
    Integer baz
    Date qux

    Foo() {
        qux = new Date()
    }
}
[/code]

Then you can instantiate that class passing values for any member (including “qux”) and they will be set

[code language="groovy"]
Foo bar = new Foo(bar: "Puturru", qux: new SimpleDateFormat("dd/MM/yyyy").parse("22/02/1981"))
[/code]

A similar behavior can be achieved in Python using default values (beware, though, that the defaults are evaluated once, when the function is defined, not at each call)

[code language="python"]
import datetime

class Foo:

    def __init__(self, bar=None, baz=None, qux=datetime.datetime.now()):
        self.bar = bar
        self.baz = baz
        self.qux = qux

a = Foo(bar="Puturru", qux=datetime.datetime(1981, 2, 22))
[/code]
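A closer analogue to Groovy's map constructor can be sketched with **kwargs (my own illustration; it does no validation of attribute names):

```python
import datetime

class Foo(object):
    def __init__(self, **kwargs):
        self.qux = datetime.datetime.now()  # default, like the Groovy constructor
        for name, value in kwargs.items():
            setattr(self, name, value)      # caller-supplied values win

a = Foo(bar="Puturru", qux=datetime.datetime(1981, 2, 22))
print(a.bar)       # Puturru
print(a.qux.year)  # 1981
```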

There are many other cool features, such as the Elvis operator, closures, and so on.
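For the record, Groovy's Elvis operator (`foo = bar ?: "default"`) has a rough Python counterpart in `or`, with the caveat that any falsy value (0, "", []) triggers the default:

```python
bar = None
foo = bar or "default"
print(foo)  # default

count = 0
total = count or 10  # careful: 0 is falsy, so the default kicks in
print(total)  # 10
```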

Grails is a JVM language

Everybody knows that Java is resource hungry, and Groovy and Grails are no exception. For me the most annoying thing is how time consuming everything is. In the project I'm working on, the full unit test suite takes about an hour to complete, which is absolutely crazy; even worse, running a single unit test that checks that 2 + 2 equals 4 takes 30 seconds! It's really hard to do TDD (or something similar) in this environment.

Another annoying thing is the excess of modularization. In Python more than one class can co-exist in the same file; that's the way people write Python. However, most Grails developers come from the Java world and are used to creating a different file for every class, enum or trait they need, so projects end up with tons of files, even though Groovy allows more than one (public) class per file.

Tests in Django: how to write tests more easily

In the last pybirras-tenerife I gave a brief talk about testing Django apps, and I promised to publish a post in spanish with more in-depth info about that. So if you can’t read spanish I apologize for the inconvenience.

You may want to check out the first article in this series.

The problem

Tests are really hard to write if you write them after the code: once a feature is written, we all tend to keep writing features instead of wasting time testing the ones we already have. To fight this (among many other things), methodologies like TDD (Test Driven Development) emerged, proposing that tests be written before the code. If you want to know everything about TDD I recommend the great book by the great Carlos Ble, and I also recommend that you obey the goat.

Still, TDD has a problem: the tests have to be written by technical people. This is where BDD (Behavior Driven Development) comes in. The idea here is that tests are very close to natural language, so they can be written by the product owners, and then the technical people turn them into automated tests. Let's see how a test is specified.

Tests in Django with behave

[code language="text"]
Feature: Historia de usuarios 1
  Como usuario registrado
  Yo quiero poder listar mis posts
  Para saber que publiqué

  @trivial
  Scenario: Usuario sin posts publicados
    Given Soy un usuario sin posts publicados
    When Visito la página de listar posts
    Then Dice que "No hay posts"
[/code]

As you can see, the language of the test is very close to natural language, and it would be even more so if we wrote it in English.

This example is a valid behave test, provided we implement the necessary steps. It uses a user-story approach: in the first part we specify the story, and in the second we lay out the acceptance criteria. There are other tools, such as RSpec, that use specifications instead of user stories.

Writing the steps couldn't be simpler.

[code language="python"]

from behave import *
from selenium import webdriver
from usuario.models import Usuario

@given("Soy un usuario sin posts publicados")
def step_impl(context):
    context.user = Usuario.objects.create(username="Bar", password="Foo")
    context.browser = webdriver.Chrome()

@when("Visito la página de listar posts")
def step_impl(context):
    context.browser.get('http://localhost:8000/posts/list')

@then('Dice que "{text}"')
def step_impl(context, text):
    assert text in context.browser.page_source
[/code]

This file must live inside the “steps” package of the features folder.

As we can see in the last step, it is possible to supply parameters to steps, which means we can build a library of steps for our Django tests.

In order to run the tests, we need to configure the test runner, and then run the tests as usual.

[python]
TEST_RUNNER = 'django_behave.runner.DjangoBehaveTestSuiteRunner'
[/python]

One interesting thing that sets behave apart from lettuce is the ability to tag each scenario. In our example scenario, @trivial is a tag; we can use whatever tag we want. To run only the tests carrying a given tag we can do

[bash]
$ python manage.py test app1 --behave_tags @trivial
[/bash]

Sometimes it is convenient to run something before or after a step, scenario, feature, tag, or the whole run.

This can be done in the environment.py file, which must live in the features directory. In this file we can implement the methods before_step, before_scenario, before_feature, before_tag and before_all (and the same with after_). Examples of things you can do here include starting the Selenium webdriver, loading the data you need for your tests with factory_boy, and so on.

Another important thing is that we can use coverage together with all of this.

[bash]
$ coverage run --source "." --omit "*/features/*,*/migrations/*" manage.py test app1 app2
$ coverage report -m
Name              Stmts   Miss  Cover   Missing
-----------------------------------------------
my_program           20      4    80%   33-35, 39
my_other_module      56      6    89%   17-23
-----------------------------------------------
TOTAL                76     10    87%
[/bash]

In the next post in this series (which I hope will be the last), we will integrate our project with Jenkins in order to run the tests, extract code coverage statistics and examine the Python files looking for pep8 and pylint violations.

Django Migrations and custom user models (admin.LogEntry.user lookup failed)

As I’m working with the Django 1.7 RC in one of my projects, I sometimes get behaviour that is unexpected (by me). I just ran into one of those cases. I started working with the default user model, but later realized I needed to change it in order to include some fields. I followed these steps, as I just wanted to extend the model.

Lookup failed for model referenced by field admin.LogEntry.user

I created a model extending AbstractUser, added some fields, set the AUTH_USER_MODEL setting accordingly, created a new migration, and then applied the migrations, but I got this error

[code language="bash"]
ValueError: Lookup failed for model referenced by field admin.LogEntry.user: <app.Model>
[/code]

After some googling I found this ticket.

The reason for this is the use of “swappable_dependency”, which just “Turns a setting value into a dependency.” This is used because AUTH_USER_MODEL is a setting that holds the actual model referenced by LogEntry. swappable_dependency hardcodes the __first__ migration, so if you don’t define your custom model in the first migration you will get this error.

I could use “run_before” (which at the time of this writing is undocumented) to bypass this behaviour. I don’t know whether or not it is a good idea to bypass expected, documented behaviour using an undocumented feature (I guess it’s not 😉 ), but as this is not critical for me right now, I’m going to let it go, and maybe redo my migrations when needed before deploying.

[code language="python"]
run_before = [
    ('admin', '__first__'),
]
[/code]

Data Driven Django apps: Managing data in django projects

The problem

Django comes with a fancy tool to manage the data a project needs: fixtures. However, the problem with fixtures is that they are:

  • hard to create (dumpdata, really?)
  • hard to evolve (what if the model gets changed?)
  • hard to modify (beyond trivial modifications)
  • they overwrite whatever is in the DB (this can be really harmful)

Data Driven apps through south data migrations

For all these reasons, fixtures don’t seem to be handy for managing data in data driven apps. So, what can we do then? The answer is a third party app you may know called south. As you may know, south came up with a solution to do schema evolution (aka schema migrations), but it also supports data migrations. Using this tool in your data driven apps has a number of advantages:

  • Your schema can evolve (this is the key feature of south)
  • When you apply a datamigration, only required data is loaded
  • The way you write data is much more similar to regular ORM queries

Let’s see how to create a data migration (schema migrations are out of the scope of this post).

The first thing you need to do is create the skeleton for the data migration.

[code language="bash"]
$ ./manage.py datamigration <your_app> <migration_name>
[/code]

This will create a new data migration. You may notice that the class inherits from DataMigration instead of SchemaMigration. This is important if you ever write a migration by hand, as SchemaMigrations don’t let you add, modify or delete data. It has a forwards and a backwards method as usual, and a dict representing the models.

Your migration belongs to an app. If you want to handle data from that app you can do it simply by prepending “orm.” to the usual Django ORM statement. If you want to handle data from a different app you should prepend “orm['other_app']”. Let’s see it in an example

[code language="python"]
class Migration(DataMigration):

    def forwards(self, orm):
        # get_or_create returns an (object, created) tuple
        data1, _ = orm.<Model1>.objects.get_or_create(
            arg1="arg1",
            arg2=5)
        data2, _ = orm['other_app'].<Model2>.objects.get_or_create(
            foo="bar",
            arg2=data1)
[/code]

If your project uses multiple databases, you may experience some problems running your tests. You have two approaches: the first is to set 'SOUTH_TESTS_MIGRATE' to False in settings, but then your data migrations won’t be applied, so you’ll need to create that data in the DB yourself. The second (and my favorite) is to exclude the extra databases from the testing environment; to achieve this you just need to set 'DATABASE_ROUTERS' to an empty list.

With the new 1.7 release (still a release candidate at the time of this writing), the migration syntax is different, but the underlying idea is the same.

Tests in Django: how to write software that lasts over time

In the last pybirras-tenerife I gave a brief talk about testing Django apps, and I promised to publish a post in spanish with more in-depth info about that. So if you can’t read spanish I apologize for the inconvenience.

That said, let’s get down to business.

Tests in Django

Django has its own integrated test system; basically, it is an extension of unittest. To be fair, you can also write “doctests”, but that is something I have never found useful, so I won’t talk about it. Tests in Django are therefore relatively easy to carry out. Below is a very simple TestCase that I hope serves as an example.

Unittest: the simplest tests

[code lang="python"]
from django.test import TestCase

class SomeTest(TestCase):
    fixtures = ['the_data']

    def setUp(self):
        self.foo = Bar()

    def tearDown(self):
        self.foo.close()

    def test_emptylist_is_false(self):
        self.assertFalse([])
[/code]

This code has to live inside the tests module of some installed application. To run the test, you would execute the following command

[code language="bash"]
$ python manage.py test <app_name>
[/code]

OK, now we have a test, but this doesn’t get us very far. Tests are very useful, but if we have to write them like this, they are also a drag. What are the main problems? In my opinion, the main problem with writing tests this way is the data.

The data

The problem is that fixtures aren’t very manageable: they hold references to concrete keys (although this can be mitigated with natural keys), and if you need a complex dataset they are hard to write by hand.

A better alternative is to use factory_boy. I talked about this in a previous post. Basically, it gives you a factory that generates database objects, assigning default values to the attributes you don’t supply.

[code language="python"]
class RolFactory(factory.django.DjangoModelFactory):
    FACTORY_FOR = Rol

    name = factory.Sequence(lambda n: u"rol%s" % n)
    description = u"A test rol"

class CacheFactory(factory.django.DjangoModelFactory):
    FACTORY_FOR = Cache

    id = factory.Sequence(lambda n: n)
    code = factory.Sequence(int)
    document_number = factory.Sequence(lambda n: u"%s" % n)
    name = u"John"
    surname = u"Doe"

r = RolFactory.create(name="alumno")
CacheFactory.create(rol=r)
[/code]

As we can see in the code above, the data we need for our tests can be created in a much simpler way. It also has the advantage that if the model is modified, we only have to adapt our factory and the rest of the tests will keep working.

Of course, we can use this library in our Django tests without any problem.

The next problem is how to test code whose behavior I don’t know. This is much more common in “brownfield” projects than in “greenfield” ones, but at some point we all have to face it.

Legacy code

Earlier we used a subclass of “TestCase”. There are actually more base classes, but I think the most interesting one is “LiveServerTestCase”. What this class does is spin up a local server during the test phase, so that you can use an external browser instead of the Django test client. The main advantage is that we can use Selenium IDE to create our test and then export it to Python (one of Selenium IDE’s options). This kind of test has a problem: they are what we call fragile tests; as soon as you change the smallest thing, the test will “break”. On the other hand, they are very useful to make sure things work as they did before, and I think the fragility is acceptable in those cases.

[code language="python"]
from django.test import LiveServerTestCase
from selenium import webdriver
#from pyvirtualdisplay import Display

class BuscarTests(LiveServerTestCase):
    fixtures = ['tests', ]

    def setUp(self):
        super(BuscarTests, self).setUp()
        #self.display = Display(size=(1280, 1024))
        #self.display.start()

    def tearDown(self):
        #self.display.stop()
        super(BuscarTests, self).tearDown()

    def test_buscar(self):
        browser = webdriver.Firefox()
        browser.get(self.live_server_url + "/buscar/")
        browser.find_element_by_id("id_query").clear()
        browser.find_element_by_id("id_query").send_keys("Juan")
        browser.find_element_by_css_selector('button[type="submit"]').click()
        element = browser.find_element_by_tag_name('h1')
        assert element.text == 'Juan'
        browser.close()
[/code]

In the code above, it is worth noting that the commented lines would allow the test to run in a “headless” environment, which is interesting if you have a continuous integration system.

On the other hand, sometimes we don’t want to, or cannot, test a component. In one of the applications I develop, I have a method that makes calls to stored procedures in the database. In the development/testing environment I don’t have the system that runs the stored procedures, so I can’t let that code execute. How do I test that?

Doubles

Doubles are objects that impersonate other objects. “Mocks”, “stubs” and “spies” are some of the kinds of doubles. In Python there are numerous libraries for this, but one of them has been included in the standard library since version 3.3 (in previous versions it is available as a third-party library). The following example checks that a function has been called.

[code language="python"]
@patch.object(Persona, "call_stored_proc")
def test_metodo_called_when_obj_created(self, mock_method):
    n = Obj(numero=5,
            string="PAS")
    n.save()
    self.assertTrue(mock_method.called)
[/code]

In this case there is a listener bound to the post_save signal of the “Obj” class that only calls the “call_stored_proc” method under certain circumstances.

Another very common way to use this library is to patch whichever methods we want so that they return a specific output.

[code language="python"]
from mock import Mock

a = Foo()
a.bar = Mock(return_value=5)
[/code]
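Since Python 3.3 the same library ships in the standard library as unittest.mock. A self-contained example of both styles, stubbing a return value and then asserting on the calls (the names `service` and `fetch` are made up for the illustration):

```python
from unittest.mock import Mock

service = Mock()
service.fetch.return_value = 5  # stub a concrete output

result = service.fetch("user-42")
print(result)  # 5

# the double also records how it was used, spy-style
service.fetch.assert_called_once_with("user-42")
print(service.fetch.called)  # True
```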

Of course, testing in Django is a topic with a lot more to cover, but we’ll leave it here for today. In a future post I’ll talk about a wonder called behave, which lets you do BDD in a comfortable and simple way, and also about how to integrate all these things with Jenkins.

Django and Celery: async tasks

The problem

Signals in Django are synchronous, so the user doesn’t get a response until all the signal handlers have run. When you have software where lots of signals can be chained, and every signal handler can perform some database queries, that can lead to slowness, and slowness leads to poor user experience. Most of the time you don’t need to run them synchronously; that’s where using Celery makes sense.

Django and Celery

Celery is a distributed task queue based on distributed message passing. It can use several brokers, such as RabbitMQ and Redis. It lets you run tasks in the background, speeding up the response to the user and thus improving the user experience.

You can install Celery with a simple pip command. Although it isn’t strictly needed, I also recommend installing django-celery. django-celery provides several handy tools, such as a database scheduler and a database result backend.

[code lang="bash"]
$ pip install celery
$ pip install django-celery
[/code]

Remember to include “djcelery” in your INSTALLED_APPS.

The Celery configuration goes in the project.celery module, unless you want to use the --config flag. My celery.py file looks like this

[code lang="python"]
from __future__ import absolute_import
import os
from celery import Celery
from django.conf import settings

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings')
app = Celery('project')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
[/code]

I have some lines in the settings.py file too.

[code lang="python"]
# CELERY STUFF
CELERY_RESULT_BACKEND = 'djcelery.backends.database:DatabaseBackend'
CELERY_ACCEPT_CONTENT = ['json', 'pickle']
CELERY_TASK_SERIALIZER = "json"
CELERY_RESULT_SERIALIZER = "json"
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
[/code]

Note that using pickle as a serializer has some security concerns, so don’t use it unless you really need it.

When using the autodiscover feature, tasks should live in a tasks.py file inside an app. A task may look like this

[code lang="python"]
@shared_task(serializer='json')
def adds(num1=0, num2=0):
    return num1 + num2

adds.delay(2, 3)
[/code]

In order for the task to get executed, you need to run a worker. If you have django-celery installed you can run it through manage.py

[code lang="bash"]
python manage.py celery worker -A project -l info
[/code]

Next steps?

There are a number of things to do:

  • Monkey patching signals
  • Running celery tasks on other hosts (possibly without all the project in it)
  • Benchmark the whole system

Elasticsearch and Django Haystack

I have been developing Django apps for a long time, and almost every project I have written has some search functionality. Most of the time it’s just about searching on a single model field but, whenever more than one field or model is involved, I use the reduce approach to search over the data in the database.

[code lang="python"]
or_queries = [models.Q(**{orm_lookup: bit})
              for orm_lookup in orm_lookups]
qs = qs.filter(reduce(operator.or_, or_queries))
[/code]
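To see what that snippet builds, here is the same reduce(operator.or_, …) pattern on plain Python values instead of Q objects (an illustration of mine, not Django code):

```python
import operator
from functools import reduce

# OR-ing a list of conditions together, as reduce does with the Q objects
conditions = [False, True, False]
print(reduce(operator.or_, conditions))  # True

# with sets, operator.or_ is union, so everything gets merged
groups = [{1, 2}, {2, 3}, {4}]
print(reduce(operator.or_, groups))  # {1, 2, 3, 4}
```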

However, when your database is big enough, this approach can lead to long queries, and long queries lead to a poor user experience. What can you do when this happens? Just use django-haystack and elasticsearch to dramatically improve the user experience when searching.

So let’s begin, first thing is to install some Django packages

Django-haystack

[code lang="bash"]
$ pip install django-haystack
$ pip install elasticsearch
$ pip install pyelasticsearch
[/code]

Add django-haystack to INSTALLED_APPS. The next step is to create a search_indexes.py. Let’s say you have a model called User and you want to search it; your search_indexes.py may look like this one.

[code lang="python"]
from haystack import indexes
from someapp.models import User

class UserIndex(indexes.SearchIndex, indexes.Indexable):
    text = indexes.CharField(document=True, use_template=True)
    username = indexes.CharField(model_attr='username')

    def get_model(self):
        return User
[/code]

Don’t forget to include haystack.urls in your URLconf. Django-haystack’s autodiscovery will do the magic of finding all the indexes. So far we are almost done with django-haystack; you still need to configure the elasticsearch backend.

[code lang="python"]
HAYSTACK_CONNECTIONS = {
    'default': {
        'ENGINE': 'haystack.backends.elasticsearch_backend.ElasticsearchSearchEngine',
        'URL': 'http://127.0.0.1:9200/',
        'INDEX_NAME': 'haystack',
    },
}
[/code]

Let’s provide a convenient template to make the indexing process smooth. You have to place search/indexes/someapp/user_text.txt somewhere your template loaders can find it. Maybe something like {{ object.username }} will work for you.

Elasticsearch

We need to install elasticsearch in order to use it. This is a super simple task on Ubuntu and Debian: just grab the deb from the download site, install it using your favorite package manager, locate the config file, and start it (you may want to run it as a service, but that is out of the scope of this post).

[code lang="bash"]
$ dpkg -i elasticsearch-1.1.1.deb
$ dpkg -L elasticsearch | grep .yml
<edit config file>
$ elasticsearch -f -D es.config=<path to YAML config>
[/code]

Now we must populate the index with some users, using the rebuild_index management command (`./manage.py rebuild_index`).

My project doesn’t have lots of transactions to be indexed, so I’m using

[code lang="python"]
HAYSTACK_SIGNAL_PROCESSOR = 'haystack.signals.RealtimeSignalProcessor'
[/code]

and a weekly full rebuild using the management command through crontab. That may not be suitable for you, as the real time processor runs in-process and your user experience can drop off under load.

Next steps?

Haystack has lots of features I’d like to use, such as autocomplete, queued indexing, “more like this”, and much more. I’d also like to benchmark elasticsearch vs solr vs xapian as a backend (whoosh is terribly slow for my index size).

Tenerife startup weekend

Last weekend I was lucky enough to attend one of those events that change the way you see life. In this post I will try to summarize what attending it meant to me, and I will sketch some of the things I learned.

Those who know me personally know that I’m an entrepreneur by nature: I like setting new goals for myself and I’m prone to accepting big challenges. Attending an event like this was something that had to happen sooner or later. From here I want to thank everyone who made it possible, and I don’t mean just the event organizers, but also my lovely wife, who took care of the kids for a weekend without me. From the bottom of my heart, THANK YOU!

Factory Boy

I develop a Django application that has been running for three years now. I barely have 40% code coverage through tests, which is pretty low. When writing tests, the most annoying thing in my opinion is getting a dataset to work with. Using fixtures is the direct way, but it’s not that easy to write them by hand, and it’s also difficult to keep them in sync with migrations. But we have factory boy to save the day.

I first met factory boy in

As soon as I could, I gave it a try.

I created some Factories

[code language="python"]
class RolFactory(factory.django.DjangoModelFactory):
    FACTORY_FOR = Rol

    name = factory.Sequence(lambda n: u"rol%s" % n)
    description = u"A test rol"

class CacheFactory(factory.django.DjangoModelFactory):
    FACTORY_FOR = Cache

    id = factory.Sequence(lambda n: n)
    code = factory.Sequence(int)
    document_number = factory.Sequence(lambda n: u"%s" % n)
    name = u"John"
    surname = u"Doe"
[/code]

You just need to include the attributes that need default values. In my case I included all the attributes that are not nullable or don’t have a default value. So now I can start using the factories. One interesting thing is that you can pass attributes not defined in the factory class, as long as they exist in the model.
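The Sequence declarations above can be pictured as a shared counter plus a callable; roughly like this (my own simplification, not factory_boy's actual implementation):

```python
import itertools

counter = itertools.count()

def next_name():
    # factory.Sequence(lambda n: u"rol%s" % n), approximately
    return u"rol%s" % next(counter)

print(next_name())  # rol0
print(next_name())  # rol1
```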

So let's use them a bit

[code language="python"]
r = RolFactory.create(name="alumno")
CacheFactory.create(rol=r)
[/code]

It’s that simple. If my models are modified, I may need to modify the factory class, but only in one place.

As factories are classes, you can import and use them wherever needed, so I guess I may write them under every app directory; I’m not sure whether in the tests.py file or in a separate one, but that would be another story.