DanielGetter/gym_fishing

🐍 An OpenAI Gym to benchmark AI Reinforcement Learning algorithms in fisheries-related control problems

Project Status: WIP – Initial development is in progress, but there has not yet been a stable, usable release suitable for the public.

Here is the fishing gym. Work in progress!

Environments

This repository provides OpenAI Gym environment class definitions for the fisheries management problem. See Creating your own Gym Environments for details.

Install the Python gym_fishing module by cloning this repo and running:

python setup.py sdist bdist_wheel
pip install -e .
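
Once installed, the environment can be created through the standard Gym API. A minimal sketch, assuming the package registers an environment under the id fishing-v0 (check the module's registration code for the exact id):

import gym
import gym_fishing  # importing registers the fishing environments with gym

env = gym.make("fishing-v0")  # id assumed; see gym_fishing's registration code
obs = env.reset()
for _ in range(10):
    action = env.action_space.sample()          # random actions, purely for illustration
    obs, reward, done, info = env.step(action)  # classic (pre-0.26) Gym step signature
    if done:
        obs = env.reset()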

So far, we have:

Simple fishing model defined in a continuous state space of fish biomass, with:

  • Discrete action space with three actions: maintain harvest level, increase harvest by 20%, or decrease harvest by 20%
  • Discrete action space with n > 3 actions: the action is taken as a quota, quota = action / n_actions * K (see the sketch after this list)
  • Continuous action space: the action is the quota directly.
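
The quota mapping for the n-action discrete variant is easy to write out. A minimal sketch (the n_actions and carrying capacity K values here are illustrative, not the repository's defaults):

def action_to_quota(action, n_actions=100, K=1.0):
    """Map a discrete action index to a harvest quota: quota = action / n_actions * K."""
    return action / n_actions * K

# e.g. with 100 actions and K = 1.0, action 25 corresponds to a quota of 0.25
assert action_to_quota(25) == 0.25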

Examples

Examples are provided for running this gym environment in several frameworks.

Note that different frameworks have separate and often conflicting dependencies. See the requirements.txt file in each example, or better, the official documentation for each framework. Consider using virtual environments to avoid conflicts.
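
As one concrete illustration, training an agent with stable-baselines3 might look like the sketch below. The environment id fishing-v0 and the training budget are assumptions for illustration; see each framework's own documentation for real usage.

import gym
import gym_fishing  # importing registers the fishing environments (id assumed below)
from stable_baselines3 import PPO

env = gym.make("fishing-v0")
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=100_000)  # illustrative budget, not a tuned value
model.save("ppo_fishing")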

Theory

The optimal dynamic management solution for the stochastic fisheries model is a "constant escapement" policy, as proven by Reed 1979. For small noise, this corresponds to the same 'bang-bang' solution for the deterministic model, proven by Clark 1973. Ignoring discounting, the long-term harvest under the constant escapement solution corresponds to the Maximum Sustainable Yield, or MSY, which is the optimal 'constant mortality' solution (i.e. under the constraint of harvesting a fixed fraction F of the stock each year), as demonstrated independently by Schaefer 1954 and Gordon 1954.
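
Concretely, a constant escapement policy with escapement level $S^*$ harvests the stock down to $S^*$ whenever it exceeds that level, and not at all otherwise:

$h_t = \max(X_t - S^*, 0)$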

The biomass at MSY can be found by maximizing the surplus production $f(X_t) - X_t$ of the growth function $X_{t+1} = f(X_t)$. Discretizing the state space, the dynamically optimal harvest can be found by stochastic dynamic programming.
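
As a worked illustration, for the logistic model $f(X) = X + r X (1 - X/K)$ the surplus production $f(X) - X = r X (1 - X/K)$ is maximized at $X = K/2$, so MSY $= rK/4$. A minimal numerical check (the logistic form and parameter values are assumptions for illustration, not necessarily the repository's defaults):

import numpy as np

r, K = 0.3, 1.0                       # illustrative growth rate and carrying capacity
grid = np.linspace(0.0, K, 1001)      # discretized state space of biomass

def f(x):
    return x + r * x * (1.0 - x / K)  # logistic growth, X_{t+1} = f(X_t)

surplus = f(grid) - grid              # per-step surplus production
print(grid[np.argmax(surplus)])       # biomass at MSY, = K/2 = 0.5
print(surplus.max())                  # MSY harvest,    = r*K/4 = 0.075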

Here, we seek to compare the performance of modern RL methods, which make no a priori assumptions about the stock-recruitment function, against this known optimal solution (given the underlying population dynamics).
