Materials Algorithms Project
Program Library



Program MAP_STEEL_IRRADIATED_CHARPY

  1. Provenance of code.
  2. Purpose of code.
  3. Specification.
  4. Description of subroutine's operation.
  5. References.
  6. Parameter descriptions.
  7. Error indicators.
  8. Accuracy estimate.
  9. Any additional information.
  10. Example of code.
  11. Auxiliary subroutines required.
  12. Keywords.
  13. Download source code.
  14. Links.

Provenance of Source Code

R. Kemp
Theory and Modelling,
UKAEA Fusion,
Culham Science Centre,
Abingdon, OX14 3DB, U.K.
richard.kemp@ukaea.org.uk

The neural network program was produced by:

David MacKay,
Cavendish Laboratory,
University of Cambridge,
Madingley Road,
Cambridge, CB3 0HE, U.K.

Added to MAP: August 2007


Purpose

Estimation of the change in ductile-to-brittle transition temperature (DBTT) of irradiated reduced-activation ferritic/martensitic (RAFM) steels, as measured by small-specimen Charpy testing, as a function of the irradiation conditions (e.g. overall damage in dpa and irradiation temperature).


Specification

Language: FORTRAN / C
Product form: Source code / Executable files
Operating System: Solaris 5.5.1; Macintosh OSX; Linux (PC)


Description

MAP_STEEL_IRRADIATED_CHARPY contains a suite of programs which enable the user to estimate the DBTT of a 1/3-scale Charpy specimen of irradiated RAFM steel as a function of the irradiation conditions. It makes use of a neural network program called generate44, which was developed by David MacKay and is part of the bigback5 program. The network was trained on a database of experimental results [1, and references therein].

Fifteen different models are provided, which differ from each other in the number of hidden units and in the value of the seed used when training the network. It has been found that a more accurate result can be obtained by averaging the results from all the models [2]. This suite of programs therefore calculates the result of each model and then combines them, by averaging, to produce a committee result and error estimate, as described by MacKay [page 387 of reference 3].

The source code for the neural network program can be downloaded from David MacKay's website; only the executable files are available from MAP. Also provided are FORTRAN programs (as source code) for normalising the input data, averaging the results from the neural network program and unnormalising the final output file, along with the other files necessary for running the program.
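
As an informal illustration of the committee step (which is carried out in practice by the FORTRAN program gencom.for, reading committee.dat and the files in subdirectory outprdt), the C sketch below shows one way the individual predictions and their error bars could be combined into a committee mean and uncertainty; the function and variable names are invented for the example and do not appear in the distributed code.

    #include <math.h>

    /* Sketch only: combine n committee members, broadly following the
     * averaging described by MacKay [3].  y[i] is the prediction of
     * model i and s[i] its error bar (n = 15 for this model).  The
     * committee prediction is the mean of the y[i]; the combined
     * uncertainty adds the scatter of the members about that mean to
     * their average variance. */
    double committee(const double *y, const double *s, int n, double *err)
    {
        double ybar = 0.0, var = 0.0;
        int i;

        for (i = 0; i < n; i++)
            ybar += y[i];
        ybar /= n;

        for (i = 0; i < n; i++)
            var += s[i] * s[i] + (y[i] - ybar) * (y[i] - ybar);
        var /= n;

        *err = sqrt(var);
        return ybar;
    }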

The programs are distributed as a gzipped tar archive and run on Linux. The archive needs to be unpacked ("tar -zxf charpy_irr.tgz") and the programs run from the command line; a summary of the command sequence is given after the file descriptions below. The programs assume the presence of the f77 and gcc compilers; if you have different compilers, edit the FCOMP and CCOMP variables in the make_model.gen script as necessary. A set of program and data files is provided for the model, which calculates the DBTT (in K). The files are contained in a directory called CHARPY_IRR. This directory contains the following files and subdirectories:

README
A text file containing step-by-step instructions for running the program, including a list of input variables.
MINMAX
A text file containing the minimum and maximum limits of each input and output variable. This file is used to normalise and unnormalise the input and output data.
input_data.dat
An input text file containing the input variables used for predictions.
make_model.gen
This shell script compiles the ancillary programs for the model. It can be executed by typing ./make_model.gen in a terminal. It only needs to be run once, during installation of the model. The script assumes the presence of the f77 and gcc compilers. If you have different compilers, edit the FCOMP and CCOMP variables as necessary before running the script.
model.gen
This is a shell script containing the command steps required to run the model. It can be executed by typing ./model.gen  at the command prompt in a terminal. This shell file runs all the programs necessary for normalising the input data, executing the network for each model, unnormalising the output data and combining the results of each model to produce the final committee result. make_model.gen must have been run prior to attempting to run this script.
spec.t1
A dynamic file, created by spec.ex, which contains information about the module and the number of data items being supplied. It is read by the program generate44.
norm_test.in
This is a text file which contains the normalised input variables. It is generated by the program normtest.for in subdirectory s.
generate44/generate55
This is the executable file for the neural network program. It reads the normalised input data file, norm_test.in, and uses the weight files in subdirectory c. The results are written to the temporary output file _out. If the model fails to run, it is probably because this program has not been compiled correctly for your processor. Download the code from https://wol.ra.phy.cam.ac.uk/mackay/README.html#Source_code and recompile, or contact the author for help.
_ot, _out, _res, _sen
These files are created by generate44 and should be deleted automatically.
final_result
Contains the final unnormalised committee results for the predicted DBTT, along with a calculation of the modelling uncertainty.
SUBDIRECTORY s
spec.c
The source code for program spec.ex.
normtest.for
Program to normalise the data in test.dat and produce the normalised input file norm_test.in. It makes use of information read in from no_of_rows.dat and committee.dat.
gencom.for
This program uses the information in committee.dat and combines the predictions from the individual models, in subdirectory outprdt, to obtain an averaged value (committee prediction). The output (in normalised form) is written to com.dat.
treatout.for
Program to un-normalise the committee results in com.dat and write the output predictions to unnorm_com.
committee.dat
A text file containing the number of models to be used to form the committee result and the number of input variables. It is read by gencom.for, normtest.for and treatout.for.
extradata.for
Program to calculate additional network inputs dependent on the "raw" data, such as Arrhenius functions. Reads from run_data.dat and writes to test.dat.
no_of_lines.c
Program to count the number of lines of input data.
SUBDIRECTORY c
_w*f
The weights files for the different models.
*.lu
Files containing information for calculating the size of the error bars for the different models.
_c*
Files containing information about the perceived significance value [2] for each model.
_R*
Files containing values for the noise, test error and log predictive error [2] for each model.
SUBDIRECTORY d
outran.x
A normalised output file which was created when developing the model. It is accessed by generate44 via spec.t1.
SUBDIRECTORY outprdt
out1, out2 etc.
The normalised output files for each model.
com.dat
The normalised output file containing the committee results. It is generated by gencom.for.


Detailed instructions on the use of the program are given in the README file. Further information about this suite of programs can be obtained from reference 4.
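
In summary, a typical installation and prediction run uses only the commands already quoted above (the archive and directory names are those given in this document; make_model.gen need only be run once, at installation):

    tar -zxf charpy_irr.tgz
    cd CHARPY_IRR
    ./make_model.gen      # once, when installing the model
    ./model.gen           # for each new set of input data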


References

  1. T. Yamamoto, G. R. Odette, H. Kishimoto and J. W. Rensman, Compilation and Preliminary Analysis of an Irradiation Hardening and Embrittlement Database for 8Cr Martensitic Steels, Technical Report DOE/ER-0313/35, ORNL, 2003.
  2. H. K. D. H. Bhadeshia, Neural Networks in Materials Science, ISIJ International 39 (1999) No. 10, 966-979.
  3. D. J. C. MacKay, Mathematical Modelling of Weld Phenomena 3 (1997), eds. H. Cerjak and H. K. D. H. Bhadeshia, Institute of Materials, London, p. 359.
  4. D. J. C. MacKay's website at https://wol.ra.phy.cam.ac.uk/mackay/README.html#Source_code
  5. R. Kemp, Alloy Design for a Fusion Power Plant, PhD thesis, University of Cambridge, 2006.

Parameters

Input parameters

The input variables for the model are listed in the README file in the corresponding directory. The maximum and minimum training values for each variable are given in the file MINMAX.

Output parameters

The program gives the DBTT in K. The corresponding output file is called final_result. The format of the output file is:
Prediction     +/-Uncertainty
   (K)              (K)
Note that this file must be renamed between runs to avoid previous predictions being overwritten.
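
Both the input variables and the network output are scaled using the limits stored in MINMAX: normtest.for normalises the inputs before the networks are run, and treatout.for un-normalises the committee result afterwards. As a minimal sketch, assuming the usual MAP convention of scaling each variable into the range -0.5 to +0.5 (the FORTRAN sources should be consulted for the definitive form), the transformation and its inverse are:

    /* x_min and x_max are the training limits read from MINMAX.  The
     * +/-0.5 range is an assumption based on the usual MAP convention;
     * see normtest.for and treatout.for for the actual implementation. */
    double normalise(double x, double x_min, double x_max)
    {
        return (x - x_min) / (x_max - x_min) - 0.5;
    }

    double unnormalise(double xn, double x_min, double x_max)
    {
        return (xn + 0.5) * (x_max - x_min) + x_min;
    }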


Error Indicators

None.


Accuracy

This model was trained on a limited dataset, and close attention should be paid to the modelling uncertainties, particularly for compositions and irradiation conditions well outside the training dataset. See reference [5] for more details.


Further Comments

A report on the creation of this model and analysis of its behaviour is available [5].


Example

1. Program text

       Complete program.


2. Program data

See sample data file: run_data.dat.

3. Program results

See sample output file: final_result.


Auxiliary Routines

None.


Keywords

neural networks, Charpy embrittlement, embrittlement, irradiation, ferritic steel, martensitic steel, RAFM


Download

Note - these are large files!
Linux PC or Mac OSX:
Download CHARPY_IRR model (tar file, 20 Mb)
Download additional notes

MAP originated from a joint project of the National Physical Laboratory and the University of Cambridge.




