Statistical design of experiments might seem the natural approach here, but casual selection of factors to vary in a DOE study can produce a design space in which it is undesirable or even impossible to operate. With this approach, the desired scale-up similarity is hoped for rather than designed for. Knowledgeable practitioners take a mechanistic approach; in addition to the genuine process understanding this creates, the corresponding experimental program is more focused and compact. When, in addition, hypotheses to explain the data are tested using a mechanistic model alongside the experiments, the experimental program condenses further.
This new way of working, in which trial and error and mystery are replaced by faster, more systematic approaches and deeper process understanding, places new demands on information systems, which will be the subject of other posts in this blog.
Multi-phase reactions make up the majority of reactions, especially of those that are problematic on scale. As an example I will use a slurry-phase hydrogenation, one of the most common steps in API synthesis routes.
As a first step in thinking about this reaction, a process scheme can be used to visualise the roles of chemical kinetic rates, mixing/mass transfer rates and heat transfer rates; in multi-phase systems (here gas-liquid-solid) these interact in ways that determine the overall progress of the hydrogenation. Each rate has a rate constant that is a function of other process variables:
- kinetic rate constant k(T) for each reaction
- mass transfer rate constant kLa(N, V), with fixed vessel geometry
- heat transfer rate constant Ua(N, V), with fixed geometry and jacket conditions
where N is agitator speed and V is the volume of the reaction mixture.
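To make those dependencies concrete, here is a minimal sketch of the three rate constants in Python. The Arrhenius form for k(T) is standard; the power-law form of kLa, the V^(-1/3) scaling of 'a', and every numerical value below are illustrative assumptions, not correlations for any particular vessel.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def k_rxn(T, A=1.0e6, Ea=50e3):
    """Arrhenius kinetic rate constant k(T); A and Ea (J/mol) are
    placeholder values for one reaction step."""
    return A * np.exp(-Ea / (R * T))

def kLa(N, V, c=0.01, a=2.0, b=-0.5):
    """Gas-liquid mass transfer rate constant kLa(N, V) (1/s) for a
    fixed vessel geometry; power-law form and exponents are assumed."""
    return c * N**a * V**b

def Ua(N, V, U0=500.0, a0=3.0, V0=0.1):
    """Heat transfer rate constant Ua (W/(m^3 K)) for fixed geometry and
    jacket conditions. Under geometric similarity the area per volume
    'a' falls roughly as V^(-1/3); the weak dependence of U on N
    (inside film coefficient) is ignored in this sketch."""
    return U0 * a0 * (V0 / V) ** (1.0 / 3.0)
```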
In this case the chemistry is a nitrile reduction (requiring 2 moles of H2 per mole of substrate) to make an amine, with a side-reaction between the substrate and the imine intermediate that gives an undesired impurity. We see reactions of this type routinely; unfavourable conditions lead to excessive impurity levels.
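The process scheme for this chemistry can be written down directly as a small ODE model. In the sketch below the rate constants, H2 solubility and kLa are assumed placeholder values, not fitted ones; the point is the structure, with the two hydrogenation steps and the substrate-imine side-reaction competing for dissolved H2 supplied at rate kLa(H2sat - H2).

```python
# Minimal ODE sketch of the scheme: nitrile S -> imine I -> amine P,
# with side-reaction S + I -> impurity X. All numbers are assumed.
import numpy as np
from scipy.integrate import solve_ivp

k1, k2, k3 = 0.5, 1.0, 0.05   # rate constants, L/(mol s), placeholders
kLa, H2_sat = 0.1, 0.01       # mass transfer (1/s) and H2 solubility (mol/L)

def rates(t, y):
    S, I, P, X, H2 = y
    r1 = k1 * S * H2           # S + H2 -> I
    r2 = k2 * I * H2           # I + H2 -> P   (2 mol H2 per mol S overall)
    r3 = k3 * S * I            # S + I  -> X   (impurity)
    dH2 = kLa * (H2_sat - H2) - r1 - r2   # gas-liquid supply minus demand
    return [-r1 - r3, r1 - r2 - r3, r2, r3, dH2]

sol = solve_ivp(rates, (0.0, 3600.0), [0.5, 0.0, 0.0, 0.0, 0.0],
                t_eval=np.linspace(0.0, 3600.0, 50))
print("final impurity X (mol/L):", sol.y[3, -1])
```

In this sketch, lowering kLa starves the liquid of H2, so the imine persists longer and relatively more X forms; that is the coupling between mass transfer and quality discussed below.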
If the impurity level is our CQA for the design space, we want our CPPs to be defined narrowly enough to achieve the target level or better, but broadly enough to give latitude in how we run the process. Reaction pressure (P) and temperature (T) may, for example, be defined tightly, but for operational flexibility (e.g. limited available vessel sizes) we may prefer not to fix substrate concentration and catalyst loading (a significant cost factor). Note that higher values of any of these variables will make the chemistry faster, and to ensure quality we will need mass transfer (kLa) to keep pace; so the acceptable range for kLa depends on our choice of the other operating conditions, and vice versa.
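A rough screen for "kLa keeping pace" is to compare the maximum H2 supply rate with the initial demand. A minimal sketch, assuming kinetics first order in dissolved H2 and Henry's-law solubility; the function name, units and numbers are all illustrative:

```python
def mass_transfer_margin(k, S0, cat, P, kLa, H_henry=0.0026):
    """Ratio of maximum H2 supply (kLa * H2_sat) to the initial demand
    (k * cat * S0 * H2_sat); well above 1 means mass transfer keeps
    pace. With kinetics first order in dissolved H2 this reduces to
    kLa / (k * cat * S0). Values are placeholders."""
    H2_sat = H_henry * P              # Henry's law estimate, mol/L
    supply = kLa * H2_sat             # mol/(L s)
    demand = k * cat * S0 * H2_sat    # mol/(L s)
    return supply / demand

# Faster chemistry (larger k via T), more substrate or more catalyst all
# shrink the margin, tightening the acceptable range for kLa.
print(mass_transfer_margin(k=0.5, S0=0.5, cat=0.05, P=5.0, kLa=0.1))
```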
Finally, heat transfer needs to keep pace with the reaction rate; the acceptable range for Ua depends on all of the previous variables and on the available temperature difference between the reactor and the jacket. Heat transfer is rarely an issue in small-scale lab reactors, as the ratio of surface area to volume ('a' in Ua) is very high; it becomes an issue in larger vessels. To simulate this behaviour, the delta T (between reactor and jacket) at lab scale can be limited to mimic larger-scale cooling rates.
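The corresponding heat check is a one-line balance: the jacket duty Ua*deltaT must match the rate of heat release. A minimal sketch, with a placeholder heat of reaction and illustrative Ua values for lab and plant:

```python
def required_dT(rate, dH_rxn, Ua):
    """Reactor-to-jacket temperature difference (K) needed to remove the
    reaction heat: rate in mol/(L s), dH_rxn in J/mol released (positive
    for the exotherm), Ua in W/(m^3 K); everything per unit volume."""
    q_rxn = rate * 1000.0 * dH_rxn   # mol/(L s) -> mol/(m^3 s) -> W/m^3
    return q_rxn / Ua

# With these placeholder numbers, the rate that needs ~2 K of driving
# force in the lab needs ~24 K at scale; hence limiting the lab delta T
# to mimic large-scale cooling.
print(required_dT(rate=1e-4, dH_rxn=120e3, Ua=5000.0))  # lab:   2.4 K
print(required_dT(rate=1e-4, dH_rxn=120e3, Ua=500.0))   # plant: 24 K
```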
The above mechanistic considerations provide a sound basis for achieving similarity between scales and definition of the required design space.
Without the process scheme and a clear mechanistic approach to this reaction, likely choices of factors in a statistical experimental design are P, T, substrate concentration, catalyst amount, agitator speed (N) and time. Misleading conclusions about the effect of agitator speed can easily be obtained, especially when it is varied over an insensitive range; agitator speed is also an unsuitable CPP as it does not sufficiently characterise agitation. Time is also irrelevant as a factor in this sense: what matters are the relative rates of kinetics, mass transfer and heat transfer. The dissimilarity of lab and plant cooling may not even be considered. A DOE study will frequently vary more than one parameter at a time between experiments, making individual mechanistic effects harder to distinguish. Finally, as the focus is typically on end-point results, too few samples will ordinarily be taken to allow the reaction to be properly followed.
A mechanistically based experimental program will vary P, T, substrate concentration, catalyst amount, and kLa over expected ranges. In many cases only one variable setting changes between experiments, so that individual effects, e.g. the temperature dependence, can be isolated and understood. This approach implies that the scientist is aware of effects like mass transfer and, furthermore, has characterised their equipment with respect to its mass transfer capability. The jacket temperature will be limited and monitored for heat transfer scale-up purposes. Samples will be taken that allow the reaction progress to be followed. Analysis of the data will also be on a mechanistic basis, and CQAs are likely to be correlated rationally with CPPs. Analysis occurs in parallel with the experimental program, enabling the experiments to be redirected as new information comes to light. The dependence of the required kLa on each of the other conditions is likely to emerge, and that relationship can be captured in the definition of the design space.
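As an illustration of that one-variable-at-a-time pattern, the sketch below builds a plan around a centre point; the factor names and ranges are invented for this example.

```python
# One-factor-at-a-time plan around a centre point; all values illustrative.
center = {"P_bar": 5.0, "T_C": 60.0, "S0_molL": 0.5,
          "cat_frac": 0.05, "kLa_1s": 0.1}
ranges = {"P_bar": (3.0, 8.0), "T_C": (40.0, 80.0), "S0_molL": (0.3, 0.8),
          "cat_frac": (0.02, 0.08), "kLa_1s": (0.05, 0.2)}

plan = [dict(center)]                      # centre-point run first
for factor, (lo, hi) in ranges.items():   # then vary one factor at a time
    for level in (lo, hi):
        run = dict(center)
        run[factor] = level
        plan.append(run)

for i, run in enumerate(plan, 1):
    print(i, run)
```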
A third approach is enabled when the scientific method is applied, i.e. a hypothesis to explain the experimental data is incorporated in a mechanistic model (based on the process scheme) and the model parameters are fitted to the data. This iterative procedure causes users to rethink their model several times, until it fits the data as well as possible. The result is a higher level of understanding, documented in a model alongside the data, and a tool that can be used to reduce the need for experimentation. In fact the model can predict, for a given P, T, substrate loading, catalyst level, kLa and Ua, what the impurity level will be, and can explore any combination of those factors that meets the target CQA. In that sense the design space is whatever the model says meets the CQA, giving wider latitude in selecting operating conditions.
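A minimal sketch of that fitting step, reusing the ODE structure above: synthetic profiles stand in for measured samples, and SciPy's least_squares recovers the assumed rate constants. The function names and all numbers are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def simulate(params, t_eval, y0=(0.5, 0.0, 0.0, 0.0, 0.0),
             kLa=0.1, H2_sat=0.01):
    """Integrate the S/I/P/X/H2 scheme for given rate constants."""
    k1, k2, k3 = params
    def rates(t, y):
        S, I, P, X, H2 = y
        r1, r2, r3 = k1 * S * H2, k2 * I * H2, k3 * S * I
        return [-r1 - r3, r1 - r2 - r3, r2, r3,
                kLa * (H2_sat - H2) - r1 - r2]
    return solve_ivp(rates, (0.0, t_eval[-1]), list(y0), t_eval=t_eval)

# Synthetic "measurements" stand in for lab data in this sketch.
data_t = np.linspace(0.0, 3600.0, 10)
truth = simulate([0.5, 1.0, 0.05], data_t)
data_S, data_X = truth.y[0], truth.y[3]

def residuals(params):
    sol = simulate(params, data_t)
    return np.concatenate([sol.y[0] - data_S,   # substrate profile
                           sol.y[3] - data_X])  # impurity profile

fit = least_squares(residuals, x0=[0.2, 0.5, 0.02], bounds=(0.0, np.inf))
print("fitted k1, k2, k3:", fit.x)
```

With a fitted model of this kind, exploring a new (P, T, loading, kLa, Ua) combination is a simulation rather than an experiment, which is what widens the design space latitude described above.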
Another consequence is that different experiments are now called for: one set to characterise the chemical rate constants k, and another to characterise the targeted scale-up equipment. Lab experiments no longer need to mimic plant-scale operation; they can focus on determining intrinsic chemical kinetics in association with the model. Spiked experiments that deliberately heighten the impurity level can allow its formation kinetics to be better determined. This approach is increasingly applied in pharma API development and has been standard in other parts of the chemical process industries for many years.
All of which leaves the practical question of how to achieve the required equipment rate constants kLa and Ua in a given vessel; otherwise the process could operate outside the design space, producing off-spec material. Success here relies on good equipment characterisation using solvent tests and chemical engineering (especially agitation) calculations, supported by a process-engineering-oriented equipment database in which accumulated equipment performance knowledge is stored and can be reused when required. Here a great advantage of the pharma industry is that the same multi-purpose equipment is reused from product to product, meaning that useful characterisation data is being collected all the time, even during cleanouts.
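As a sketch of what such solvent tests reduce to numerically: a dynamic gassing-out run gives kLa from the slope of the log gas deficit, and a cooling-curve test gives Ua the same way. The function names and synthetic data below are invented for illustration.

```python
import numpy as np

def kLa_from_gassing_out(t, C, C_sat):
    """Dynamic gassing-out: ln((C*-C)/(C*-C0)) = -kLa*t for a dissolved
    gas probe, so the slope on a log scale gives kLa (1/s)."""
    y = np.log((C_sat - C) / (C_sat - C[0]))
    return -np.polyfit(t, y, 1)[0]

def Ua_from_cooling_curve(t, T, T_jacket, rho_cp):
    """Cooling curve for a well-mixed batch: ln((T-Tj)/(T0-Tj)) =
    -(Ua/(rho*cp))*t; rho_cp is the volumetric heat capacity,
    J/(m^3 K), so the result is Ua in W/(m^3 K)."""
    y = np.log((T - T_jacket) / (T[0] - T_jacket))
    return -np.polyfit(t, y, 1)[0] * rho_cp

# Example with synthetic gassing-out data (true kLa = 0.08 1/s assumed):
t = np.linspace(0.0, 60.0, 20)
C = 8.0 * (1.0 - np.exp(-0.08 * t))   # mg/L, rising toward saturation
print(kLa_from_gassing_out(t, C, C_sat=8.0))
```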