Using simple linear regression models to determine stable network activity
This document is a discussion of the theoretical application of simple linear regression as a means of determining, in real time, when the electrophysiological activity of a network stabilizes.  Objectively identifying when this point is reached is a serious concern because the activity recorded at these points in an experiment provides the data used to develop dose-response curves and other comparative measures.



Introduction
Statistically, a regression model computes a line of best fit and compares its slope to the variation in the data to determine whether that slope differs significantly from zero.  Statistical analysis programs routinely test regression models for significance.  Practically speaking, significance is generally desired when such a model is developed because it implies that there is a real trend across time or a relationship between the variables involved.  However, for the purpose of determining when activity is stable, the slope should not deviate significantly from zero, as a significant slope would imply that the network is continuing to drift toward a higher or lower level of activity.
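
As an illustration, a minimal sketch of such a test in Python with SciPy (not the LabView implementation discussed later; the function name and the per-bin data format are assumptions) might look like this:

    # Test whether the slope of binned activity differs from zero.
    from scipy import stats

    def slope_p_value(bin_counts):
        """Fit a least-squares line to per-bin activity and return (slope, p-value).

        bin_counts is assumed to be a list of spike counts, one per 60-second bin,
        ordered from oldest to newest.  The p-value tests H0: slope == 0.
        """
        minutes = range(len(bin_counts))      # bin index serves as the time axis
        result = stats.linregress(minutes, bin_counts)
        return result.slope, result.pvalue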


Defining the parameters

Ideally this test would be applied to the most recent period of activity.  Thus, a dynamic or “floating” window of time should be designated as the “population” of bins from which the regression line will be calculated.  The boundaries of this window would be defined by x and (x - n), where x is the most recent 60-second bin of activity and n is the duration (in minutes) of the window (and, consequently, the population of the model).  This duration is the first of two main parameters that must be considered, with optimal values determined through laboratory experience.
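
One way this floating window might be maintained in software is with a fixed-length queue, as in the following Python sketch (the names and the value of n are placeholders, not part of the original design):

    # Keep only the most recent n one-minute bins as new bins arrive.
    from collections import deque

    n = 20                                   # window duration in minutes (tuned empirically)
    window = deque(maxlen=n)                 # old bins fall off as new ones are appended

    def on_new_bin(spike_count):
        """Called once per completed 60-second bin of activity."""
        window.append(spike_count)
        return len(window) == window.maxlen  # True once the window is fully populated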

The second parameter is the alpha level, that is, the level of significance required before an experimenter (or robot) is satisfied that the activity of a network is stable.  Traditionally, a level of 0.05 would be required for significance.  This value corresponds roughly to a trend large enough to fall about two standard deviations from the mean of a normally distributed series of points.  By contrast, a smaller alpha level, say 0.01, would demand a stronger trend before the model is declared significant.  Returning to the earlier point, this is the opposite of our intent here, since we wish to know whether any consistent trend exists at all.  A relatively minor “drift” over a few minutes may, for example, skew multiple points in a dose-response experiment: a series of points may be shifted beyond or contrary to the actual effects of a pharmacological manipulation as the effect of the drug application "rides" this drift.
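
Putting the window and the alpha level together, the stability check might look like the following sketch (again Python with SciPy for illustration; the parameter defaults are placeholders to be tuned in the lab):

    # Activity is called "stable" only when the slope of the most recent n
    # one-minute bins does NOT differ significantly from zero at the chosen alpha.
    from scipy import stats

    def is_stable(bin_counts, n=20, alpha=0.05):
        if len(bin_counts) < n:
            return False                          # window not yet full
        window = bin_counts[-n:]                  # the floating window of recent bins
        result = stats.linregress(range(n), window)
        return result.pvalue >= alpha             # no significant drift detected

Note that raising alpha makes this check more sensitive to drift: a larger alpha means even a modest trend counts as significant, so activity is declared stable less readily.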


Where’s the p?

While the calculation of the regression model is relatively easy to accomplish, determination of the exact probability of significance is not.  As with all statistical tests, the critical value upon which comparisons are made between the experimental and theoretically derived models is dependent upon the number of cases (which yields the degrees of freedom) and the desired alpha level.  Out of ignorance, I will assume LabView (the program being used to develop software for this purpose) does not contain such a library of critical values.  If both of these parameters were held static for all experiments, a single critical value (or series of values representing “red,” “yellow,” and “green” levels of significance) might be obtained for comparison and this would not be a problem.  However, there are situations which might require adjustments of n, as is discussed below.  A solution for this problem is not apparent at this time.
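
If LabView does indeed lack such a library, one workaround (sketched here in Python for clarity; the function names and the choice of window lengths are mine, not part of the original design) is to compute the slope's t statistic from first principles and compare it against a small embedded table of critical values, one entry per window length the software is permitted to use.  The tabled entries below are standard two-tailed critical t values for alpha = 0.05:

    import math

    T_CRIT_05 = {18: 2.101, 28: 2.048, 38: 2.024, 58: 2.002}   # keyed by df = n - 2

    def slope_t_statistic(y):
        """t statistic for the slope of y regressed on bin index 0..n-1."""
        n = len(y)
        x = list(range(n))
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        slope = sxy / sxx
        intercept = my - slope * mx
        sse = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
        if sse == 0:
            return float("inf"), n - 2            # perfectly linear data
        se_slope = math.sqrt(sse / (n - 2) / sxx)  # standard error of the slope
        return slope / se_slope, n - 2

    def drift_is_significant(bin_counts):
        t, df = slope_t_statistic(bin_counts)
        return abs(t) > T_CRIT_05[df]              # assumes df is one of the tabled values

With the parameters held static, only a single tabled value is needed; the “red,” “yellow,” and “green” idea would simply add rows to the table for additional alpha levels.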


A dynamic n

Ideally, the model should be calculated from a series of at least 20 bins.  However, network activity has been known at times to be fairly erratic independent of obvious outside influences such as temperature (or prayer and political lobbying).  A calculation based on a short window of time in which activity oscillates about a mean might indicate that activity is not stable when, in the long term, it in fact is.  Conversely, competing trends within a short period might obscure a real long-term trend.  To guard against such cases, a test for excessively high coefficients of variation (CVs) would be desirable so that the duration of the window might be expanded, as sketched below.  Again, these parameters (what is a “high” CV? How much longer should the window be made?) can be determined later.
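
A sketch of such a CV check follows (the threshold and the expanded length are placeholders to be determined empirically, not values from the original discussion):

    # Widen the window when the coefficient of variation of recent bins is high.
    def choose_window_length(bin_counts, n=20, cv_limit=0.5, n_expanded=40):
        """Return n normally, or n_expanded when the CV of the last n bins exceeds cv_limit."""
        recent = bin_counts[-n:]
        mean = sum(recent) / len(recent)
        if mean == 0:
            return n_expanded                          # no activity; fall back to the longer window
        var = sum((v - mean) ** 2 for v in recent) / (len(recent) - 1)
        cv = (var ** 0.5) / mean                       # coefficient of variation
        return n_expanded if cv > cv_limit else n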


Benefits

One of the benefits of implementing this code is that it may yield an additional measure of drug effect: the time to stabilization.  As this characteristic presents itself in the earliest stages of a culture’s exposure to a pharmacological manipulation, this measure could assist in the very rapid identification of a toxin when neuronal networks are applied as broadband biosensors.  This could be especially important in the field, where personnel are susceptible to the effects of such an agent.


Conclusion

There are additional directions these concepts might take, and any reader who understands the above discussion (and maybe a few who don’t) may see further applications of these procedures or, just as important, more efficient means of effecting these calculations.




Copyright Alexplorer.
