For interactive use, formulas can be entered as text strings and passed to the spot.formula constructor. | f = spot.formula('p1 U p2 R (p3 & !p4)')
f
g = spot.formula('{a;b*;c[+]}<>->GFb'); g | tests/python/formulas.ipynb | hich28/mytesttxx | gpl-3.0 |
By default the parser recognizes an infix syntax, but when this fails, it tries to read the formula with the LBT syntax: | h = spot.formula('& | a b c'); h
By default, a formula object is presented using mathjax as above.
When a formula is converted to a string you get Spot's syntax by default: | str(f)
If you prefer to print the string in another syntax, you may use the to_str() method, with an argument that indicates the output format to use. The latex format assumes that you will define macros such as \U and \R to render all operators as you wish. On the other hand, the sclatex (with sc for self-contained) format hard-codes the rendering of each of those operators: this is typically the output that is used to render formulas using MathJax in a notebook. | for i in ['spot', 'spin', 'lbt', 'wring', 'utf8', 'latex', 'sclatex']:
    print("%-10s%s" % (i, f.to_str(i)))
Formulas output via format() can also use some convenient shorthand to select the syntax: | print("""\
Spin: {0:s}
Spin+parentheses: {0:sp}
Spot (default): {0}
Spot+shell quotes: {0:q}
LBT, right aligned: {0:l:~>40}
LBT, no M/W/R: {0:[MWR]l}""".format(f))
The specifiers that can be used with format are documented as follows: | help(spot.formula.__format__)
A spot.formula object has a number of built-in predicates whose values were computed when the formula was constructed. For instance you can check whether a formula is in negative normal form using is_in_nenoform(), and you can make sure it is an LTL formula (i.e., not a PSL formula) using is_ltl_formula(): | f.is_in_nenoform() and f.is_ltl_formula()
g.is_ltl_formula()
Similarly, is_syntactic_stutter_invariant() tells whether the structure of the formula guarantees that it is stutter invariant. For LTL formulas, this means the X operator should not be used. For PSL formulas, this function captures all formulas built using the siPSL grammar. | f.is_syntactic_stutter_invariant()
spot.formula('{a[*];b}<>->c').is_syntactic_stutter_invariant()
spot.formula('{a[+];b[*]}<>->d').is_syntactic_stutter_invariant()
spot.relabel renames the atomic propositions that occur in a formula, using either letters, or numbered propositions: | gf = spot.formula('(GF_foo_) && "a > b" && "proc[2]@init"'); gf
spot.relabel(gf, spot.Abc)
spot.relabel(gf, spot.Pnn)
The AST of any formula can be displayed with show_ast(). Despite the name, this is not a tree but a DAG, because identical subtrees are merged. Binary operators have their left and right operands denoted with L and R, while non-commutative n-ary operators have their operands numbered. | print(g); g.show_ast()
Any formula can also be classified in the temporal hierarchy of Manna & Pnueli | g.show_mp_hierarchy()
spot.mp_class(g, 'v')
f = spot.formula('F(a & X(!a & b))'); f
Etessami's rule for removing X (valid only in stutter-invariant formulas) | spot.remove_x(f)
Removing abbreviated operators | f = spot.formula("G(a xor b) -> F(a <-> b)")
spot.unabbreviate(f, "GF^")
spot.unabbreviate(f, "GF^ei")
What We Expect Our Simulation Data Will Look Like:
The above code should generate a 100x100x100 volume and populate it with various non-intersecting point sets (representing foreground synapses). Once the foreground is generated, random background noise is added to fill the rest of the volume.
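The helper functions used above and in the difficult-simulation code below (generatePointSet, generateTestVolume, and rand) are defined elsewhere in the repository. A purely illustrative sketch of what they plausibly do, where the 3x3x3 point-set shape, the 500-800 cluster count, and the noise intensities are assumptions inferred from the surrounding text:

```python
import random
import numpy as np

def generatePointSet():
    # Hypothetical sketch: one 3x3x3 cube of voxels at a random interior location.
    cz, cy, cx = (random.randint(1, 98) for _ in range(3))
    return {(cz + dz, cy + dy, cx + dx)
            for dz in (-1, 0, 1) for dy in (-1, 0, 1) for dx in (-1, 0, 1)}

def generateTestVolume():
    # Populate a 100x100x100 volume with non-intersecting foreground point sets.
    volume = np.zeros((100, 100, 100))
    myPointSet = set()
    for _ in range(random.randint(500, 800)):
        candidate = generatePointSet()
        # Reject candidates that would overlap existing foreground
        # (adjacency is still allowed, which is why clusters can merge later).
        while myPointSet.intersection(candidate):
            candidate = generatePointSet()
        myPointSet |= candidate
    for z, y, x in myPointSet:
        volume[z, y, x] = 60000
    # Fill the rest of the volume with dimmer background noise (assumed range),
    # giving the bimodal intensity distribution the easy simulation relies on.
    noiseVolume = np.copy(volume)
    background = noiseVolume == 0
    noiseVolume[background] = np.random.randint(1, 30000, size=int(background.sum()))
    return volume, noiseVolume
```

This is only a sketch under the stated assumptions, not the repository's actual implementation.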
Easy Simulation Plots | #displaying the random clusters
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
z, y, x = foreground.nonzero()
ax.scatter(x, y, z, zdir='z', c='r')
plt.title('Random Foreground')
plt.show()
#displaying the noise
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
z, y, x = combinedIm.nonzero()
ax.scatter(x, y, z, zdir='z', c='r')
plt.title('Random Noise + Foreground')
plt.show() | pipeline_1/background/connectLib_revised.md.ipynb | NeuroDataDesign/pan-synapse | apache-2.0 |
Why Our Simulation is Correct: Real microscopic images of synapses usually contain a majority of background noise and relatively few synapse clusters. As shown above, the generated test volume follows this expectation.
Difficult Simulation
We will now simulate data on which our algorithm will not perform well. We will generate a 100x100x100 test volume populated with background and foreground voxels of identical intensity. Since the distribution of voxels is now unimodal (no clear difference between background and foreground), our filtering algorithm should not work well. However, the intensity values will not appear in our matplotlib plots. Therefore, our difficult simulation will appear to be the same as the easy simulation, but should fail after it goes through the connectLib pipeline.
Difficult Simulation Code and Plot | def generateDifficultTestVolume():
#create a test volume
volume = np.zeros((100, 100, 100))
myPointSet = set()
for _ in range(rand(500, 800)):
potentialPointSet = generatePointSet()
#be sure there is no overlap
while len(myPointSet.intersection(potentialPointSet)) > 0:
potentialPointSet = generatePointSet()
for elem in potentialPointSet:
myPointSet.add(elem)
#populate the true volume
for elem in myPointSet:
volume[elem[0], elem[1], elem[2]] = 60000
#introduce noise
noiseVolume = np.copy(volume)
for z in range(noiseVolume.shape[0]):
for y in range(noiseVolume.shape[1]):
for x in range(noiseVolume.shape[2]):
if not (z, y, x) in myPointSet:
noiseVolume[z][y][x] = 60000
return volume, noiseVolume
randImHard = generateDifficultTestVolume()
foregroundHard = randImHard[0]
combinedImHard = randImHard[1]
#displaying the random clusters
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
z, y, x = foregroundHard.nonzero()
ax.scatter(x, y, z, zdir='z', c='r')
plt.title('Random Foreground')
plt.show()
#displaying the noise
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
z, y, x = combinedImHard.nonzero()
ax.scatter(x, y, z, zdir='z', c='r')
plt.title('Random Noise + Foreground')
plt.show()
Simulation Analysis
Pseudocode
Inputs: 3D image array that has been processed through plosLib pipeline, raw image file that hasn't been through plosLib
Outputs: List of synapse clusters | ####Pseudocode: Will not run!####
#Step 1 Otsu's Binarization to threshold out background noise intensity to 0.
for(each 2D image slice in 3D plos_image):
threshold_otsu on slice #uses Otsu's Binarization to threshold background noise to 0.
return thresholded_image
#Step 2 Cluster foreground using connected components
connected_components on thresholded_image #labels and clusters 'connected' regions in foreground
for(each labeled region):
MAKE Cluster object #instance that contains voxel members that made up labeled region
plos_ClusterList.append(Cluster) #list of synapse/foreground clusters
return plos_ClusterList
#Step 3 Use Naive Fencing (IQR Range Rule) to remove large background cluster that formed
IQR = getIQR(plos_ClusterList.getVolumes()) #calculate IQR of Cluster volumes
UpperOutlierFence = 75thpercentile(plos_ClusterList.getVolumes()) + 1.5*IQR #get upper volume threshold (third quartile + 1.5*IQR)
for (Cluster in plos_ClusterList):
if (Cluster.getVolume() > UpperOutlierFence) #if volume is considered an upper outlier, remove it
plos_ClusterList.remove(Cluster)
#Step 4 Coregister Degraded clusters found above with Raw clusters
threshold_otsu on raw_image #Thresholds raw image background
rawClusterList = connected_components on thresholded_raw_image #Clusters raw image
for raw_cluster in rawClusterList:
for plos_cluster in plos_ClusterList:
if plos_cluster in raw_cluster: #if degraded cluster is contained in the raw cluster
actualClusterList.append(raw_cluster) #add raw cluster to actual Cluster list.
return actualClusterList
Algorithm Code | from skimage.filters import threshold_otsu
from skimage.measure import label
from cluster import Cluster
import numpy as np
import cv2
import plosLib as pLib
### Step 1: Threshold the image using Otsu Binarization
def otsuVox(argVox):
probVox = np.nan_to_num(argVox)
bianVox = np.zeros_like(probVox)
for zIndex, curSlice in enumerate(probVox):
#if the array contains all the same values
if np.max(curSlice) == np.min(curSlice):
#otsu thresh will fail here, leave bianVox as all 0's
continue
thresh = threshold_otsu(curSlice)
bianVox[zIndex] = curSlice > thresh
return bianVox
### Step 2: Cluster foreground using Connected Components
def connectedComponents(voxel):
labelMap = label(voxel)
clusterList = []
#plus 1 since max label should be included
for uniqueLabel in range(0, np.max(labelMap)+1):
memberList = [list(elem) for elem in zip(*np.where(labelMap == uniqueLabel))]
if not len(memberList) == 0:
clusterList.append(Cluster(memberList))
return clusterList
### Step 3: Remove outlier clusters using IQR Rule
def thresholdByVolumePercentile(clusterList):
#putting the plosPipeline clusters volumes in a list
plosClusterVolList =[]
for cluster in (range(len(clusterList))):
plosClusterVolList.append(clusterList[cluster].getVolume())
#finding the upper outlier fence
Q3 = np.percentile(plosClusterVolList, 75)
Q1 = np.percentile(plosClusterVolList, 25)
IQR = Q3 - Q1
upperThreshFence = Q3 + 1.5*IQR
#filtering out the background cluster
upperThreshClusterList = []
for cluster in (range(len(clusterList))):
if clusterList[cluster].getVolume() < upperThreshFence:
upperThreshClusterList.append(clusterList[cluster])
return upperThreshClusterList
### Step 4: Coregister clusters with raw data.
def clusterCoregister(plosClusterList, rawClusterList):
#creating a list of all the member indices of the plos cluster list
plosClusterMemberList = []
for cluster in range(len(plosClusterList)):
plosClusterMemberList.extend(plosClusterList[cluster].members)
#creating a list of all the clusters without any decay
finalClusterList =[]
for rawCluster in range(len(rawClusterList)):
for index in range(len(plosClusterMemberList)):
if ((plosClusterMemberList[index] in rawClusterList[rawCluster].members) and (not(rawClusterList[rawCluster] in finalClusterList))):
finalClusterList.append(rawClusterList[rawCluster])
return finalClusterList
########## Complete Pipeline ##########
def completePipeline(image):
#Plos Pipeline Results
plosOut = pLib.pipeline(image)
#Otsu's Binarization Thresholding
bianOut = otsuVox(plosOut)
#Connected Components
connectList = connectedComponents(bianOut)
#Remove outlier clusters
threshClusterList = thresholdByVolumePercentile(connectList)
#finding the clusters without plosPipeline - lists the entire clusters
bianRawOut = otsuVox(image)
clusterRawList = connectedComponents(bianRawOut)
#coregistering with raw data
clusters = clusterCoregister(threshClusterList, clusterRawList)
return clusters
Easy Simulation Analysis
What We Expect
As previously mentioned, we believe the pipeline will work very well on the easy simulation (See Simulation Data: Easy Simulation for explanation).
Generate Easy Simulation Data: See Simulation Data Above.
Pipeline Run on Easy Data | completeClusterMemberList = completePipeline(combinedIm)
Easy Simulation Results | ### Get Cluster Volumes
def getClusterVolumes(clusterList):
completeClusterVolumes = []
for cluster in clusterList:
completeClusterVolumes.append(cluster.getVolume())
return completeClusterVolumes
import mouseVis as mv
#plotting results
completeClusterVolumes = getClusterVolumes(completeClusterMemberList)
mv.generateHist(completeClusterVolumes, title = 'Cluster Volumes for Easy Simulation', bins = 25, xaxis = 'Volumes', yaxis = 'Relative Frequency')
Performance Metrics:
We will judge our algorithm's performance through two metrics: average cluster volume and cluster density per volume. This is based on the two parameters we used to generate the test volume (see Simulation Data: Easy Simulation).
If our algorithm is successful, the average volume of detected synapse clusters should equal the average volume of the foreground clusters that we generated. That is, our pipeline should group synapses into correctly sized clusters (27 voxels each).
Cluster density measures what fraction of a given volume is occupied by detected clusters. This shows how many of the synapse clusters our algorithm was actually able to label. If the algorithm performs correctly, the volumetric density of detected synapse clusters should be around 2% (the density of synapses we generated in the test volume). | #test stats
# get actual cluster volumes from foreground (for 'Expected' values)
def getForegroundClusterVols(foreground):
foregroundClusterList = connectedComponents(foreground)
del foregroundClusterList[0] #background cluster
foregroundClusterVols = []
for cluster in foregroundClusterList:
foregroundClusterVols.append(cluster.getVolume())
return foregroundClusterVols
def getAverageMetric(coClusterVols, foreClusterVols):
#no clusters found
if (len(coClusterVols)==0):
avgClusterVol = 0
else:
#average volume of detected clusters
avgClusterVol = np.mean(coClusterVols)
#average volume of total foreground clusters
avgExpectedVol = np.mean(foreClusterVols)
print 'Average Volume'
print "\tExpected: " + str(avgExpectedVol) + '\tActual: ' + str(avgClusterVol)
return avgExpectedVol, avgClusterVol
def getDensityMetric(coClusterVols, foreClusterVols):
#no clusters found
if (len(coClusterVols)==0):
coClusterVols.append(0)
print 'Cluster Density of Data By Volume'
    print "\tExpected: " + str(np.sum(foreClusterVols)/(100*100*100.0)) + '\tActual: ' + str(np.sum(coClusterVols)/(100*100*100.0))
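As a back-of-the-envelope check of the density metric, using illustrative numbers (650 clusters is just the midpoint of the 500-800 range used in the generator, and 27 voxels per point set comes from the discussion above):

```python
# Hand-check of the expected cluster density with illustrative numbers.
n_clusters = 650           # midpoint of the 500-800 range used to generate the volume
voxels_per_cluster = 27    # each point set is 27 voxels
density = n_clusters * voxels_per_cluster / float(100 * 100 * 100)
print(density)             # about 0.0176, i.e. on the order of 2%
```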
Quantify Performance for Easy Simulation | foregroundClusterVols = getForegroundClusterVols(foreground)
getAverageMetric(completeClusterVolumes, foregroundClusterVols)
getDensityMetric(completeClusterVolumes, foregroundClusterVols)
As shown above, our connectLib pipeline worked extremely well on the easy simulation. The small difference between the actual and expected values comes from the generated synapse point sets. Foreground synapses can potentially be adjacent to each other in the test volume. Connected components will then label multiple touching synapses as one cluster, which explains the cluster volumes at roughly 56 (2 synapses) and 81 (3 synapses) [see histogram in Easy Simulation Results].
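This merging behavior is easy to reproduce in isolation. The sketch below uses scipy.ndimage.label as a stand-in for the skimage.measure.label call used in the pipeline; two face-adjacent 27-voxel cubes come out as a single cluster:

```python
import numpy as np
from scipy import ndimage

vol = np.zeros((10, 10, 10))
vol[2:5, 2:5, 2:5] = 1    # one 27-voxel cube
vol[2:5, 2:5, 5:8] = 1    # a second cube, face-adjacent to the first
labels, n_clusters = ndimage.label(vol)
# The two touching cubes are labeled as a single 54-voxel cluster.
print(n_clusters)
```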
Difficult Simulation Analysis
What We Expect: Since Otsu's Binarization depends on a bimodal distribution of voxel intensities, the background should not get thresholded for the difficult simulation. Furthermore, since all the voxels are identical in terms of intensity, connectedComponents should label the entire volume as just one cluster.
Generate Difficult Simulation Data: See Simulate Data: Difficult Simulation.
Pipeline Run on Difficult Data: | completeClusterMemberListHard = completePipeline(combinedImHard)
print len(completeClusterMemberListHard)
Difficult Simulation Results: | #Plos Pipeline Results
plosOut = pLib.pipeline(combinedImHard)
#Otsu's Binarization Thresholding
bianOut = otsuVox(plosOut)
#Connected Components
connectList = connectedComponents(bianOut)
#get total volume for hard simulation clusters
totalClusterHard = []
for cluster in connectList:
totalClusterHard.append(cluster.getVolume())
#get coregistered (complete) cluster volumes
completeClusterVolumesHard = getClusterVolumes(completeClusterMemberListHard)
print 'Number of Clusters: ' + str(len(totalClusterHard))
print 'Cluster Volume: ' + str(totalClusterHard[0])
print 'Coregistered Clusters: ' + str(len(completeClusterMemberListHard))
Performance Metrics
See Easy Simulation Analysis: Performance Metrics.
Quantify Performance for Difficult Simulation | foregroundClusterVolsHard = getForegroundClusterVols(foregroundHard)
getAverageMetric(completeClusterVolumesHard, foregroundClusterVolsHard)
getDensityMetric(completeClusterVolumesHard, foregroundClusterVolsHard)
As predicted, the foreground and background were combined into one cluster by the connectLib pipeline (see Results). This large cluster does not coregister with any of the original foreground clusters. Clearly, our pipeline performed very poorly on the difficult simulation, as zero clusters were actually detected. This supports our earlier claim that the connectLib pipeline depends on the foreground and background voxels having significantly different intensities.
Verify Simulation Analysis
Repeat Easy and Hard simulation analysis 10 times each. | easySimulationVolumes = []
hardSimulationVolumes = []
for i in range(10):
#Easy Simulation
randIm = generateTestVolume()
foreground = randIm[0]
combinedIm = randIm[1]
completeClusterMemberList = completePipeline(combinedIm)
completeClusterVolumes = getClusterVolumes(completeClusterMemberList)
foregroundClusterVols = getForegroundClusterVols(foreground)
easySimulationVolumes.append(getAverageMetric(completeClusterVolumes, foregroundClusterVols))
getDensityMetric(completeClusterVolumes, foregroundClusterVols)
#Hard Simulation
randImHard = generateDifficultTestVolume()
foregroundHard = randImHard[0]
combinedImHard = randImHard[1]
completeClusterMemberListHard = completePipeline(combinedImHard)
completeClusterVolumesHard = getClusterVolumes(completeClusterMemberListHard)
foregroundClusterVolsHard = getForegroundClusterVols(foregroundHard)
hardSimulationVolumes.append(getAverageMetric(completeClusterVolumesHard, foregroundClusterVolsHard))
getDensityMetric(completeClusterVolumesHard, foregroundClusterVolsHard)
Plotting Expected and Average Cluster Volumes for each easy simulation.
Red = Expected Average Volume
Blue = Observed Average Volume | #separate expected and actual values into separate indices
esv = [list(t) for t in zip(*easySimulationVolumes)]
#outlier
del esv[0][6]
del esv[1][6]
fig = plt.figure()
plt.title('Easy Simulation: Average and Expected Cluster Volumes (10 Trials)')
plt.xlabel('Simulation #')
plt.ylabel('Volume (voxels)')
x = np.arange(9)
plt.scatter(x, esv[0], c='r')
plt.scatter(x, esv[1], c='b')
plt.show()
Plotting Expected and Average Cluster Volumes for each difficult simulation. | hsv = [list(t) for t in zip(*hardSimulationVolumes)]
fig = plt.figure()
plt.title('Difficult Simulation: Average and Expected Cluster Volumes (10 Trials)')
plt.xlabel('Simulation #')
plt.ylabel('Volume (voxels)')
x = np.arange(10)
plt.scatter(x, hsv[0], c='r')
plt.scatter(x, hsv[1], c='b')
plt.show()
Summary of Simulation Analysis
Our difficult and easy simulations demonstrate that our connectLib pipeline depends on how different the background and foreground voxel intensity values are. When the background and foreground are not distinguishable, connectLib cannot threshold and filter out the background, thus creating one large cluster containing every voxel in the volume, and essentially no synapses (clusters) can be detected correctly. On the other hand, if the foreground voxels are clearly distinguishable from the background noise (easy simulation), our connectLib pipeline works extremely well. For the easy simulations, 100% of the background noise was filtered out and almost all of the foreground point sets (representing synapses) were clustered correctly. The only errors were from adjacent 'synapses' that were clustered together.
Real Data
Our sample data come from 5 z-slices of a 3D tiff image. The tiff file is a photon microscope image of a mouse brain. The dimensions of our data are 1024 x 1024 x 5 voxels (x, y, and z axes respectively).
Displaying Real Data | import pickle
realData = pickle.load(open('../data/realDataRaw_t0.synth'))
realDataSection = realData[5: 10]
plosDataSection = pLib.pipeline(realDataSection)
mv.generateHist(plosDataSection, bins = 50, title = "Voxel Intensity Distribution after PLOS", xaxis = 'Relative Voxel Intensity', yaxis = 'Frequency')
Predicting Performance:
Mouse brains have much more activity than can be portrayed in our simulated data. There are different captured cell types and wide variation in background and foreground noise. Our naive fencing method and Otsu's binarization could potentially not be enough to produce clean synapse clusters. Because of this added complexity in mouse brain images, we believe our connectLib pipeline might not work perfectly on the real data. More concerning is that the distribution of voxel intensities is unimodal: the foreground does not appear to be significantly different from the background, so Otsu's binarization might not threshold the background successfully.
connectLib Algorithm Run on Real Data | print 'Running'
realClusterList = completePipeline(plosDataSection)
realClusterVols = getClusterVolumes(realClusterList)
Results | mv.generateHist(realClusterVols, title = 'Cluster Volumes for Real Data', bins = 50, xaxis = 'Volumes', yaxis = 'Relative Frequency')
print realClusterVols
del realClusterVols[0]
mv.generateHist(realClusterVols, title = 'Cluster Volumes for Real Data', axisStart = 0, axisEnd = 200, bins = 25, xaxis = 'Volumes', yaxis = 'Relative Frequency')
Get acq stats data and clean | # Make a map of AGASC_ID to AGACS 1.7 MAG_ACA. The acq_stats.h5 file has whatever MAG_ACA
# was in place at the time of planning the loads.
# Define new term `red_mag_err` which is used here in place of the
# traditional COLOR1 == 1.5 test.
with tables.open_file(str(SKA / 'data' / 'agasc' / 'miniagasc_1p7.h5'), 'r') as h5:
agasc_mag_aca = h5.root.data.col('MAG_ACA')
agasc_id = h5.root.data.col('AGASC_ID')
has_color3 = h5.root.data.col('RSV3') != 0
red_star = np.isclose(h5.root.data.col('COLOR1'), 1.5)
mag_aca_err = h5.root.data.col('MAG_ACA_ERR') / 100
red_mag_err = red_star & ~has_color3 # MAG_ACA, MAG_ACA_ERR is potentially inaccurate
agasc1p7_idx = {id: idx for id, idx in zip(agasc_id, count())}
agasc1p7 = Table([agasc_mag_aca, mag_aca_err, red_mag_err],
names=['mag_aca', 'mag_aca_err', 'red_mag_err'], copy=False)
acq_file = str(SKA / 'data' / 'acq_stats' / 'acq_stats.h5')
with tables.open_file(str(acq_file), 'r') as h5:
cols = h5.root.data.cols
names = {'tstart': 'guide_tstart',
'obsid': 'obsid',
'obc_id': 'acqid',
'halfwidth': 'halfw',
'warm_pix': 'n100_warm_frac',
'mag_aca': 'mag_aca',
'mag_obs': 'mean_trak_mag',
'known_bad': 'known_bad',
'color': 'color1',
'img_func': 'img_func',
'ion_rad': 'ion_rad',
'sat_pix': 'sat_pix',
'agasc_id': 'agasc_id',
't_ccd': 'ccd_temp',
'slot': 'slot'}
acqs = Table([getattr(cols, h5_name)[:] for h5_name in names.values()],
names=list(names.keys()))
year_q0 = 1999.0 + 31. / 365.25 # Jan 31 approximately
acqs['year'] = Time(acqs['tstart'], format='cxcsec').decimalyear.astype('f4')
acqs['quarter'] = (np.trunc((acqs['year'] - year_q0) * 4)).astype('f4')
# Create 'fail' column, rewriting history as if the OBC always
# ignore the MS flag in ID'ing acq stars.
#
# CHECK: is ion_rad being ignored on-board?
# Answer: Not as of 2019-09
#
obc_id = acqs['obc_id']
obc_id_no_ms = (acqs['img_func'] == 'star') & ~acqs['sat_pix'] & ~acqs['ion_rad']
acqs['fail'] = np.where(obc_id | obc_id_no_ms, 0.0, 1.0)
# Re-map acq_stats database magnitudes for AGASC 1.7
acqs['mag_aca'] = [agasc1p7['mag_aca'][agasc1p7_idx[agasc_id]] for agasc_id in acqs['agasc_id']]
acqs['red_mag_err'] = [agasc1p7['red_mag_err'][agasc1p7_idx[agasc_id]] for agasc_id in acqs['agasc_id']]
acqs['mag_aca_err'] = [agasc1p7['mag_aca_err'][agasc1p7_idx[agasc_id]] for agasc_id in acqs['agasc_id']]
# Add a flag to distinguish flight from ASVT data
acqs['asvt'] = False
# Filter for year and mag
#
year_max = Time(f'{MODEL_DATE}-01').decimalyear
year_min = year_max - 4.5
acq_ok = ((acqs['year'] > year_min) & (acqs['year'] < year_max) &
(acqs['mag_aca'] > 7.0) & (acqs['mag_aca'] < 11) &
(~np.isclose(acqs['color'], 0.7)))
# Filter known bad obsids. NOTE: this is no longer doing anything, but
# consider updating the list of known bad obsids or obtaining it programmatically?
print('Filtering known bad obsids, start len = {}'.format(np.count_nonzero(acq_ok)))
bad_obsids = [
# Venus
2411,2414,6395,7306,7307,7308,7309,7311,7312,7313,7314,7315,7317,7318,7406,583,
7310,9741,9742,9743,9744,9745,9746,9747,9749,9752,9753,9748,7316,15292,16499,
16500,16501,16503,16504,16505,16506,16502,
]
for badid in bad_obsids:
acq_ok = acq_ok & (acqs['obsid'] != badid)
print('Filtering known bad obsids, end len = {}'.format(np.count_nonzero(acq_ok))) | fit_acq_model-2019-08-binned-poly-binom-floor.ipynb | sot/aca_stats | bsd-3-clause |
Get ASVT data and make it look more like acq stats data | peas = Table.read('pea_analysis_results_2018_299_CCD_temp_performance.csv', format='ascii.csv')
peas = asvt_utils.flatten_pea_test_data(peas)
peas = peas[peas['ccd_temp'] > -10.5]
# Version of ASVT PEA data that is more flight-like
fpeas = Table([peas['star_mag'], peas['ccd_temp'], peas['search_box_hw']],
names=['mag_aca', 't_ccd', 'halfwidth'])
fpeas['year'] = np.random.uniform(2019.0, 2019.5, size=len(peas))
fpeas['color'] = 1.0
fpeas['quarter'] = (np.trunc((fpeas['year'] - year_q0) * 4)).astype('f4')
fpeas['fail'] = 1.0 - peas['search_success']
fpeas['asvt'] = True
fpeas['red_mag_err'] = False
fpeas['mag_obs'] = 0.0 | fit_acq_model-2019-08-binned-poly-binom-floor.ipynb | sot/aca_stats | bsd-3-clause |
Combine flight acqs and ASVT data | data_all = vstack([acqs[acq_ok]['year', 'fail', 'mag_aca', 't_ccd', 'halfwidth', 'quarter',
'color', 'asvt', 'red_mag_err', 'mag_obs'],
fpeas])
data_all.sort('year')
Compute box probit delta term based on box size | # Adjust probability (in probit space) for box size.
data_all['box_delta'] = get_box_delta(data_all['halfwidth'])
# Put in an ad-hoc penalty on ASVT data that introduces up to a -0.3 shift
# on probit probability. It goes from 0.0 for mag < 10.1 up to 0.3 at mag=10.4.
ok = data_all['asvt']
box_delta_tweak = (data_all['mag_aca'][ok] - 10.1).clip(0, 0.3)
data_all['box_delta'][ok] -= box_delta_tweak
# Another ad-hoc tweak: the mag=8.0 data show more failures at smaller
# box sizes. This confounds the fitting. For this case only just
# set the box deltas to zero and this makes the fit work.
ok = data_all['asvt'] & (data_all['mag_aca'] == 8)
data_all['box_delta'][ok] = 0.0
data_all = data_all.group_by('quarter')
data_all0 = data_all.copy() # For later augmentation with simulated red_mag_err stars
data_mean = data_all.groups.aggregate(np.mean)
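The box_delta term is later added to the model's failure probability in probit space (see p_fail below). A minimal, self-contained illustration of how a probit-space delta shifts an ordinary probability; get_box_delta itself is defined elsewhere, so the delta value here is made up:

```python
from scipy import stats

p = 0.1                # a nominal failure probability
delta = 0.3            # an illustrative probit-space box-size penalty
# Map to probit space, apply the additive delta, and map back.
p_shifted = stats.norm.cdf(stats.norm.ppf(p) + delta)
print(p_shifted)
```

A positive delta increases the failure probability, which is why the ad-hoc ASVT tweak above subtracts its penalty from box_delta.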
Model definition | def t_ccd_normed(t_ccd):
return (t_ccd + 8.0) / 8.0
def p_fail(pars,
t_ccd, tc2=None,
box_delta=0, rescale=True, probit=False):
"""
Acquisition probability model
:param pars: p0, p1, p2 (quadratic in t_ccd) and floor (min p_fail)
:param t_ccd: t_ccd (degC) or scaled t_ccd if rescale is False.
:param tc2: (scaled t_ccd) ** 2, this is just for faster fitting
:param box_delta: delta p_fail for search box size
:param rescale: rescale t_ccd to about -1 to 1 (makes P0, P1, P2 better-behaved)
:param probit: return probability as probit instead of 0 to 1.
"""
p0, p1, p2, floor = pars
tc = t_ccd_normed(t_ccd) if rescale else t_ccd
if tc2 is None:
tc2 = tc ** 2
# Make sure box_delta has right dimensions
tc, box_delta = np.broadcast_arrays(tc, box_delta)
# Compute the model. Also clip at +10 to avoid values that are
# exactly 1.0 at 64-bit precision.
probit_p_fail = (p0 + p1 * tc + p2 * tc2 + box_delta).clip(floor, 10)
# Possibly transform from probit to linear probability
out = probit_p_fail if probit else stats.norm.cdf(probit_p_fail)
return out
def p_acq_fail(data=None):
"""
Sherpa fit function wrapper to ensure proper use of data in fitting.
"""
if data is None:
data = data_all
tc = t_ccd_normed(data['t_ccd'])
tc2 = tc ** 2
box_delta = data['box_delta']
def sherpa_func(pars, x=None):
return p_fail(pars, tc, tc2, box_delta, rescale=False)
return sherpa_func
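As a quick sketch of the model's behavior, the probit expression can be evaluated directly for a few temperatures, using illustrative parameter values close to the fit's starting points defined below:

```python
import numpy as np
from scipy import stats

# Illustrative parameters: probit value at t_ccd = -8, linear slope,
# quadratic term, and the floor on probit p_fail.
p0, p1, p2, floor = -2.605, 2.5, 0.0, -2.6
t_ccd = np.array([-8.0, -4.0, 0.0])
tc = (t_ccd + 8.0) / 8.0                     # same rescaling as t_ccd_normed
probit = np.clip(p0 + p1 * tc + p2 * tc**2, floor, 10)
p = stats.norm.cdf(probit)
print(p)
```

With these values the failure probability rises from well under 1% at -8 C to over 40% at 0 C.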
Model fitting functions | def calc_binom_stat(data, model, staterror=None, syserror=None, weight=None, bkg=None):
"""
Calculate log-likelihood for a binomial probability distribution
for a single trial at each point.
Defining p = model, then probability of seeing data == 1 is p and
probability of seeing data == 0 is (1 - p). Note here that ``data``
is strictly either 0.0 or 1.0, and np.where interprets those float
values as False or True respectively.
"""
fit_stat = -np.sum(np.log(np.where(data, model, 1.0 - model)))
return fit_stat, np.ones(1)
def fit_poly_model(data):
from sherpa import ui
comp_names = ['p0', 'p1', 'p2', 'floor']
data_id = 1
ui.set_method('simplex')
# Set up the custom binomial statistics
ones = np.ones(len(data))
ui.load_user_stat('binom_stat', calc_binom_stat, lambda x: ones)
ui.set_stat(binom_stat)
# Define the user model
ui.load_user_model(p_acq_fail(data), 'model')
ui.add_user_pars('model', comp_names)
ui.set_model(data_id, 'model')
ui.load_arrays(data_id, np.array(data['year']), np.array(data['fail'], dtype=np.float))
# Initial fit values from fit of all data
fmod = ui.get_model_component('model')
# Define initial values / min / max
# This is the p_fail value at t_ccd = -8.0
fmod.p0 = -2.605
fmod.p0.min = -10
fmod.p0.max = 10
# Linear slope of p_fail
fmod.p1 = 2.5
fmod.p1.min = 0.0
fmod.p1.max = 10
# Quadratic term. Only allow negative curvature, and not too much at that.
fmod.p2 = 0.0
fmod.p2.min = -1
fmod.p2.max = 0
# Floor to p_fail.
fmod.floor = -2.6
fmod.floor.min = -2.6
fmod.floor.max = -0.5
ui.fit(data_id)
return ui.get_fit_results() | fit_acq_model-2019-08-binned-poly-binom-floor.ipynb | sot/aca_stats | bsd-3-clause |
Plotting and validation | def plot_fails_mag_aca_vs_t_ccd(mag_bins, data_all, year0=2015.0):
ok = (data_all['year'] > year0) & ~data_all['fail'].astype(bool)
da = data_all[ok]
fuzzx = np.random.uniform(-0.3, 0.3, len(da))
fuzzy = np.random.uniform(-0.125, 0.125, len(da))
plt.plot(da['t_ccd'] + fuzzx, da['mag_aca'] + fuzzy, '.C0', markersize=4)
ok = (data_all['year'] > year0) & data_all['fail'].astype(bool)
da = data_all[ok]
fuzzx = np.random.uniform(-0.3, 0.3, len(da))
fuzzy = np.random.uniform(-0.125, 0.125, len(da))
plt.plot(da['t_ccd'] + fuzzx, da['mag_aca'] + fuzzy, '.C1', markersize=4, alpha=0.8)
# plt.xlim(-18, -10)
# plt.ylim(7.0, 11.1)
x0, x1 = plt.xlim()
for y in mag_bins:
plt.plot([x0, x1], [y, y], '-', color='r', linewidth=2, alpha=0.8)
plt.xlabel('T_ccd (C)')
plt.ylabel('Mag_aca')
plt.title(f'Acq successes (blue) and failures (orange) since {year0}')
plt.grid()
def plot_fit_grouped(data, group_col, group_bin, log=False, colors='br', label=None, probit=False):
group = np.trunc(data[group_col] / group_bin)
data = data.group_by(group)
data_mean = data.groups.aggregate(np.mean)
len_groups = np.diff(data.groups.indices)
data_fail = data_mean['fail']
model_fail = np.array(data_mean['model'])
fail_sigmas = np.sqrt(data_fail * len_groups) / len_groups
# Possibly plot the data and model probabilities in probit space
if probit:
dp = stats.norm.ppf(np.clip(data_fail + fail_sigmas, 1e-6, 1-1e-6))
dm = stats.norm.ppf(np.clip(data_fail - fail_sigmas, 1e-6, 1-1e-6))
data_fail = stats.norm.ppf(data_fail)
model_fail = stats.norm.ppf(model_fail)
fail_sigmas = np.vstack([data_fail - dm, dp - data_fail])
plt.errorbar(data_mean[group_col], data_fail, yerr=fail_sigmas,
fmt='.' + colors[1], label=label, markersize=8)
plt.plot(data_mean[group_col], model_fail, '-' + colors[0])
if log:
ax = plt.gca()
ax.set_yscale('log')
def mag_filter(mag0, mag1):
ok = (data_all['mag_aca'] > mag0) & (data_all['mag_aca'] < mag1)
return ok
def t_ccd_filter(t_ccd0, t_ccd1):
ok = (data_all['t_ccd'] > t_ccd0) & (data_all['t_ccd'] < t_ccd1)
return ok
def wp_filter(wp0, wp1):
ok = (data_all['warm_pix'] > wp0) & (data_all['warm_pix'] < wp1)
return ok | fit_acq_model-2019-08-binned-poly-binom-floor.ipynb | sot/aca_stats | bsd-3-clause |
Define magnitude bins for fitting and show data | mag_centers = np.array([6.3, 8.1, 9.1, 9.55, 9.75, 10.0, 10.25, 10.55, 10.75, 11.0])
mag_bins = (mag_centers[1:] + mag_centers[:-1]) / 2
mag_means = np.array([8.0, 9.0, 9.5, 9.75, 10.0, 10.25, 10.5, 10.75])
for m0, m1, mm in zip(mag_bins[:-1], mag_bins[1:], mag_means):
ok = (data_all['asvt'] == False) & (data_all['mag_aca'] >= m0) & (data_all['mag_aca'] < m1)
print(f"m0={m0:.2f} m1={m1:.2f} mean_mag={data_all['mag_aca'][ok].mean():.2f} vs. {mm}")
plt.figure(figsize=(10, 14))
for subplot, halfwidth in enumerate([60, 80, 100, 120, 140, 160, 180]):
plt.subplot(4, 2, subplot + 1)
ok = (data_all['halfwidth'] > halfwidth - 10) & (data_all['halfwidth'] <= halfwidth + 10)
plot_fails_mag_aca_vs_t_ccd(mag_bins, data_all[ok])
plt.title(f'Acq success (blue) fail (orange) box={halfwidth}')
plt.tight_layout() | fit_acq_model-2019-08-binned-poly-binom-floor.ipynb | sot/aca_stats | bsd-3-clause |
Color != 1.5 fit (this is MOST acq stars) | # fit = fit_sota_model(data_all['color'] == 1.5, ms_disabled=True)
mask_no_1p5 = ((data_all['red_mag_err'] == False) &
(data_all['t_ccd'] > -18) &
(data_all['t_ccd'] < -0.5))
mag0s, mag1s = mag_bins[:-1], mag_bins[1:]
fits = {}
masks = []
for m0, m1 in zip(mag0s, mag1s):
print(m0, m1)
mask = mask_no_1p5 & mag_filter(m0, m1) # & t_ccd_filter(-10.5, 0)
print(np.count_nonzero(mask))
masks.append(mask)
fits[m0, m1] = fit_poly_model(data_all[mask])
colors = [f'kC{i}' for i in range(9)]
plt.figure(figsize=(13, 4))
for subplot in (1, 2):
plt.subplot(1, 2, subplot)
probit = (subplot == 2)
for m0_m1, color, mask, mag_mean in zip(list(fits), colors, masks, mag_means):
fit = fits[m0_m1]
data = data_all[mask]
data['model'] = p_acq_fail(data)(fit.parvals)
plot_fit_grouped(data, 't_ccd', 2.0,
probit=probit, colors=[color, color], label=str(mag_mean))
plt.grid()
if probit:
plt.ylim(-3.5, 2.5)
plt.ylabel('Probit(p_fail)' if probit else 'p_fail')
plt.xlabel('T_ccd');
plt.legend(fontsize='small')
# This computes probabilities for 120 arcsec boxes, corresponding to raw data
t_ccds = np.linspace(-16, -0, 20)
plt.figure(figsize=(13, 4))
for subplot in (1, 2):
plt.subplot(1, 2, subplot)
probit = (subplot == 2)
for m0_m1, color, mag_mean in zip(list(fits), colors, mag_means):
fit = fits[m0_m1]
probs = p_fail(fit.parvals, t_ccds)
if probit:
probs = stats.norm.ppf(probs)
plt.plot(t_ccds, probs, label=f'{mag_mean:.2f}')
plt.legend()
plt.xlabel('T_ccd')
plt.ylabel('P_fail' if subplot == 1 else 'Probit(p_fail)')
plt.title('P_fail for halfwidth=120')
plt.grid()
mag_bin_centers = np.concatenate([[5.0], mag_means, [13.0]])
fit_parvals = []
for fit in fits.values():
fit_parvals.extend(fit.parvals)
fit_parvals = np.array(fit_parvals).reshape(-1, 4)
parvals_mag12 = [[5, 0, 0, 0]]
parvals_mag5 = [[-5, 0, 0, -3]]
fit_parvals = np.concatenate([parvals_mag5, fit_parvals, parvals_mag12])
fit_parvals = fit_parvals.transpose()
for ps, parname in zip(fit_parvals, fit.parnames):
plt.plot(mag_bin_centers, ps, '.-', label=parname)
plt.legend(fontsize='small')
plt.title('Model coefficients vs. mag')
plt.xlabel('Mag_aca')
plt.grid() | fit_acq_model-2019-08-binned-poly-binom-floor.ipynb | sot/aca_stats | bsd-3-clause |
Define model for color=1.5 stars
Post AGASC 1.7, there is inadequate data to independently perform the binned
fitting.
Instead, assume a magnitude error distribution informed by examining the
observed distribution of dmag = mag_obs - mag_aca (observed - catalog). This
turns out to be well-represented by an exp(-abs(dmag) / dmag_scale)
distribution, in contrast to a gaussian, whose tails fall off as exp(-dmag^2).
Use this assumed mag error distribution to sample the color != 1.5 star
probabilities accordingly, and compute the weighted mean failure probability.
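The weighted-mean idea can be sketched as follows. The failure-probability curve and the dmag_scale value here are illustrative stand-ins, not the fitted model:

```python
import numpy as np

# Hypothetical p_fail vs. magnitude (stands in for the fitted color != 1.5 model)
def toy_p_fail(mag):
    return 1.0 / (1.0 + np.exp(-(mag - 10.0)))

# Sample mag errors from the assumed exp(-|dmag| / dmag_scale) distribution
dmags = np.linspace(-2.8, 4.0, 18)
dmag_scale = 1.0  # made-up scale for illustration
weights = np.exp(-np.abs(dmags) / dmag_scale)
weights /= weights.sum()

# Weighted mean failure probability for a color = 1.5 star at mag_aca = 10
mag_aca = 10.0
p_fail_1p5 = np.sum(weights * toy_p_fail(mag_aca + dmags))
```

The asymmetric positive tail of the error distribution pulls the effective magnitude faintward, so `p_fail_1p5` comes out above the value at the catalog magnitude.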
Examine distribution of mag error for color=1.5 stars | def plot_mag_errs(acqs, red_mag_err):
ok = ((acqs['red_mag_err'] == red_mag_err) &
(acqs['mag_obs'] > 0) &
(acqs['img_func'] == 'star'))
dok = acqs[ok]
dmag = dok['mag_obs'] - dok['mag_aca']
plt.figure(figsize=(14, 4.5))
plt.subplot(1, 3, 1)
plt.plot(dok['mag_aca'], dmag, '.')
plt.plot(dok['mag_aca'], dmag, ',', alpha=0.3)
plt.xlabel('mag_aca (catalog)')
plt.ylabel('Mag err')
plt.title('Mag err (observed - catalog) vs mag_aca')
plt.xlim(5, 11.5)
plt.ylim(-4, 2)
plt.grid()
plt.subplot(1, 3, 2)
plt.hist(dmag, bins=np.arange(-3, 4, 0.2), log=True);
plt.grid()
plt.xlabel('Mag err')
plt.title('Mag err (observed - catalog)')
plt.xlim(-4, 2)
plt.subplot(1, 3, 3)
plt.hist(dmag, bins=100, cumulative=-1, density=True)
plt.xlim(-1, 1)
plt.xlabel('Mag err')
plt.title('Mag err (observed - catalog)')
plt.grid()
plot_mag_errs(acqs, red_mag_err=True)
plt.subplot(1, 3, 2)
plt.plot([-2.8, 0], [1, 7000], 'r');
plt.plot([0, 4.0], [7000, 1], 'r');
plt.xlim(-4, 4); | fit_acq_model-2019-08-binned-poly-binom-floor.ipynb | sot/aca_stats | bsd-3-clause |
Define an analytical approximation for distribution with ad-hoc positive tail | # Define parameters / metadata for floor model
FLOOR = {'fit_parvals': fit_parvals,
'mag_bin_centers': mag_bin_centers}
def calc_1p5_mag_err_weights():
x = np.linspace(-2.8, 4, 18)
ly = 3.8 * (1 - np.abs(x) / np.where(x > 0, 4.0, 2.8))
y = 10 ** ly
return x, y / y.sum()
FLOOR['mag_errs_1p5'], FLOOR['mag_err_weights_1p5'] = calc_1p5_mag_err_weights()
plt.semilogy(FLOOR['mag_errs_1p5'], FLOOR['mag_err_weights_1p5'])
plt.grid() | fit_acq_model-2019-08-binned-poly-binom-floor.ipynb | sot/aca_stats | bsd-3-clause |
Global model for arbitrary mag, t_ccd, color, and halfwidth | def floor_model_acq_prob(mag, t_ccd, color=0.6, halfwidth=120, probit=False):
"""
Acquisition probability model
:param mag: Star magnitude(s)
:param t_ccd: CCD temperature(s)
:param color: Star color (compared to 1.5 to decide which p_fail model to use)
:param halfwidth: Search box size (arcsec)
:param probit: Return probit of failure probability
:returns: acquisition failure probability
"""
parvals = FLOOR['fit_parvals']
mag_bin_centers = FLOOR['mag_bin_centers']
mag_errs_1p5 = FLOOR['mag_errs_1p5']
mag_err_weights_1p5 = FLOOR['mag_err_weights_1p5']
# Make sure inputs have right dimensions
is_scalar, t_ccds, mags, halfwidths, colors = broadcast_arrays(t_ccd, mag, halfwidth, color)
box_deltas = get_box_delta(halfwidths)
p_fails = []
for t_ccd, mag, box_delta, color in zip(t_ccds.flat, mags.flat, box_deltas.flat, colors.flat):
if np.isclose(color, 1.5):
pars_list = [[np.interp(mag + mag_err_1p5, mag_bin_centers, ps) for ps in parvals]
for mag_err_1p5 in mag_errs_1p5]
weights = mag_err_weights_1p5
if probit:
raise ValueError('cannot use probit=True with color=1.5 stars')
else:
pars_list = [[np.interp(mag, mag_bin_centers, ps) for ps in parvals]]
weights = [1]
pf = sum(weight * p_fail(pars, t_ccd, box_delta=box_delta, probit=probit)
for pars, weight in zip(pars_list, weights))
p_fails.append(pf)
out = np.array(p_fails).reshape(t_ccds.shape)
return out
mags, t_ccds = np.mgrid[8.75:10.75:30j, -16:-4:30j]
plt.figure(figsize=(13, 4))
for subplot, color in enumerate([1.0, 1.5]):
plt.subplot(1, 2, subplot + 1)
p_fails = floor_model_acq_prob(mags, t_ccds, probit=False, color=color)
cs = plt.contour(t_ccds, mags, p_fails, levels=[0.05, 0.1, 0.2, 0.5, 0.75, 0.9],
colors=['g', 'g', 'b', 'c', 'm', 'r'])
plt.clabel(cs, inline=1, fontsize=10)
plt.grid()
plt.xlim(-17, -4)
plt.ylim(8.5, 11.0)
plt.xlabel('T_ccd (degC)')
plt.ylabel('Mag_ACA')
plt.title(f'Failure probability color={color}');
mags = np.linspace(8, 11, 301)
plt.figure()
for t_ccd in np.arange(-16, -0.9, 1):
p_fails = floor_model_acq_prob(mags, t_ccd, probit=True)
plt.plot(mags, p_fails)
plt.grid()
plt.xlim(8, 11) | fit_acq_model-2019-08-binned-poly-binom-floor.ipynb | sot/aca_stats | bsd-3-clause |
Compare to flight data for halfwidth=120
Selecting only data with halfwidth=120 is a clean, model-independent way to
compare the model to raw flight statistics.
Setup functions to get appropriate data | # NOTE this is in chandra_aca.star_probs as of version 4.27
from scipy.stats import binom
def binom_ppf(k, n, conf, n_sample=1000):
"""
Compute percent point function (inverse of CDF) for binomial, where
the percentage is with respect to the "p" (binomial probability) parameter
not the "k" parameter.
The following example returns the 1-sigma (0.17 - 0.84) confidence interval
on the true binomial probability for an experiment with 4 successes in 5 trials.
Example::
>>> binom_ppf(4, 5, [0.17, 0.84])
array([ 0.55463945, 0.87748177])
:param k: int, number of successes (0 < k <= n)
:param n: int, number of trials
:param conf: float, array of floats, percent point values
:param n_sample: number of PMF samples for interpolation
:return: percent point function values corresponding to ``conf``
"""
ps = np.linspace(0, 1, n_sample)
vals = binom.pmf(k=k, n=n, p=ps)
return np.interp(conf, xp=np.cumsum(vals) / np.sum(vals), fp=ps)
binom_ppf(4, 5, [0.17, 0.84])
n = 156
k = 127
binom_ppf(k, n, [0.17, 0.84])
def calc_binned_pfail(data_all, mag, dmag, t_ccd, dt, halfwidth=120):
da = data_all[~data_all['asvt'] & (data_all['halfwidth'] == halfwidth)]
fail = da['fail'].astype(bool)
ok = (np.abs(da['mag_aca'] - mag) < dmag) & (np.abs(da['t_ccd'] - t_ccd) < dt)
n_fail = np.count_nonzero(fail[ok])
n_acq = np.count_nonzero(ok)
p_fail = n_fail / n_acq
p_fail_lower, p_fail_upper = binom_ppf(n_fail, n_acq, [0.17, 0.84])
mean_t_ccd = np.mean(da['t_ccd'][ok])
mean_mag = np.mean(da['mag_aca'][ok])
return p_fail, p_fail_lower, p_fail_upper, mean_t_ccd, mean_mag, n_fail, n_acq
halfwidth = 120
pfs_list = []
for mag in (10.0, 10.3, 10.55):
pfs = []
for t_ccd in np.linspace(-15, -10, 6):
pf = calc_binned_pfail(data_all, mag, 0.2, t_ccd, 0.5, halfwidth=halfwidth)
pfs.append(pf)
print(f'mag={mag} mean_mag_aca={pf[4]:.2f} t_ccd={pf[3]:.2f} p_fail={pf[-2]}/{pf[-1]}={pf[0]:.2f}')
pfs_list.append(pfs) | fit_acq_model-2019-08-binned-poly-binom-floor.ipynb | sot/aca_stats | bsd-3-clause |
Compare model to flight for color != 1.5 stars | def plot_floor_and_flight(color, halfwidth=120):
# This computes probabilities for 120 arcsec boxes, corresponding to raw data
t_ccds = np.linspace(-16, -6, 20)
mag_acas = np.array([9.5, 10.0, 10.25, 10.5, 10.75])
for ii, mag_aca in enumerate(reversed(mag_acas)):
flight_probs = 1 - acq_success_prob(date='2018-05-01T00:00:00',
t_ccd=t_ccds, mag=mag_aca, color=color, halfwidth=halfwidth)
new_probs = floor_model_acq_prob(mag_aca, t_ccds, color=color, halfwidth=halfwidth)
plt.plot(t_ccds, flight_probs, '--', color=f'C{ii}')
plt.plot(t_ccds, new_probs, '-', color=f'C{ii}', label=f'mag_aca={mag_aca}')
if color != 1.5:
# pf1, pf2 have p_fail, p_fail_lower, p_fail_upper, mean_t_ccd, mean_mag_aca, n_fail, n_acq
for pfs, clr in zip(pfs_list, ('C3', 'C2', 'C1')):
for pf in pfs:
yerr = np.array([pf[0] - pf[1], pf[2] - pf[0]]).reshape(2, 1)
plt.errorbar(pf[3], pf[0], xerr=0.5, yerr=yerr, color=clr)
# plt.xlim(-16, None)
plt.legend()
plt.xlabel('T_ccd')
plt.ylabel('P_fail')
plt.title(f'P_fail (color={color}): new (solid) and flight (dashed)')
plt.grid()
plot_floor_and_flight(color=1.0) | fit_acq_model-2019-08-binned-poly-binom-floor.ipynb | sot/aca_stats | bsd-3-clause |
Compare model to flight for color = 1.5 stars | plt.figure(figsize=(13, 4))
plt.subplot(1, 2, 1)
for m0, m1, color in [(9, 9.5, 'C0'), (9.5, 10, 'C1'), (10, 10.3, 'C2'), (10.3, 10.7, 'C3')]:
ok = data_all['red_mag_err'] & mag_filter(m0, m1) & t_ccd_filter(-16, -10)
data = data_all[ok]
data['model'] = floor_model_acq_prob(data['mag_aca'], data['t_ccd'], color=1.5, halfwidth=data['halfwidth'])
plot_fit_grouped(data, 't_ccd', 2.0,
probit=False, colors=[color, color], label=f'{m0}-{m1}')
plt.ylim(0, 1.0)
plt.legend(fontsize='small')
plt.grid()
plt.xlabel('T_ccd')
plt.title('COLOR1=1.5 acquisition probabilities')
plt.subplot(1, 2, 2)
plot_floor_and_flight(color=1.5) | fit_acq_model-2019-08-binned-poly-binom-floor.ipynb | sot/aca_stats | bsd-3-clause |
Write model as a 3-d grid to a gzipped FITS file | def write_model_as_fits(model_name,
comment=None,
mag0=5, mag1=12, n_mag=141, # 0.05 mag spacing
t_ccd0=-16, t_ccd1=-1, n_t_ccd=31, # 0.5 degC spacing
halfw0=60, halfw1=180, n_halfw=7, # 20 arcsec spacing
):
from astropy.io import fits
mags = np.linspace(mag0, mag1, n_mag)
t_ccds = np.linspace(t_ccd0, t_ccd1, n_t_ccd)
halfws = np.linspace(halfw0, halfw1, n_halfw)
mag, t_ccd, halfw = np.meshgrid(mags, t_ccds, halfws, indexing='ij')
print('Computing probs, stand by...')
# COLOR = 1.5 (stars with poor mag estimates)
p_fails = floor_model_acq_prob(mag, t_ccd, halfwidth=halfw, probit=False, color=1.5)
p_fails_probit_1p5 = stats.norm.ppf(p_fails)
# COLOR not 1.5 (most stars)
p_fails_probit = floor_model_acq_prob(mag, t_ccd, halfwidth=halfw, probit=True, color=1.0)
hdu = fits.PrimaryHDU()
if comment:
hdu.header['comment'] = comment
hdu.header['date'] = DateTime().fits
hdu.header['mdl_name'] = model_name
hdu.header['mag_lo'] = mags[0]
hdu.header['mag_hi'] = mags[-1]
hdu.header['mag_n'] = len(mags)
hdu.header['t_ccd_lo'] = t_ccds[0]
hdu.header['t_ccd_hi'] = t_ccds[-1]
hdu.header['t_ccd_n'] = len(t_ccds)
hdu.header['halfw_lo'] = halfws[0]
hdu.header['halfw_hi'] = halfws[-1]
hdu.header['halfw_n'] = len(halfws)
hdu1 = fits.ImageHDU(p_fails_probit.astype(np.float32))
hdu1.header['comment'] = 'COLOR1 != 1.5 (good mag estimates)'
hdu2 = fits.ImageHDU(p_fails_probit_1p5.astype(np.float32))
hdu2.header['comment'] = 'COLOR1 == 1.5 (poor mag estimates)'
hdus = fits.HDUList([hdu, hdu1, hdu2])
hdus.writeto(f'{model_name}.fits.gz', overwrite=True)
comment = f'Created with fit_acq_model-{MODEL_DATE}-binned-poly-binom-floor.ipynb in aca_stats repository'
write_model_as_fits(MODEL_NAME, comment=comment)
# Fudge the chandra_aca.star_probs global STAR_PROBS_DATA_DIR temporarily
# in order to load the dev model that was just created locally
_dir_orig = star_probs.STAR_PROBS_DATA_DIR
star_probs.STAR_PROBS_DATA_DIR = '.'
grid_model_acq_prob(model=MODEL_NAME)
star_probs.STAR_PROBS_DATA_DIR = _dir_orig
# Remake standard plot comparing grouped data to model, but now use
# chandra_aca.star_probs grid_model_acq_prob function with the newly
# generated 3-d FITS model that we just loaded.
colors = [f'kC{i}' for i in range(9)]
plt.figure(figsize=(13, 4))
for subplot in (1, 2):
plt.subplot(1, 2, subplot)
probit = (subplot == 2)
for m0_m1, color, mask, mag_mean in zip(list(fits), colors, masks, mag_means):
fit = fits[m0_m1]
data = data_all[mask]
data['model'] = 1 - grid_model_acq_prob(data['mag_aca'], data['t_ccd'],
halfwidth=data['halfwidth'],
model=MODEL_NAME)
plot_fit_grouped(data, 't_ccd', 2.0,
probit=probit, colors=[color, color], label=str(mag_mean))
plt.grid()
if probit:
plt.ylim(-3.5, 2.5)
plt.ylabel('Probit(p_fail)' if probit else 'p_fail')
plt.xlabel('T_ccd');
plt.legend(fontsize='small')
# Check chandra_aca implementation vs. native model from this notebook
mags = np.linspace(5, 12, 40)
t_ccds = np.linspace(-16, -1, 40)
halfws = np.linspace(60, 180, 7)
mag, t_ccd, halfw = np.meshgrid(mags, t_ccds, halfws, indexing='ij')
# First color != 1.5
# Notebook
nb_probs = floor_model_acq_prob(mag, t_ccd, halfwidth=halfw, probit=True, color=1.0)
# Chandra_aca. Note that grid_model returns p_success, so need to negate it.
ca_probs = -grid_model_acq_prob(mag, t_ccd, halfwidth=halfw, probit=True, color=1.0,
model=MODEL_NAME)
assert nb_probs.shape == ca_probs.shape
print('Max difference is {:.3f}'.format(np.max(np.abs(nb_probs - ca_probs))))
assert np.allclose(nb_probs, ca_probs, rtol=0, atol=0.1)
d_probs = (nb_probs - ca_probs)[:, :, 3]
plt.imshow(d_probs, origin='lower', extent=[-16, -1, 5, 12], aspect='auto', cmap='jet')
plt.colorbar();
plt.title('Delta between probit p_fail: analytical vs. gridded');
mags = np.linspace(8, 11, 200)
plt.figure()
for ii, t_ccd in enumerate(np.arange(-16, -0.9, 2)):
p_fails = floor_model_acq_prob(mags, t_ccd, probit=True)
plt.plot(mags, p_fails, color=f'C{ii}')
p_success = grid_model_acq_prob(mags, t_ccd, probit=True, model=MODEL_NAME)
plt.plot(mags, -p_success, color=f'C{ii}')
plt.grid()
plt.xlim(8, 11) | fit_acq_model-2019-08-binned-poly-binom-floor.ipynb | sot/aca_stats | bsd-3-clause |
Generate regression data for chandra_aca
The real testing was done above with a copy of the functions from chandra_aca;
here we generate some regression test data as a smoke test that things are
working on all platforms. | mags = [9, 9.5, 10.5]
t_ccds = [-10, -5]
halfws = [60, 120, 160]
mag, t_ccd, halfw = np.meshgrid(mags, t_ccds, halfws, indexing='ij')
probs = floor_model_acq_prob(mag, t_ccd, halfwidth=halfw, probit=True, color=1.0)
print(repr(probs.round(3).flatten()))
probs = floor_model_acq_prob(mag, t_ccd, halfwidth=halfw, probit=False, color=1.5)
probs = stats.norm.ppf(probs)
print(repr(probs.round(3).flatten())) | fit_acq_model-2019-08-binned-poly-binom-floor.ipynb | sot/aca_stats | bsd-3-clause |
Scatter Chart
Scatter Chart Selections
Click a point on the Scatter plot to select it. Now, run the cell below to check the selection. After you've done this, try holding the ctrl key (or command key on Mac) and clicking another point. Clicking the background will reset the selection. | x_sc = LinearScale()
y_sc = LinearScale()
x_data = np.arange(20)
y_data = np.random.randn(20)
scatter_chart = Scatter(x=x_data, y=y_data, scales= {'x': x_sc, 'y': y_sc}, colors=['dodgerblue'],
interactions={'click': 'select'},
selected_style={'opacity': 1.0, 'fill': 'DarkOrange', 'stroke': 'Red'},
unselected_style={'opacity': 0.5})
ax_x = Axis(scale=x_sc)
ax_y = Axis(scale=y_sc, orientation='vertical', tick_format='0.2f')
Figure(marks=[scatter_chart], axes=[ax_x, ax_y])
scatter_chart.selected | examples/Interactions/Mark Interactions.ipynb | SylvainCorlay/bqplot | apache-2.0 |
Alternately, the selected attribute can be directly set on the Python side (try running the cell below): | scatter_chart.selected = [1, 2, 3] | examples/Interactions/Mark Interactions.ipynb | SylvainCorlay/bqplot | apache-2.0 |
Scatter Chart Interactions and Tooltips | x_sc = LinearScale()
y_sc = LinearScale()
x_data = np.arange(20)
y_data = np.random.randn(20)
dd = Dropdown(options=['First', 'Second', 'Third', 'Fourth'])
scatter_chart = Scatter(x=x_data, y=y_data, scales= {'x': x_sc, 'y': y_sc}, colors=['dodgerblue'],
names=np.arange(100, 200), names_unique=False, display_names=False, display_legend=True,
labels=['Blue'])
ins = Button(icon='fa-legal')
scatter_chart.tooltip = ins
line = Lines(x=x_data, y=y_data, scales= {'x': x_sc, 'y': y_sc}, colors=['dodgerblue'])
scatter_chart2 = Scatter(x=x_data, y=np.random.randn(20),
scales= {'x': x_sc, 'y': y_sc}, colors=['orangered'],
tooltip=dd, names=np.arange(100, 200), names_unique=False, display_names=False,
display_legend=True, labels=['Red'])
ax_x = Axis(scale=x_sc)
ax_y = Axis(scale=y_sc, orientation='vertical', tick_format='0.2f')
fig = Figure(marks=[scatter_chart, scatter_chart2, line], axes=[ax_x, ax_y])
fig
def print_event(self, target):
print(target)
# Adding call back to scatter events
# print custom message on hover and background click of Blue Scatter
scatter_chart.on_hover(print_event)
scatter_chart.on_background_click(print_event)
# print custom message on click of an element or legend of Red Scatter
scatter_chart2.on_element_click(print_event)
scatter_chart2.on_legend_click(print_event)
line.on_element_click(print_event)
# Changing interaction from hover to click for tooltip
scatter_chart.interactions = {'click': 'tooltip'}
# Adding figure as tooltip
x_sc = LinearScale()
y_sc = LinearScale()
x_data = np.arange(10)
y_data = np.random.randn(10)
lc = Lines(x=x_data, y=y_data, scales={'x': x_sc, 'y':y_sc})
ax_x = Axis(scale=x_sc)
ax_y = Axis(scale=y_sc, orientation='vertical', tick_format='0.2f')
tooltip_fig = Figure(marks=[lc], axes=[ax_x, ax_y], layout=Layout(min_width='600px'))
scatter_chart.tooltip = tooltip_fig | examples/Interactions/Mark Interactions.ipynb | SylvainCorlay/bqplot | apache-2.0 |
Image
For images, on_element_click returns the location of the mouse click. | i = ImageIpy.from_file(os.path.abspath('../data_files/trees.jpg'))
bqi = Image(image=i, scales={'x': x_sc, 'y': y_sc}, x=(0, 10), y=(-1, 1))
fig_image = Figure(marks=[bqi], axes=[ax_x, ax_y])
fig_image
bqi.on_element_click(print_event) | examples/Interactions/Mark Interactions.ipynb | SylvainCorlay/bqplot | apache-2.0 |
Line Chart | # Adding default tooltip to Line Chart
x_sc = LinearScale()
y_sc = LinearScale()
x_data = np.arange(100)
y_data = np.random.randn(3, 100)
def_tt = Tooltip(fields=['name', 'index'], formats=['', '.2f'], labels=['id', 'line_num'])
line_chart = Lines(x=x_data, y=y_data, scales= {'x': x_sc, 'y': y_sc},
tooltip=def_tt, display_legend=True, labels=["line 1", "line 2", "line 3"] )
ax_x = Axis(scale=x_sc)
ax_y = Axis(scale=y_sc, orientation='vertical', tick_format='0.2f')
Figure(marks=[line_chart], axes=[ax_x, ax_y])
# Adding call back to print event when legend or the line is clicked
line_chart.on_legend_click(print_event)
line_chart.on_element_click(print_event) | examples/Interactions/Mark Interactions.ipynb | SylvainCorlay/bqplot | apache-2.0 |
Bar Chart | # Adding interaction to select bar on click for Bar Chart
x_sc = OrdinalScale()
y_sc = LinearScale()
x_data = np.arange(10)
y_data = np.random.randn(2, 10)
bar_chart = Bars(x=x_data, y=[y_data[0, :].tolist(), y_data[1, :].tolist()], scales= {'x': x_sc, 'y': y_sc},
interactions={'click': 'select'},
selected_style={'stroke': 'orange', 'fill': 'red'},
labels=['Level 1', 'Level 2'],
display_legend=True)
ax_x = Axis(scale=x_sc)
ax_y = Axis(scale=y_sc, orientation='vertical', tick_format='0.2f')
Figure(marks=[bar_chart], axes=[ax_x, ax_y])
# Adding a tooltip on hover in addition to select on click
def_tt = Tooltip(fields=['x', 'y'], formats=['', '.2f'])
bar_chart.tooltip=def_tt
bar_chart.interactions = {
'legend_hover': 'highlight_axes',
'hover': 'tooltip',
'click': 'select',
}
# Changing tooltip to be on click
bar_chart.interactions = {'click': 'tooltip'}
# Call back on legend being clicked
bar_chart.type='grouped'
bar_chart.on_legend_click(print_event) | examples/Interactions/Mark Interactions.ipynb | SylvainCorlay/bqplot | apache-2.0 |
Histogram | # Adding tooltip for Histogram
x_sc = LinearScale()
y_sc = LinearScale()
sample_data = np.random.randn(100)
def_tt = Tooltip(formats=['', '.2f'], fields=['count', 'midpoint'])
hist = Hist(sample=sample_data, scales= {'sample': x_sc, 'count': y_sc},
tooltip=def_tt, display_legend=True, labels=['Test Hist'], select_bars=True)
ax_x = Axis(scale=x_sc, tick_format='0.2f')
ax_y = Axis(scale=y_sc, orientation='vertical', tick_format='0.2f')
Figure(marks=[hist], axes=[ax_x, ax_y])
# Changing tooltip to be displayed on click
hist.interactions = {'click': 'tooltip'}
# Changing tooltip to be on click of legend
hist.interactions = {'legend_click': 'tooltip'} | examples/Interactions/Mark Interactions.ipynb | SylvainCorlay/bqplot | apache-2.0 |
Pie Chart
Set up a pie chart with click to show the tooltip. | pie_data = np.abs(np.random.randn(10))
sc = ColorScale(scheme='Reds')
tooltip_widget = Tooltip(fields=['size', 'index', 'color'], formats=['0.2f', '', '0.2f'])
pie = Pie(sizes=pie_data, scales={'color': sc}, color=np.random.randn(10),
tooltip=tooltip_widget, interactions = {'click': 'tooltip'}, selected_style={'fill': 'red'})
pie.selected_style = {"opacity": "1", "stroke": "white", "stroke-width": "2"}
pie.unselected_style = {"opacity": "0.2"}
Figure(marks=[pie])
# Changing interaction to select on click and tooltip on hover
pie.interactions = {'click': 'select', 'hover': 'tooltip'} | examples/Interactions/Mark Interactions.ipynb | SylvainCorlay/bqplot | apache-2.0 |
Select clean 83mKr events
KR83m cuts similar to Adam's note:
https://github.com/XENON1T/FirstResults/blob/master/PositionReconstructionSignalCorrections/S2map/s2-correction-xy-kr83m-fit-in-bins.ipynb
Valid second interaction
Time between S1s in [0.6, 2] $\mu s$
z in [-90, -5] cm | # Get SR1 krypton datasets
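The same cuts can be sketched as pandas boolean masks, using the column names from the preselection strings below (toy events standing in for the minitree output):

```python
import pandas as pd

# Toy events; in practice these columns come from hax.minitrees.load
events = pd.DataFrame({
    'int_b_x': [-50.0, -70.0, -30.0],
    's1_a_center_time': [1000.0, 1000.0, 1000.0],
    's1_b_center_time': [1800.0, 1700.0, 4000.0],
    'z': [-50.0, -40.0, -50.0],
})

dt_s1 = events['s1_b_center_time'] - events['s1_a_center_time']
mask = (
    (events['int_b_x'] > -60.0)          # valid second interaction
    & (dt_s1 > 600) & (dt_s1 < 2000)     # time between S1s, in ns
    & (events['z'] > -90) & (events['z'] < -5)
)
selected = events[mask]
```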
dsets = hax.runs.datasets
dsets = dsets[dsets['source__type'] == 'Kr83m']
dsets = dsets[dsets['trigger__events_built'] > 10000] # Want a lot of Kr, not diffusion mode
dsets = hax.runs.tags_selection(dsets, include='sciencerun0')
# Sample ten datasets randomly (with fixed seed, so the analysis is reproducible)
dsets = dsets.sample(10, random_state=0)
dsets.number.values
# Suppress rootpy warning about root2rec.. too lazy to fix.
import warnings
with warnings.catch_warnings():
warnings.simplefilter("ignore")
data = hax.minitrees.load(dsets.number,
'Basics DoubleScatter Corrections'.split(),
num_workers=5,
preselection=['int_b_x>-60.0',
'600 < s1_b_center_time - s1_a_center_time < 2000',
'-90 < z < -5']) | notebooks/extraction/extract_s1s.ipynb | JelleAalbers/xeshape | mit |
Get S1s from these events | from hax.treemakers.peak_treemakers import PeakExtractor
dt = 10 * units.ns
wv_length = pax_config['BasicProperties.SumWaveformProperties']['peak_waveform_length']
waveform_ts = np.arange(-wv_length/2, wv_length/2 + 0.1, dt)
class GetS1s(PeakExtractor):
__version__ = '0.0.1'
uses_arrays = True
# (don't actually need all properties, but useful to check if there's some problem)
peak_fields = ['area', 'range_50p_area', 'area_fraction_top',
'n_contributing_channels', 'left', 'hit_time_std', 'n_hits',
'type', 'detector', 'center_time', 'index_of_maximum',
'sum_waveform',
]
peak_cut_list = ['detector == "tpc"', 'type == "s1"']
def get_data(self, dataset, event_list=None):
# Get the event list from the dataframe selected above
event_list = data[data['run_number'] == hax.runs.get_run_number(dataset)]['event_number'].values
return PeakExtractor.get_data(self, dataset, event_list=event_list)
def extract_data(self, event):
peak_data = PeakExtractor.extract_data(self, event)
# Convert sum waveforms from arcane pyroot buffer type to proper numpy arrays
for p in peak_data:
p['sum_waveform'] = np.array(list(p['sum_waveform']))
return peak_data
s1s = hax.minitrees.load(dsets.number, GetS1s, num_workers=5) | notebooks/extraction/extract_s1s.ipynb | JelleAalbers/xeshape | mit |
Save to disk
A Pandas object array is very memory-inefficient: it takes about 25 MB/dataset to store the waveforms in this format (even compressed). If we wanted to extract more than O(10) datasets we'd get into trouble already at the extraction stage.
The least we can do is convert to a sensible format (waveform matrix, ordinary dataframe) now. Unfortunately the dataframe retains its 'object' dtype even after deleting the sum waveform column; converting to and from a record array removes this. | waveforms = np.vstack(s1s['sum_waveform'].values)
del s1s['sum_waveform']
s1s = pd.DataFrame(s1s.to_records()) | notebooks/extraction/extract_s1s.ipynb | JelleAalbers/xeshape | mit |
Merge with the per-event data (which is useful e.g. for making position-dependent selections) | merged_data = hax.minitrees._merge_minitrees(s1s, data)
del merged_data['index']
np.savez_compressed('sr0_kr_s1s.npz', waveforms=waveforms)
merged_data.to_hdf('sr0_kr_s1s.hdf5', 'data') | notebooks/extraction/extract_s1s.ipynb | JelleAalbers/xeshape | mit |
Quick look | len(s1s)
from pax import units
plt.hist(s1s.left * 10 * units.ns / units.ms, bins=np.linspace(0, 2.5, 100));
plt.yscale('log') | notebooks/extraction/extract_s1s.ipynb | JelleAalbers/xeshape | mit |
The S1 is usually at the trigger time. | plt.hist(s1s.area, bins=np.logspace(0, 3, 100));
plt.axvline(35, color='r')
plt.yscale('log')
plt.xscale('log')
np.sum(s1s['area'] > 35)/len(s1s) | notebooks/extraction/extract_s1s.ipynb | JelleAalbers/xeshape | mit |
How to build a GeoDataFrame
We first explore how to do this using the GeoJSON schema.
See https://gist.github.com/sgillies/2217756 for the "__geo_interface__".
But this basically copies GeoJSON, for which see https://tools.ietf.org/html/rfc7946
It's then as simple as this... | point_features = [{"geometry": {
"type": "Point",
"coordinates": [102.0, 0.5]
},
"properties": {
"prop0": "value0", "prop1": "value1"
}
}]
point_data = gpd.GeoDataFrame.from_features(point_features)
point_data
point_data.ix[0].geometry
line_features = [{"geometry": {
"type": "LineString",
"coordinates": [[102.0, 0.5], [104, 3], [103, 2]]
},
"properties": {
"prop3": "value3"
}
}]
line_data = gpd.GeoDataFrame.from_features(line_features)
line_data
line_data.ix[0].geometry
polygon_features = [{"geometry": {
"type": "Polygon",
"coordinates": [[[102.0, 0.5], [104, 3], [102, 2], [102,0.5]]]
},
"properties": {
"prop4": "value4", "prop1": "value1"
}
}]
data = gpd.GeoDataFrame.from_features(polygon_features)
data
data.plot()
data.ix[0].geometry
features = []
features.extend(point_features)
features.extend(line_features)
features.extend(polygon_features)
gpd.GeoDataFrame.from_features(features) | notebooks/Geopandas.ipynb | MatthewDaws/OSMDigest | mit |
Notes
Some things that jumped out at me as I read the GeoJSON spec:
Coordinates are always in the order: longitude, latitude.
A "Polygon" is allowed to contain holes. The "outer" edge should be ordered counter-clockwise, and each "inner" edge (i.e. a "hole") should be clockwise.
If a polygon contains more than one array of points, then the first array is the outer edge, and the rest are inner edges.
Lines crossing the anti-meridian need to be split. (I wonder what OSM does?)
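The ring-orientation rules can be checked with a short shoelace computation (made-up coordinates):

```python
# A GeoJSON-style Polygon with one hole
polygon_with_hole = {
    "type": "Polygon",
    "coordinates": [
        # First ring: outer edge, counter-clockwise
        [[0, 0], [4, 0], [4, 4], [0, 4], [0, 0]],
        # Subsequent rings: holes, clockwise
        [[1, 1], [1, 3], [3, 3], [3, 1], [1, 1]],
    ],
}

def signed_area(ring):
    # Shoelace formula: positive for counter-clockwise rings
    return 0.5 * sum(x0 * y1 - x1 * y0
                     for (x0, y0), (x1, y1) in zip(ring, ring[1:]))

outer, hole = polygon_with_hole["coordinates"]
print(signed_area(outer), signed_area(hole))  # 16.0 -4.0
```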
Via using shapely
Under the hood, geopandas uses the shapely library, and we can alternatively build data frames by directly building shapely objects | type(point_data.geometry[0]), type(line_data.geometry[0]), type(data.geometry[0])
import shapely.geometry
pts = shapely.geometry.LineString([shapely.geometry.Point(0,0), shapely.geometry.Point(1,0), shapely.geometry.Point(1,1)])
df = gpd.GeoDataFrame({"geometry": [pts], "key1":["value1"], "key2":["value2"]})
df
df.ix[0].geometry | notebooks/Geopandas.ipynb | MatthewDaws/OSMDigest | mit |
Support in the library | import osmdigest.geometry as geometry
import osmdigest.sqlite as sq
import os
filename = os.path.join("..", "..", "..", "Data", "california-latest.db")
db = sq.OSM_SQLite(filename)
way = db.complete_way(33088737)
series = geometry.geoseries_from_way(way)
series
gpd.GeoDataFrame(series).T.plot()
way = db.complete_way(285549437)
series = geometry.geoseries_from_way(way)
series
df = gpd.GeoDataFrame(series).T
df
df.plot() | notebooks/Geopandas.ipynb | MatthewDaws/OSMDigest | mit |
For relations
We can build a geo data frame with the raw data from a relation. | relation = db.complete_relation(2866485)
geometry.geodataframe_from_relation(relation)
geometry.geodataframe_from_relation( db.complete_relation(63222) ) | notebooks/Geopandas.ipynb | MatthewDaws/OSMDigest | mit |
Looking at relations
These are harder to compute automatically, because the exact interpretation of the sub-elements depends upon context. However, most relations which have "interesting" geometry (as opposed to giving contextual information on other elements) are of "multi-polygon" type, and can be recognised by the presence of ways with the "role" of "inner" or "outer".
I found that using the shapely library itself was the easiest way to convert the geometry.
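As a sketch of that conversion (hypothetical coordinates, not taken from OSM): an "outer" ring plus "inner" rings map to a shapely Polygon with holes, and several of those can be combined into a MultiPolygon:

```python
from shapely.geometry import Polygon, MultiPolygon

# Hypothetical "outer" and "inner" rings, as an OSM multipolygon relation
# might supply them (coordinates made up for illustration):
outer_ring = [(0, 0), (6, 0), (6, 6), (0, 6), (0, 0)]
inner_ring = [(2, 2), (2, 3), (3, 3), (3, 2), (2, 2)]
lake = Polygon(outer_ring, [inner_ring])   # a polygon with a hole
island = Polygon([(2.3, 2.3), (2.7, 2.3), (2.7, 2.7), (2.3, 2.7)])
geom = MultiPolygon([lake, island])
print(geom.is_valid, geom.area)   # area: 36 - 1 + 0.16 = 35.16
```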
There are some cases of geometry which shapely cannot handle. For example:
- http://www.openstreetmap.org/relation/70986 (A lot of self-intersection, I think).
- http://www.openstreetmap.org/relation/184199 (Ditto).
- http://www.openstreetmap.org/relation/1483140 (Adjoining polygons). | gen = db.relations()
for _ in range(15):
next(gen)
relation = next(gen)
print(relation)
series = geometry.geoseries_from_relation(db.complete_relation(relation))
series
gpd.GeoDataFrame(series).T.plot() | notebooks/Geopandas.ipynb | MatthewDaws/OSMDigest | mit |
3. Enter BigQuery Query To Table Recipe Parameters
Specify a single query and choose legacy or standard mode.
For PLX use user authentication and: SELECT * FROM [plx.google:FULL_TABLE_NAME.all] WHERE...
Every time the query runs it will overwrite the table.
Modify the values below for your use case; this can be done multiple times. Then click play. | FIELDS = {
'auth_write':'service', # Credentials used for writing data.
'query':'', # SQL with newlines and all.
'dataset':'', # Existing BigQuery dataset.
'table':'', # Table to create from this query.
'legacy':True, # Query type must match source tables.
}
print("Parameters Set To: %s" % FIELDS)
| colabs/bigquery_query.ipynb | google/starthinker | apache-2.0 |
4. Execute BigQuery Query To Table
This does NOT need to be modified unless you are changing the recipe, click play. | from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'bigquery':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}},
'from':{
'query':{'field':{'name':'query','kind':'text','order':1,'default':'','description':'SQL with newlines and all.'}},
'legacy':{'field':{'name':'legacy','kind':'boolean','order':4,'default':True,'description':'Query type must match source tables.'}}
},
'to':{
'dataset':{'field':{'name':'dataset','kind':'string','order':2,'default':'','description':'Existing BigQuery dataset.'}},
'table':{'field':{'name':'table','kind':'string','order':3,'default':'','description':'Table to create from this query.'}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
| colabs/bigquery_query.ipynb | google/starthinker | apache-2.0 |
Then as a list of lines... (just one line) | with open('tmdb_5000_movies.csv','r') as f:
lines = [line for line in f]
lines[0] | three_agd.ipynb | Fifth-Cohort-Awesome/NightThree | mit |
Then as a data frame... (just Avatar) | import pandas as pd
df = pd.read_csv("tmdb_5000_movies.csv")
df.query('id == 19995') | three_agd.ipynb | Fifth-Cohort-Awesome/NightThree | mit |
Goal Two
Right now, the file is in a 'narrow' format. In other words, several interesting bits are collapsed into a single field. Let's attempt to convert the data frame to a 'wide' format, with all the collapsed items expanded horizontally.
References:
https://www.kaggle.com/fabiendaniel/film-recommendation-engine
http://www.jeannicholashould.com/tidy-data-in-python.html | import json
import pandas as pd
import numpy as np
df = pd.read_csv("tmdb_5000_movies.csv")
#convert to json
json_columns = ['genres', 'keywords', 'production_countries',
'production_companies', 'spoken_languages']
for column in json_columns:
df[column] = df[column].apply(json.loads)
def get_unique_inner_json(feature):
tmp = []
for i, row in df[feature].iteritems():
for x in range(0,len(df[feature].iloc[i])):
tmp.append(df[feature].iloc[i][x]['name'])
unique_values = set(tmp)
return unique_values
def widen_data(df, feature):
unique_json = get_unique_inner_json(feature)
tmp = []
#rearrange genres
for i, row in df.iterrows():
for x in range(0,len(row[feature])):
for val in unique_json:
if row[feature][x]['name'] == val:
row[val] = 1
tmp.append(row)
new_df = pd.DataFrame(tmp)
new_df[list(unique_json)] = new_df[list(unique_json)].fillna(value=0)
return new_df
genres_arranged_df = widen_data(df, "genres")
genres_arranged_df[list(get_unique_inner_json("genres"))] = genres_arranged_df[list(get_unique_inner_json("genres"))].astype(int)
genres_arranged_df.query('title == "Avatar"') | three_agd.ipynb | Fifth-Cohort-Awesome/NightThree | mit |
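An alternative, lighter-weight way to widen such a list-of-dicts column (a sketch on a hypothetical mini frame, not the TMDB data itself) uses explode and get_dummies:

```python
import pandas as pd

# A tiny stand-in for the TMDB frame: one list-of-dicts column per row.
df = pd.DataFrame({
    "title": ["Avatar", "Spectre"],
    "genres": [[{"name": "Action"}, {"name": "Sci-Fi"}], [{"name": "Action"}]],
})
names = df["genres"].apply(lambda gs: [g["name"] for g in gs])
# explode duplicates the row index per list item; get_dummies one-hot
# encodes the names; groupby(level=0).max() collapses back to one row each.
dummies = pd.get_dummies(names.explode()).groupby(level=0).max()
wide = df.join(dummies)
print(wide[["title", "Action", "Sci-Fi"]])
```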
Goal Three | genres_long_df = pd.melt(genres_arranged_df, id_vars=df.columns, value_vars=get_unique_inner_json("genres"), var_name="genre", value_name="genre_val")
genres_long_df = genres_long_df[genres_long_df['genre_val'] == 1]
genres_long_df.query('title == "Avatar"')
| three_agd.ipynb | Fifth-Cohort-Awesome/NightThree | mit |
2 Data preprocessing
Through feature extraction we obtain raw, unprocessed features, which may have the following problems:
Not on the same scale: the features have different specifications and cannot be compared directly. Dimensionless scaling solves this problem.
Information redundancy: for some quantitative features the useful information is an interval partition. For exam scores, for example, if we only care about "pass" or "fail", the quantitative score needs to be converted to "1" (pass) and "0" (fail). Binarization solves this problem.
Qualitative features cannot be used directly: some machine learning algorithms and models only accept quantitative features as input, so qualitative features must be converted to quantitative ones. The simplest way is to assign a numeric value to every qualitative value, but this is too ad hoc and increases the tuning effort. Usually dummy (one-hot) encoding is used instead: if there are N qualitative values, the feature is expanded into N features; when the original value is the i-th qualitative value, the i-th expanded feature is set to 1 and the others to 0. Compared with direct assignment, dummy encoding requires no extra tuning, and for linear models it can produce nonlinear effects.
Missing values: missing values need to be filled in.
Low information utilization: different machine learning algorithms and models use the information in the data differently. As mentioned above, dummy-encoding qualitative features can produce nonlinear effects in linear models; likewise, polynomial expansion of quantitative variables, or other transformations, can produce nonlinear effects.
We use sklearn's preprocessing module for data preprocessing, which covers solutions to all of the above problems.
2.1 Dimensionless scaling
Dimensionless scaling converts data of different scales to a common scale. Common methods are standardization and interval scaling. Standardization assumes the feature values follow a normal distribution; after standardization they follow a standard normal distribution. Interval scaling uses boundary information to scale the feature values to a particular range, such as [0, 1].
2.1.1 Standardization
Standardization requires computing the mean and standard deviation of each feature:
x' = (x - mean) / std
Standardizing the data with the StandardScaler class of the preprocessing module: | from sklearn.preprocessing import StandardScaler
# Standardization: returns the standardized data
StandardScaler().fit_transform(iris.data) | dev/pyml/2001_使用sklearn做单机特征工程.ipynb | karst87/ml | mit |
2.1.2 Interval scaling
There are several approaches to interval scaling; a common one uses the two extreme values:
x' = (x - min) / (max - min)
Scaling the data with the MinMaxScaler class of the preprocessing module: | from sklearn.preprocessing import MinMaxScaler
# Interval scaling: returns the data scaled to the interval [0, 1]
MinMaxScaler().fit_transform(iris.data) | dev/pyml/2001_使用sklearn做单机特征工程.ipynb | karst87/ml | mit |
2.1.3 Difference between standardization and normalization
Simply put, standardization processes the data column-wise in the feature matrix: via z-scores it converts the feature values to a common scale. Normalization processes the data row-wise: its purpose is that sample vectors have a unified standard when computing similarity via dot products or other kernel functions, i.e. they are converted to "unit vectors". The l2 normalization rule is:
x' = x / ((sum(x[j] ^ 2)) ^ 0.5)
Normalizing the data with the Normalizer class of the preprocessing module: | from sklearn.preprocessing import Normalizer
# Normalization: returns the normalized data
Normalizer().fit_transform(iris.data) | dev/pyml/2001_使用sklearn做单机特征工程.ipynb | karst87/ml | mit |
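A quick numeric check of the column-wise vs row-wise distinction (a small sketch, not in the original text):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler, Normalizer

X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])

# StandardScaler is column-wise: every column gets zero mean and unit std
Xs = StandardScaler().fit_transform(X)
print(Xs.mean(axis=0), Xs.std(axis=0))

# Normalizer is row-wise: every row becomes an l2 unit vector
Xn = Normalizer().fit_transform(X)
print(np.linalg.norm(Xn, axis=1))
```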
2.2 Binarizing quantitative features
The core of binarization is setting a threshold: values greater than the threshold become 1, values less than or equal to it become 0:
x = 1 if x > threshold else 0
Binarizing the data with the Binarizer class of the preprocessing module: | from sklearn.preprocessing import Binarizer
# Binarization with the threshold set to 3: returns the binarized data
Binarizer(threshold=3).fit_transform(iris.data) | dev/pyml/2001_使用sklearn做单机特征工程.ipynb | karst87/ml | mit |
2.3 Dummy encoding of qualitative features
Since all features of the IRIS dataset are quantitative, we dummy-encode its target values instead (in practice this is not needed). Dummy-encoding with the OneHotEncoder class of the preprocessing module: | from sklearn.preprocessing import OneHotEncoder
# Dummy encoding of the target values: returns the encoded data
OneHotEncoder().fit_transform(iris.target.reshape((-1,1))) | dev/pyml/2001_使用sklearn做单机特征工程.ipynb | karst87/ml | mit |
2.4 Missing value imputation
Since the IRIS dataset has no missing values, we add one sample to it with all 4 features set to NaN, representing missing data. Imputing missing values with the Imputer class of the preprocessing module: | import numpy as np
from sklearn.preprocessing import Imputer
# Missing value imputation: returns the data with missing values filled in
# The missing_values parameter is the representation of missing values, NaN by default
# The strategy parameter is the fill strategy, 'mean' by default
Imputer().fit_transform(\
np.vstack((np.array([np.nan, np.nan, np.nan, np.nan]),iris.data))) | dev/pyml/2001_使用sklearn做单机特征工程.ipynb | karst87/ml | mit |
2.5 Data transformation
Common data transformations are polynomial, exponential and logarithmic. For 4 features, the degree-2 polynomial transformation is:
(x1', x2', x3', ..., xn')
= (1, x1, x2, ..., xn, x1^2, x1*x2, x1*x3, ..., xn^2)
Transforming the data with the PolynomialFeatures class of the preprocessing module: | from sklearn.preprocessing import PolynomialFeatures
# Polynomial transformation
# The degree parameter sets the degree, 2 by default
PolynomialFeatures().fit_transform(iris.data) | dev/pyml/2001_使用sklearn做单机特征工程.ipynb | karst87/ml | mit |
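The expansion can be checked on a single sample (a small sketch, not in the original text):

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

# One sample with two features x1=2, x2=3; the degree-2 expansion should be
# [1, x1, x2, x1^2, x1*x2, x2^2] = [1, 2, 3, 4, 6, 9]
X = np.array([[2.0, 3.0]])
out = PolynomialFeatures(degree=2).fit_transform(X)
print(out)
```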
Data transformations based on univariate functions can be done in a uniform way; a logarithmic transformation of the data with the FunctionTransformer class of the preprocessing module: | from sklearn.preprocessing import FunctionTransformer
# Custom data transformation with a logarithmic function
# The first argument is a univariate function
FunctionTransformer(np.log1p).fit_transform(iris.data) | dev/pyml/2001_使用sklearn做单机特征工程.ipynb | karst87/ml | mit |
2.6 Review
Class / Function / Description
StandardScaler / dimensionless scaling / standardization: column-wise, converts feature values to a standard normal distribution
MinMaxScaler / dimensionless scaling / interval scaling: based on the min and max values, converts feature values to the interval [0, 1]
Normalizer / normalization / row-wise, converts sample vectors to "unit vectors"
Binarizer / binarization / partitions quantitative features according to a given threshold
OneHotEncoder / dummy encoding / encodes qualitative data as quantitative data
Imputer / missing value imputation / fills in missing values, e.g. with the mean
PolynomialFeatures / polynomial transformation / polynomial data transformation
FunctionTransformer / custom univariate transformation / transforms the data with a univariate function
3 Feature selection
After data preprocessing we need to select meaningful features to feed into the machine learning algorithms and models for training. Usually two aspects are considered when selecting features:
Whether the feature varies: if a feature does not vary, e.g. its variance is close to 0, the samples barely differ on this feature, and it is of no use for distinguishing samples.
Correlation between feature and target: obviously, features highly correlated with the target should be preferred. Except for the variance method, all other methods introduced in this article consider correlation.
By their form, feature selection methods can be divided into 3 kinds:
Filter: score each feature by its variance or its correlation, then select features by setting a threshold or the number of features to keep.
Wrapper: select or exclude several features at a time according to an objective function (usually a prediction performance score).
Embedded: first train some machine learning algorithm or model to obtain weight coefficients for each feature, then select features by coefficient magnitude. Similar to the Filter method, but the quality of the features is determined through training.
We use sklearn's feature_selection module for feature selection.
3.1 Filter
3.1.1 Variance threshold method
With the variance threshold method, first compute the variance of each feature, then select the features whose variance exceeds a threshold. Selecting features with the VarianceThreshold class of the feature_selection module: | from sklearn.feature_selection import VarianceThreshold
# Variance threshold selection: returns the data after feature selection
# The threshold parameter is the variance threshold
VarianceThreshold(threshold=3).fit_transform(iris.data) | dev/pyml/2001_使用sklearn做单机特征工程.ipynb | karst87/ml | mit |
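On a toy matrix with a constant column the effect is easy to see (a small sketch, not in the original text):

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold

# The middle column is constant (variance 0) and is dropped with the
# default threshold of 0; the other two columns survive.
X = np.array([[1.0, 5.0, 0.1], [2.0, 5.0, 0.2], [3.0, 5.0, 0.3]])
Xv = VarianceThreshold().fit_transform(X)
print(Xv)
```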
3.1.2 Correlation coefficient method
With the correlation coefficient method, first compute each feature's correlation coefficient with the target value and the corresponding p-value. Selecting features with the SelectKBest class of the feature_selection module combined with correlation coefficients: | from sklearn.feature_selection import SelectKBest
from scipy.stats import pearsonr
# Select the K best features: returns the data after feature selection
# The first argument is a scoring function that takes the feature matrix and the target
# vector and returns an array of (score, p-value) pairs; the i-th entry is the score
# and p-value of the i-th feature. Here it computes correlation coefficients.
# The k parameter is the number of features to select
SelectKBest(lambda X, Y: tuple(map(tuple,np.array(list(map(lambda x:pearsonr(x, Y), X.T))).T)), k=2).fit_transform(iris.data, iris.target) | dev/pyml/2001_使用sklearn做单机特征工程.ipynb | karst87/ml | mit |
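The lambda above is rather dense; an alternative sketch (not in the original text) uses the built-in f_regression scorer, whose F-scores are a monotone function of the squared Pearson correlation for single features, so the same k features are kept:

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_regression

iris = load_iris()
# f_regression ranks features the same way a pearsonr-based scorer would,
# since its F-score grows monotonically with the squared correlation.
X_new = SelectKBest(f_regression, k=2).fit_transform(iris.data, iris.target)
print(X_new.shape)
```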
This reveals that we have the function
$$ expi(u) = \intop_{-\infty}^u \frac {e^y} y dy $$
Substituting $y = -\xi$ (so that $dy = -d\xi$ and the limits $y = u$, $y = \infty$ become $\xi = -u$, $\xi = -\infty$) gives
$$ W(u) = \intop_u^\infty \frac {e^{-y}} y dy = - \intop_{\xi = -\infty}^{\xi = -u} \frac {e^{\xi}} \xi d\xi = -expi(-u) $$
So that
$$ W(u) = -expi(-u) $$
according to the definition used in scipy.special.expi.
Notice that different libraries and books may define the exponential integral differently. The famous `Abramowitz M & Stegun, I (1964) Handbook of Mathematical Functions. Dover`, for example, defines the exponential integral exactly as the Theis well function.
We can readily check the expi function using the table from Kruseman and De Ridder (2000) p294 that was referenced above. Verifying for example the values for u = 4, 0.4, 0.04, 0.004 etc. down to $4 \times 10^{-10}$ can be done as follows: | u = 4 * 10** -np.arange(11.) # generates values 4, 4e-1, 4e-2 .. 4e-10
print("{:>10s} {:>10s}".format('u ', 'wu '))
for u, wu in zip(u, -expi(-u)): # makes a list of value pairs [u, W(u)]
print("{0:10.1e} {1:10.4e}".format(u, wu)) | exercises_notebooks/TransientFlowToAWell.ipynb | Olsthoorn/TransientGroundwaterFlow | gpl-3.0 |
which is equal to the values in the table.
It's now convenient to use the familiar form W(u) instead of -expi(-u).
We can define a function for W either as an anonymous function or a regular function. Anonymous functions are called lambda functions or lambda expressions in Python. In this case: | from scipy.special import expi
W = lambda u : -expi(-u) | exercises_notebooks/TransientFlowToAWell.ipynb | Olsthoorn/TransientGroundwaterFlow | gpl-3.0 |
Or, alternatively as a regular one-line function: | def W(u): return -expi(-u) | exercises_notebooks/TransientFlowToAWell.ipynb | Olsthoorn/TransientGroundwaterFlow | gpl-3.0 |
or in full, so that we don't need the import above and we directly see where the function comes from: | import scipy
W = lambda u: -scipy.special.expi( -u ) # Theis well function | exercises_notebooks/TransientFlowToAWell.ipynb | Olsthoorn/TransientGroundwaterFlow | gpl-3.0 |
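As a quick sanity check of this function (not in the original text): for small u the well function is known to approach Jacob's logarithmic approximation, W(u) ≈ -0.5772 - ln(u), where 0.5772... is Euler's constant:

```python
import numpy as np
from scipy.special import expi

W = lambda u: -expi(-u)  # Theis well function

# Jacob's approximation W(u) ~= -0.5772 - ln(u) improves as u -> 0
for u in (1e-2, 1e-4, 1e-6):
    print(u, W(u), -0.577216 - np.log(u))
```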
Now we can put this well function immediately to use for answering practical questions. For example: what is the drawdown after $t=1\,d$ at distance $r=350 \, m$ by a well extracting $Q = 2400\, m^3/d$ in a confined aquifer with transmissivity $kD = 2400\, m^2/d$ and storage coefficient $S=0.001$ [-] ? | r = 350; t = 1.; kD=2400; S=0.001; Q=2400
u = r**2 * S / (4 * kD * t)
s = Q/(4 * np.pi * kD) * W(u) # applying the theis well function according to the book
print(" r = {} m\n\
t = {} d\n\
kD = {} m2/d\n\
S = {} [-]\n\
Q = {} m3/d\n\
u = {:.5g} [-]\n\
W(u) = {:.5g} [-]\n\
s(r, t) = {:.5g} m".
format(r, t, kD, S, Q, u, W(u), s)) | exercises_notebooks/TransientFlowToAWell.ipynb | Olsthoorn/TransientGroundwaterFlow | gpl-3.0 |
Above we computed $u$ separately to prevent cluttering the expression. Of course, you can define a lambda or regular function to compute like so | u = lambda r, t: r**2 * S / (4 * kD * t) | exercises_notebooks/TransientFlowToAWell.ipynb | Olsthoorn/TransientGroundwaterFlow | gpl-3.0 |
The lambda function $u$ now takes two parameters like $u(r,t)$ and uses the other parameters $S$ and $kD$ that it finds in the workspace at the moment when the lambda function is created. So don't change $S$ and $kD$ afterwards without redefining $u(r,t)$.
Try this out: | u(r,t) # yields u as a function of r and t
W(u(r,t)) # given W(u) as a function of r and t
Q/(4 * np.pi * kD) * W(u(r,t)) # gives the drawdown that we had before | exercises_notebooks/TransientFlowToAWell.ipynb | Olsthoorn/TransientGroundwaterFlow | gpl-3.0 |
It's now straightforward to compute the drawdown for many times like so: | t = np.logspace(-3, 2, 51) # gives 51 times on log scale between 10^(-3) = 0.001 and 10^(2) = 100
This gives the following times: | for it, tt in enumerate(t):
if it % 10 == 0: print()
print("%8.3g" % tt, end=" ") | exercises_notebooks/TransientFlowToAWell.ipynb | Olsthoorn/TransientGroundwaterFlow | gpl-3.0 |
With these times we can compute the drawdown for all these times in a single strike without changing anything to our formula: | s = Q / (4 * np.pi * kD) * W(u(r,t)) # computes s(r,t)
s # shows s(r,t) | exercises_notebooks/TransientFlowToAWell.ipynb | Olsthoorn/TransientGroundwaterFlow | gpl-3.0 |
For a nicer overview, print t and s next to each other: | print("{:>10s} {:>10s}".format('time', 'drawdown'))
for tt, ss in zip(t, s):
print("{0:10.3g} {1:10.3g}".format(tt,ss)) | exercises_notebooks/TransientFlowToAWell.ipynb | Olsthoorn/TransientGroundwaterFlow | gpl-3.0 |
And of course we can make a plot of these results: | import matplotlib.pyplot as plt # imports plot functions (matlab style)
fig = plt.figure()
# Drawdown versus log(t)
ax1 = fig.add_subplot(121)
ax1.set(xlabel='time [d]', ylabel='drawdown [m]', xscale='log', title='Drawdown versus log(t)')
ax1.invert_yaxis()
ax1.grid(True)
plt.plot(t, s)
# Drawdown versus t
ax2 = fig.add_subplot(122)
ax2.set(xlabel='time [d]', ylabel='', xscale='linear', title='Drawdown versus t')
ax2.invert_yaxis()
ax2.grid(True)
plt.plot(t, s)
plt.show() | exercises_notebooks/TransientFlowToAWell.ipynb | Olsthoorn/TransientGroundwaterFlow | gpl-3.0 |
Exercises
Show the drawdown as a function of r instead of t, for t=2 d and r between 0.1 and 1000 m
For the 5 wells of which the locations and extractions are given below, show the combined drawdown for time between 0.01 and 10 days at x= 0 and y = 0. | well_names = ['School', 'Lazaret', 'Square', 'Mosque', 'Water_company']
Q = [400., 1200., 1150., 600., 1900]
x = [-300., -250., 100., 55., 125.]
y =[-450., +230., 50., -300., 250.]
Nwells = len(well_names)
x0 = 0.
y0 = 0.
t = np.logspace(-2, 2, 41)
s = np.zeros((Nwells, len(t)))
for iw, Q0, xw, yw in zip(range(Nwells), Q, x, y):
r = np.sqrt((xw-x0) ** 2 + (yw - y0) **2)
s[iw,:] = Q0 / (4 * np.pi * kD) * W(u(r,t))
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set(xlabel='time [d]', ylabel='drawdown[m]', title='Drawdown due to multiple wells')
ax.invert_yaxis()
ax.grid(True)
for iw, name in zip(range(Nwells), well_names):
ax.plot(t, s[iw,:], label=name)
ax.plot(t, np.sum(s, axis=0), label='total_drawdown')
ax.legend()
plt.show() | exercises_notebooks/TransientFlowToAWell.ipynb | Olsthoorn/TransientGroundwaterFlow | gpl-3.0 |
Minimal example
Generate a .csv file that is accepted as input to SmartVA-Analyze 1.1 | # SmartVA-Analyze 1.1 accepts a csv file as input
# and expects a column for every field name in the "Guide for data entry.xlsx" spreadsheet
df = pd.DataFrame(index=[0], columns=cb.index.unique())
# SmartVA-Analyze 1.1 also requires a handful of columns that are not in the Guide
df['child_3_10'] = np.nan
df['agedays'] = np.nan
df['child_5_7e'] = np.nan
df['child_5_6e'] = np.nan
df['adult_2_9a'] = np.nan
df.loc[0,'sid'] = 'example'
# if we save this dataframe as a csv, we can run it through SmartVA-Analyze 1.1
fname = 'example_1.csv'
df.to_csv(fname, index=False)
# here are the results of running this example through SmartVA-Analyze 1.1
pd.read_csv('neonate-predictions.csv') | 01_example_mapping_in_python.ipynb | aflaxman/SmartVA-Analyze-Mapping-Example | gpl-3.0 |
Example of simple, hypothetical mapping
If we have data on a set of verbal autopsies (VAs) that did not use the PHMRC Shortened Questionnaire, we must map them to the expected format. This is a simple, hypothetical example for a set of VAs that asked only about injuries, hypertension, chest pain: | hypothetical_data = pd.DataFrame(index=range(5))
hypothetical_data['sex'] = ['M', 'M', 'F', 'M', 'F']
hypothetical_data['age'] = [35, 45, 75, 67, 91]
hypothetical_data['injury'] = ['rti', 'fall', '', '', '']
hypothetical_data['heart_disease'] = ['N', 'N', 'Y', 'Y', 'Y']
hypothetical_data['chest_pain'] = ['N', 'N', 'Y', 'N', '']
hypothetical_data
# SmartVA-Analyze 1.1 accepts a csv file as input
# and expects a column for every field name in the "Guide for data entry.xlsx" spreadsheet
df = pd.DataFrame(index=hypothetical_data.index, columns=cb.index.unique())
# SmartVA-Analyze 1.1 also requires a handful of columns that are not in the Guide
df['child_3_10'] = np.nan
df['agedays'] = np.nan
df['child_5_7e'] = np.nan
df['child_5_6e'] = np.nan
df['adult_2_9a'] = np.nan
# to find the coding of specific variables, look in the Guide, and
# as necessary refer to the numbers in paper form for the PHMRC Shortened Questionnaire
# http://www.healthdata.org/sites/default/files/files/Tools/SmartVA/2015/PHMRC%20Shortened%20VAI_all-modules_2015.zip
# set id
df['sid'] = hypothetical_data.index
# set sex
df['gen_5_2'] = hypothetical_data['sex'].map({'M': '1', 'F': '2'})
# set age
df['gen_5_4'] = 1 # units are years
df['gen_5_4a'] = hypothetical_data['age'].astype(int)
# good place to save work and confirm that it runs through SmartVA
fname = 'example_2.csv'
df.to_csv(fname, index=False)
# here are the results of running this example
pd.read_csv('adult-predictions.csv')
# map injuries to appropriate codes
# suffered injury?
df['adult_5_1'] = hypothetical_data['injury'].map({'rti':'1', 'fall':'1', '':'0'})
# injury type
df['adult_5_2'] = hypothetical_data['injury'].map({'rti':'1', 'fall':'2'})
# _another_ good place to save work and confirm that it runs through SmartVA
fname = 'example_3.csv'
df.to_csv(fname, index=False)
# here are the results of running this example
pd.read_csv('adult-predictions.csv')
# map heart disease (to column adult_1_1i, see Guide)
df['adult_1_1i'] = hypothetical_data['heart_disease'].map({'Y':'1', 'N':'0'})
# map chest pain (to column adult_2_43, see Guide)
df['adult_2_43'] = hypothetical_data['chest_pain'].map({'Y':'1', 'N':'0', '':'9'})
# and that completes the work for a simple, hypothetical mapping
fname = 'example_4.csv'
df.to_csv(fname, index=False)
# have a look at the non-empty entries in the mapped database:
df.T.dropna()
# here are the results of running this example
pd.read_csv('adult-predictions.csv') | 01_example_mapping_in_python.ipynb | aflaxman/SmartVA-Analyze-Mapping-Example | gpl-3.0 |
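One detail worth noting with this map-based approach (not covered above): Series.map returns NaN for categories missing from the mapping dict. A sketch of a fallback, assuming '9' is the "don't know" code for the field in question (check the Guide for the actual coding):

```python
import pandas as pd

# 'maybe' is absent from the mapping dict, so map() yields NaN for it;
# fillna supplies the hypothetical "don't know" code '9'.
s = pd.Series(['Y', 'N', 'maybe'])
mapped = s.map({'Y': '1', 'N': '0'}).fillna('9')
print(mapped.tolist())
```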
Analysis of evoked response using ICA and PCA reduction techniques
This example computes PCA and ICA of evoked or epochs data. Then the
PCA / ICA components, a.k.a. spatial filters, are used to transform
the channel data to new sources / virtual channels. The output is
visualized on the average of all the epochs. | # Authors: Jean-Remi King <jeanremi.king@gmail.com>
# Asish Panda <asishrocks95@gmail.com>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
from mne.decoding import UnsupervisedSpatialFilter
from sklearn.decomposition import PCA, FastICA
print(__doc__)
# Preprocess data
data_path = sample.data_path()
# Load and filter data, set up epochs
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
tmin, tmax = -0.1, 0.3
event_id = dict(aud_l=1, aud_r=2, vis_l=3, vis_r=4)
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, 20, fir_design='firwin')
events = mne.read_events(event_fname)
picks = mne.pick_types(raw.info, meg=False, eeg=True, stim=False, eog=False,
exclude='bads')
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=False,
picks=picks, baseline=None, preload=True,
verbose=False)
X = epochs.get_data() | 0.17/_downloads/8b68ef11c9dcc68ed3cd0ccec9a41a34/plot_decoding_unsupervised_spatial_filter.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Transform data with PCA computed on the average, i.e. the evoked response | pca = UnsupervisedSpatialFilter(PCA(30), average=False)
pca_data = pca.fit_transform(X)
ev = mne.EvokedArray(np.mean(pca_data, axis=0),
mne.create_info(30, epochs.info['sfreq'],
ch_types='eeg'), tmin=tmin)
ev.plot(show=False, window_title="PCA", time_unit='s') | 0.17/_downloads/8b68ef11c9dcc68ed3cd0ccec9a41a34/plot_decoding_unsupervised_spatial_filter.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
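With average=False the spatial filter fits the estimator on the channel-by-(epochs × times) matrix; a hand-rolled sketch of the same idea on synthetic data (shapes are made up, not the sample dataset used above):

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic epochs array with the same layout MNE uses:
# (n_epochs, n_channels, n_times)
n_epochs, n_channels, n_times = 20, 60, 50
rng = np.random.RandomState(0)
X = rng.randn(n_epochs, n_channels, n_times)

# Concatenate epochs along time: (n_channels, n_epochs * n_times)
X2d = np.hstack(X)
pca = PCA(30).fit(X2d.T)               # samples = time points, features = channels
sources = pca.transform(X2d.T).T.reshape(30, n_epochs, n_times)
print(sources.shape)                    # 30 components x epochs x times
```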
Transform data with ICA computed on the raw epochs (no averaging) | ica = UnsupervisedSpatialFilter(FastICA(30), average=False)
ica_data = ica.fit_transform(X)
ev1 = mne.EvokedArray(np.mean(ica_data, axis=0),
mne.create_info(30, epochs.info['sfreq'],
ch_types='eeg'), tmin=tmin)
ev1.plot(show=False, window_title='ICA', time_unit='s')
plt.show() | 0.17/_downloads/8b68ef11c9dcc68ed3cd0ccec9a41a34/plot_decoding_unsupervised_spatial_filter.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |