ERED Light Curve

Sarah Chastain edited this page Jun 4, 2021 · 6 revisions

Definition

The Exponential-Rise Exponential-Decay (ERED) light curve is a Wilma light curve immediately followed by a FRED light curve, and is defined mathematically as follows:

import numpy as np 
F(t) = F0*np.exp((t - t_crit)/tau)  # if t < t_crit
F(t) = F0                           # if t == t_crit
F(t) = F0*np.exp(-(t - t_crit)/tau) # if t > t_crit
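As a quick illustration, the piecewise definition can be evaluated with a vectorized helper (a minimal sketch; the function name ered_flux and the example values are illustrative, not from the codebase):

```python
import numpy as np

def ered_flux(t, F0, tau, t_crit):
    """Evaluate the ERED light curve: exponential rise up to t_crit,
    exponential decay afterwards, peaking at F0."""
    t = np.asarray(t, dtype=np.float64)
    rise = F0 * np.exp((t - t_crit) / tau)    # Wilma portion, t < t_crit
    decay = F0 * np.exp(-(t - t_crit) / tau)  # FRED portion, t >= t_crit
    return np.where(t < t_crit, rise, decay)
```

Note that the curve is continuous at t_crit, where both branches equal F0.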

Integrated Flux

Computational problems arise when tcrit falls outside of the observation: the exponentials blow up. The implementation therefore actually looks like this:

        # Q: Why is this part so complicated? 
        # A: Because of overflows, underflows, and all kinds of computational issues when tcrit falls outside of the observation. Also for debugging.
        # Q: Why is there a tau/2? 
        # A: Good question. Ok, so how do we want to define tau? We already decided in definitions of earliest_crit_time and latest_crit_time to 
        #    define tau as something that is equivalent to the tau in the fred and wilma cases. Therefore, because we have a wilma glued to a fred for this light curve,
        #    we set half of tau to be the exponential decay factor. 

        tau = tau/2
        
        # In order to avoid computational issues, we create masks for three different possible scenarios:
        # 1. The critical time of the transient is before the start of the observation and looks like a fred in the observation
        # 2. The critical time is after the end of the observation and looks like a wilma
        # 3. The critical time is during the observation and the wilma and fred portions must be calculated. 
        scen1 = (tcrit < start_obs)
        scen2 = (tcrit > end_obs)
        scen3 = (tcrit >= start_obs) & (tcrit <= end_obs)


        flux_int = np.zeros(tcrit.shape[0],dtype=np.float64)
               
        # Scenario one 
        tstart1 = np.maximum(tcrit[scen1], start_obs) - tcrit[scen1]
        tend1 = end_obs - tcrit[scen1]
        flux_int[scen1] = F0[scen1]*tau[scen1]*(np.exp(-tstart1/tau[scen1]) - np.exp(-tend1/tau[scen1]))/(end_obs - start_obs)
        
        # Scenario two
        tstart2 = start_obs - tcrit[scen2]  # The burst peaks after the observation ends, so we integrate from the beginning of the observation
        tend2 = np.minimum(end_obs,tcrit[scen2]) - tcrit[scen2] 
        flux_int[scen2] = F0[scen2]*tau[scen2]*(np.exp(tend2/tau[scen2]) - np.exp(tstart2/tau[scen2]))/(end_obs - start_obs)

        # Scenario three 
        tstart3f = np.maximum(tcrit[scen3], start_obs) - tcrit[scen3]
        tend3f = end_obs - tcrit[scen3]
        tstart3w = start_obs - tcrit[scen3]
        tend3w = np.minimum(end_obs,tcrit[scen3]) - tcrit[scen3] 
        flux_int[scen3] = F0[scen3]*tau[scen3]*(np.exp(-tstart3f/tau[scen3]) - np.exp(-tend3f/tau[scen3]) + np.exp(tend3w/tau[scen3]) - np.exp(tstart3w/tau[scen3]))/(end_obs - start_obs)

        return flux_int

The comments in the code should make things clear.
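As a sanity check on the scenario-three branch, the analytic average flux can be compared against a brute-force numerical integration of the piecewise light curve (all variable names and values below are illustrative):

```python
import numpy as np

# Observation window and transient parameters (illustrative values)
start_obs, end_obs = 0.0, 10.0
F0, tau, tcrit = 2.0, 4.0, 3.0
h = tau / 2  # half of tau is the e-folding time of each exponential

# Analytic average flux for scenario 3 (tcrit inside the observation)
analytic = F0*h*(2 - np.exp(-(end_obs - tcrit)/h)
                 - np.exp((start_obs - tcrit)/h))/(end_obs - start_obs)

# Brute-force trapezoidal average of the piecewise light curve
t = np.linspace(start_obs, end_obs, 200001)
curve = np.where(t < tcrit, F0*np.exp((t - tcrit)/h), F0*np.exp(-(t - tcrit)/h))
numeric = np.sum(0.5*(curve[1:] + curve[:-1])*np.diff(t))/(end_obs - start_obs)
```

The two values agree to numerical precision, confirming the scenario-three expression.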

Probability Contours

Lines

Each probability contour plot has three vertical lines or curves:

Shortest observation

The leftmost vertical line in each plot represents the duration of the shortest observation.

Longest Gap

The second line from the left is a curve representing the shortest possible duration for a transient that will always be detected. Mathematically:

F_int = S_obs 
# But we can substitute for F_int
F0*tau*(2 - np.exp((T_start - t_crit)/tau) - np.exp(-(T_end - t_crit)/tau))/(T_end - T_start) = S_obs
F0 = S_obs*(T_end - T_start)/tau/(2 - np.exp((T_start - t_crit)/tau) - np.exp(-(T_end - t_crit)/tau))

We now have F0 as a function of tau. We simply need to figure out what T_start, T_end, t_crit, and S_obs are.
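As a sketch, this relation can be wrapped in a small helper (the name solve_F0 is illustrative; tau here is already the halved e-folding time, and t_crit is assumed to lie inside the observation):

```python
import numpy as np

def solve_F0(tau, t_crit, T_start, T_end, S_obs):
    """Return the peak flux F0 such that the average ERED flux over
    [T_start, T_end] equals the sensitivity S_obs. tau is the halved
    (e-folding) tau, and t_crit lies inside the observation."""
    denom = 2 - np.exp((T_start - t_crit)/tau) - np.exp(-(T_end - t_crit)/tau)
    return S_obs*(T_end - T_start)/(tau*denom)
```

With F0 chosen this way, the transient sits exactly at the detection threshold of the observation.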

S_obs is the sensitivity of the observation that would detect a transient falling in the longest gap. This is more complicated than in the FRED or Wilma case: the transient could be detected (or missed) either before or after the gap, depending on when it occurs and how sensitive the observations are. We could try to figure out which one it is, but in the end we need a scalar value. Therefore we make a guess/approximation for now that the more sensitive of the two neighboring observations is the correct one:

sens_maxgapbefore = obs['sens'][np.where((gaps[:] == max(gaps)))[0]-1][0]
sens_maxgapafter = obs['sens'][np.where((gaps[:] == max(gaps)))[0]+1][0]
sens_maxgap = min(sens_maxgapbefore, sens_maxgapafter)
sens_argmin = np.argmin(np.array([sens_maxgapbefore, sens_maxgapafter]))
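For concreteness, here is a self-contained toy version of this selection, assuming gaps[i] is the gap that follows observation i (the arrays and the argmax-based indexing are illustrative, not necessarily the pipeline's convention):

```python
import numpy as np

# Toy survey: four observations with start times, durations, and sensitivities
sens = np.array([0.12, 0.08, 0.15, 0.10])
starts = np.array([0.0, 10.0, 40.0, 55.0])
ends = starts + 5.0
gaps = starts[1:] - ends[:-1]            # gap i follows observation i (assumed)

i = int(np.argmax(gaps))                 # index of the largest gap
sens_maxgapbefore = sens[i]              # observation just before the gap
sens_maxgapafter = sens[i + 1]           # observation just after the gap
sens_maxgap = min(sens_maxgapbefore, sens_maxgapafter)  # more sensitive neighbor
```

Here the largest gap is the 25-unit one between the second and third observations, and the second observation (sens = 0.08) is the more sensitive neighbor.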

We can set t_crit=0 and set this as the beginning of our "epoch."

By the definition of the problem, the transient's critical time occurs at exactly half the maximum gap. We don't know whether the transient is detected before or after the gap, but we assume it is detected in whichever is the more sensitive observation. We have stored the index of this observation in sens_argmin, as shown above.

If the transient is detected before the gap, then T_start = -(obsdur + maxdistance/2) and T_end = -(maxdistance/2). In this case, the transient just looks like a Wilma within the observation, and we have:

# recall that the e-folding time is tau/2; we insert it here:
F0 = S_obs*(-(maxdistance/2) + obsdur + (maxdistance/2))/(tau/2)/(np.exp(-(maxdistance/2)/(tau/2)) - np.exp(-(obsdur + maxdistance/2)/(tau/2)))
# simplify
F0 = S_obs*(obsdur)/(tau/2)/(np.exp(-maxdistance/tau) - np.exp(-(2*obsdur + maxdistance)/tau))

Note that, apart from the numerical value of obsdur, this is identical to the case where the transient is detected at the end of the maximum gap: that case looks like a FRED, which simply flips the sign of the time values.
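A minimal sketch of evaluating this curve over a grid of tau values (the survey numbers below are illustrative):

```python
import numpy as np

# Illustrative survey numbers: sensitivity of the neighboring observation,
# observation duration, and the largest gap (all in consistent units)
S_obs, obsdur, maxdistance = 0.1, 0.5, 20.0

tau = np.logspace(-1, 3, 200)  # grid of trial transient durations
F0_gap = S_obs*obsdur/(tau/2)/(
    np.exp(-maxdistance/tau) - np.exp(-(2*obsdur + maxdistance)/tau))
```

For small tau the required F0 blows up (the transient decays away inside the gap), while for large tau it approaches S_obs, as expected for a transient that is effectively constant across the gap.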

Longest Timescale

The third line from the left is a curve representing the longest duration a transient can have before it is detected as a constant source.

For the ERED light curve, the transient will be non-detected in either the first or the last observation. This time we assume that the non-detection occurs in the least sensitive observation:

sens_last = obs['sens'][-1]
sens_first = obs['sens'][0]
sens_maxtime = max(sens_last, sens_first)
# index 0 corresponds to the last observation, index 1 to the first
maxargobs = np.argmax(np.array([sens_last, sens_first]))
maxtime_obs = np.array([obs['duration'][-1], obs['duration'][0]])[maxargobs]

First we examine the case where the transient is non-detected in the first observation. We again define the epoch by setting t_crit = 0. This means that T_start = -(tsurvey/2 + obsdur) and T_end = -tsurvey/2. In this observation, we only need to consider the Wilma portion of the light curve:

# recall that the e-folding time is tau/2; we insert it here:
F0 = S_obs*(-(tsurvey/2) + (tsurvey/2 + obsdur))/(tau/2)/(np.exp(-(tsurvey/2)/(tau/2)) - np.exp(-(tsurvey/2 + obsdur)/(tau/2)))
# simplify
F0 = S_obs*(obsdur)/(tau/2)/(np.exp(-tsurvey/tau) - np.exp(-(2*obsdur + tsurvey)/tau))

By inspection, just as before, the case where the transient is non-detected in the last observation is identical except for the specific value of obsdur.
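As a check, the simplified expression can be verified numerically: with F0 chosen this way, the average of the Wilma portion over the first observation equals S_obs (all values below are illustrative):

```python
import numpy as np

# Illustrative values: sensitivity, observation length, survey span, trial tau.
# The epoch is defined by t_crit = 0, as in the text.
S_obs, obsdur, tsurvey, tau = 0.1, 0.5, 365.0, 50.0

F0 = S_obs*obsdur/(tau/2)/(
    np.exp(-tsurvey/tau) - np.exp(-(2*obsdur + tsurvey)/tau))

# The first observation spans [-(tsurvey/2 + obsdur), -tsurvey/2]; only the
# rising (Wilma) half of the light curve contributes there.
t = np.linspace(-(tsurvey/2 + obsdur), -tsurvey/2, 100001)
curve = F0*np.exp(t/(tau/2))
avg = np.sum(0.5*(curve[1:] + curve[:-1])*np.diff(t))/obsdur
```

The recovered average flux matches S_obs, i.e. the transient sits exactly at the non-detection threshold.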