def _multiple_self_ref_fk_check(class_model):
    self_fk = []
    for f in class_model._meta.concrete_fields:
        if f.related_model in self_fk:
            return True
        if f.related_model == class_model:
            self_fk.append(class_model)
    return False
We check whether a class has more than 1 FK reference to itself.
def get_citation_years(graph: BELGraph) -> List[Tuple[int, int]]:
    return create_timeline(count_citation_years(graph))
Create a citation timeline counter from the graph.
def _get_Ks(self):
    "Ks as an array and type-checked."
    Ks = as_integer_type(self.Ks)
    if Ks.ndim != 1:
        raise TypeError("Ks should be 1-dim, got shape {}".format(Ks.shape))
    if Ks.min() < 1:
        raise ValueError("Ks should be positive; got {}".format(Ks.min()))
    return Ks
Ks as an array and type-checked.
def bake(binder, recipe_id, publisher, message, cursor):
    recipe = _get_recipe(recipe_id, cursor)
    includes = _formatter_callback_factory()
    binder = collate_models(binder, ruleset=recipe, includes=includes)

    def flatten_filter(model):
        return (isinstance(model, cnxepub.CompositeDocument) or
                (isinstance(model, cnxepub.Binder) and
                 model.metadata.get('type') == 'composite-chapter'))

    def only_documents_filter(model):
        return (isinstance(model, cnxepub.Document) and
                not isinstance(model, cnxepub.CompositeDocument))

    for doc in cnxepub.flatten_to(binder, flatten_filter):
        publish_composite_model(cursor, doc, binder, publisher, message)
    for doc in cnxepub.flatten_to(binder, only_documents_filter):
        publish_collated_document(cursor, doc, binder)
    tree = cnxepub.model_to_tree(binder)
    publish_collated_tree(cursor, tree)
    return []
Given a `Binder` as `binder`, bake the contents and persist those changes alongside the published content.
def _new_from_xml(cls, xmlnode):
    child = xmlnode.children
    fields = []
    while child:
        if child.type != "element" or child.ns().content != DATAFORM_NS:
            pass
        elif child.name == "field":
            fields.append(Field._new_from_xml(child))
        child = child.next
    return cls(fields)
Create a new `Item` object from an XML element.

:Parameters:
    - `xmlnode`: the XML element.
:Types:
    - `xmlnode`: `libxml2.xmlNode`

:return: the object created.
:returntype: `Item`
def use(cls, name, method: Union[str, Set, List], url=None):
    # Union assumed imported from typing; the original annotation was a bare
    # list literal. Raise TypeError rather than a bare BaseException.
    if not isinstance(method, (str, list, set, tuple)):
        raise TypeError('Invalid type of method: %s' % type(method).__name__)
    if isinstance(method, str):
        method = {method}
    cls._interface[name] = [{'method': method, 'url': url}]
Interface helper: register the given HTTP method(s) and URL under `name`.
def node_has_namespaces(node: BaseEntity, namespaces: Set[str]) -> bool:
    ns = node.get(NAMESPACE)
    return ns is not None and ns in namespaces
Pass for nodes that have one of the given namespaces.
def check_write_permissions(file):
    try:
        # Use a context manager so the probe handle is closed again.
        with open(file, 'a'):
            pass
    except IOError:
        print("Can't open file {}. "
              "Please grant write permissions or change the path in your config".format(file))
        sys.exit(1)
Check if we can write to the given file.

Otherwise, since we might detach the process to run in the background, we
might never find out that writing failed and get an ugly exit message on
startup. For example: ERROR: Child exited immediately with non-zero exit
code 127. So we catch this error upfront and print a nicer error message
with a hint on how to fix it.
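A minimal usage sketch (the log path is hypothetical); the check runs before the process detaches, so a failure is still visible on the console:

>>> check_write_permissions('/var/log/myapp/myapp.log')  # exits with a hint if not writable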
def __prepare_domain(data):
    if not data:
        raise JIDError("Domain must be given")
    data = unicode(data)
    if not data:
        raise JIDError("Domain must be given")
    if u'[' in data:
        if data[0] == u'[' and data[-1] == u']':
            try:
                addr = _validate_ip_address(socket.AF_INET6, data[1:-1])
                return "[{0}]".format(addr)
            except ValueError, err:
                logger.debug("ValueError: {0}".format(err))
                raise JIDError(u"Invalid IPv6 literal in JID domainpart")
        else:
            raise JIDError(u"Invalid use of '[' or ']' in JID domainpart")
    elif data[0].isdigit() and data[-1].isdigit():
        try:
            addr = _validate_ip_address(socket.AF_INET, data)
        except ValueError, err:
            logger.debug("ValueError: {0}".format(err))
    data = UNICODE_DOT_RE.sub(u".", data)
    data = data.rstrip(u".")
    labels = data.split(u".")
    try:
        labels = [idna.nameprep(label) for label in labels]
    except UnicodeError:
        raise JIDError(u"Domain name invalid")
    for label in labels:
        if not STD3_LABEL_RE.match(label):
            raise JIDError(u"Domain name invalid")
        try:
            idna.ToASCII(label)
        except UnicodeError:
            raise JIDError(u"Domain name invalid")
    domain = u".".join(labels)
    if len(domain.encode("utf-8")) > 1023:
        raise JIDError(u"Domain name too long")
    return domain
Prepare the domainpart of the JID.

:Parameters:
    - `data`: Domain part of the JID
:Types:
    - `data`: `unicode`

:raise JIDError: if the domain name is too long.
def SInt(value, width):
    return Operators.ITEBV(width, Bit(value, width - 1) == 1,
                           GetNBits(value, width) - 2**width,
                           GetNBits(value, width))
Convert a bitstring `value` of `width` bits to a signed integer representation.

:param value: The value to convert.
:type value: int or long or BitVec
:param int width: The width of the bitstring to consider
:return: The converted value
:rtype: int or long or BitVec
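For concrete inputs the conversion reduces to ordinary two's-complement reinterpretation; a pure-Python sketch of the same arithmetic (not the symbolic Operators version):

>>> def sint(value, width):
...     value &= (1 << width) - 1
...     return value - (1 << width) if value >> (width - 1) else value
>>> sint(0xFF, 8)
-1
>>> sint(0x7F, 8)
127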
def link_user(self, enterprise_customer, user_email):
    try:
        existing_user = User.objects.get(email=user_email)
        self.get_or_create(enterprise_customer=enterprise_customer,
                           user_id=existing_user.id)
    except User.DoesNotExist:
        PendingEnterpriseCustomerUser.objects.get_or_create(
            enterprise_customer=enterprise_customer,
            user_email=user_email)
Link user email to Enterprise Customer. If :class:`django.contrib.auth.models.User` instance with specified email does not exist, :class:`.PendingEnterpriseCustomerUser` instance is created instead.
def setup_figure(figsize, as_subplot):
    if not as_subplot:
        fig = plt.figure(figsize=figsize)
        return fig
Setup a figure for plotting an image.

Parameters
----------
figsize : (int, int)
    The size of the figure in (rows, columns).
as_subplot : bool
    If the figure is a subplot, the setup_figure function is omitted to
    ensure that each subplot does not create a new figure and so that it
    can be output using the *output_subplot_array* function.
def edit(self, config, etag):
    data = self._json_encode(config)
    headers = self._default_headers()

    if etag is not None:
        headers["If-Match"] = etag

    return self._request(self.name,
                         ok_status=None,
                         data=data,
                         headers=headers,
                         method="PUT")
Update template config for specified template name.

.. __: https://api.go.cd/current/#edit-template-config

Returns:
    Response: :class:`gocd.api.response.Response` object
def basen_to_integer(self, X, cols, base):
    out_cols = X.columns.values.tolist()

    for col in cols:
        col_list = [col0 for col0 in out_cols if str(col0).startswith(str(col))]
        insert_at = out_cols.index(col_list[0])

        if base == 1:
            value_array = np.array([int(col0.split('_')[-1]) for col0 in col_list])
        else:
            len0 = len(col_list)
            value_array = np.array([base ** (len0 - 1 - i) for i in range(len0)])

        X.insert(insert_at, col, np.dot(X[col_list].values, value_array.T))
        X.drop(col_list, axis=1, inplace=True)
        out_cols = X.columns.values.tolist()

    return X
Convert base-n encoded columns to integers.

Parameters
----------
X : DataFrame
    encoded data
cols : list-like
    Column names in the DataFrame to be encoded
base : int
    The base of transform

Returns
-------
numerical : DataFrame
def pr_lmean(self):
    precision = self.precision()
    recall = self.recall()
    if not precision or not recall:
        return 0.0
    elif precision == recall:
        return precision
    return (precision - recall) / (math.log(precision) - math.log(recall))
r"""Return logarithmic mean of precision & recall. The logarithmic mean is: 0 if either precision or recall is 0, the precision if they are equal, otherwise :math:`\frac{precision - recall} {ln(precision) - ln(recall)}` Cf. https://en.wikipedia.org/wiki/Logarithmic_mean Returns ------- float The logarithmic mean of the confusion table's precision & recall Example ------- >>> ct = ConfusionTable(120, 60, 20, 30) >>> ct.pr_lmean() 0.8282429171492667
def plot_cdf(self, graphing_library='matplotlib'):
    graphed = False
    for percentile_csv in self.percentiles_files:
        csv_filename = os.path.basename(percentile_csv)
        column = self.csv_column_map[percentile_csv.replace(".percentiles.", ".")]
        if not self.check_important_sub_metrics(column):
            continue
        column = naarad.utils.sanitize_string(column)
        graph_title = '.'.join(csv_filename.split('.')[0:-1])
        if self.sub_metric_description and column in self.sub_metric_description.keys():
            graph_title += ' (' + self.sub_metric_description[column] + ')'
        if self.sub_metric_unit and column in self.sub_metric_unit.keys():
            plot_data = [PD(input_csv=percentile_csv, csv_column=1,
                            series_name=graph_title, x_label='Percentiles',
                            y_label=column + ' (' + self.sub_metric_unit[column] + ')',
                            precision=None, graph_height=600, graph_width=1200,
                            graph_type='line')]
        else:
            plot_data = [PD(input_csv=percentile_csv, csv_column=1,
                            series_name=graph_title, x_label='Percentiles',
                            y_label=column, precision=None, graph_height=600,
                            graph_width=1200, graph_type='line')]
        graphed, div_file = Metric.graphing_modules[graphing_library].graph_data_on_the_same_graph(
            plot_data, self.resource_directory, self.resource_path, graph_title)
        if graphed:
            self.plot_files.append(div_file)
    return True
Plot the CDF for important sub-metrics.
def Tb(CASRN, AvailableMethods=False, Method=None,
       IgnoreMethods=[PSAT_DEFINITION]):
    def list_methods():
        methods = []
        if CASRN in CRC_inorganic_data.index and not np.isnan(CRC_inorganic_data.at[CASRN, 'Tb']):
            methods.append(CRC_INORG)
        if CASRN in CRC_organic_data.index and not np.isnan(CRC_organic_data.at[CASRN, 'Tb']):
            methods.append(CRC_ORG)
        if CASRN in Yaws_data.index:
            methods.append(YAWS)
        if PSAT_DEFINITION not in IgnoreMethods:
            try:
                VaporPressure(CASRN=CASRN).solve_prop(101325.)
                methods.append(PSAT_DEFINITION)
            except Exception:
                pass
        if IgnoreMethods:
            for Method in IgnoreMethods:
                if Method in methods:
                    methods.remove(Method)
        methods.append(NONE)
        return methods

    if AvailableMethods:
        return list_methods()
    if not Method:
        Method = list_methods()[0]

    if Method == CRC_INORG:
        return float(CRC_inorganic_data.at[CASRN, 'Tb'])
    elif Method == CRC_ORG:
        return float(CRC_organic_data.at[CASRN, 'Tb'])
    elif Method == YAWS:
        return float(Yaws_data.at[CASRN, 'Tb'])
    elif Method == PSAT_DEFINITION:
        return VaporPressure(CASRN=CASRN).solve_prop(101325.)
    elif Method == NONE:
        return None
    else:
        raise Exception('Failure in function')
This function handles the retrieval of a chemical's boiling point. Lookup
is based on CASRNs. Will automatically select a data source to use if no
Method is provided; returns None if the data is not available.

Preferred sources are 'CRC Physical Constants, organic' for organic
chemicals, and 'CRC Physical Constants, inorganic' for inorganic
chemicals. Function has data for approximately 13000 chemicals.

Parameters
----------
CASRN : string
    CASRN [-]

Returns
-------
Tb : float
    Boiling temperature, [K]
methods : list, only returned if AvailableMethods == True
    List of methods which can be used to obtain Tb with the given inputs

Other Parameters
----------------
Method : string, optional
    A string for the method name to use, as defined by constants in
    Tb_methods
AvailableMethods : bool, optional
    If True, function will determine which methods can be used to obtain
    Tb for the desired chemical, and will return methods instead of Tb
IgnoreMethods : list, optional
    A list of methods to ignore in obtaining the full list of methods,
    useful for performance reasons and for ignoring inaccurate methods

Notes
-----
A total of four methods are available for this function. They are:

* 'CRC_ORG', a compilation of data on organics as published in [1]_.
* 'CRC_INORG', a compilation of data on inorganics as published in [1]_.
* 'YAWS', a large compilation of data from a variety of sources; no data
  points are sourced in the work of [2]_.
* 'PSAT_DEFINITION', calculation of boiling point from a vapor pressure
  calculation. This is normally off by a fraction of a degree even in the
  best cases. Listed in IgnoreMethods by default for performance reasons.

Examples
--------
>>> Tb('7732-18-5')
373.124

References
----------
.. [1] Haynes, W.M., Thomas J. Bruno, and David R. Lide. CRC Handbook of
   Chemistry and Physics, 95E. Boca Raton, FL: CRC press, 2014.
.. [2] Yaws, Carl L. Thermophysical Properties of Chemicals and
   Hydrocarbons, Second Edition. Amsterdam Boston: Gulf Professional
   Publishing, 2014.
def pklc_fovcatalog_objectinfo(
        pklcdir,
        fovcatalog,
        fovcatalog_columns=[0, 1, 2,
                            6, 7,
                            8, 9,
                            10, 11,
                            13, 14, 15, 16,
                            17, 18, 19,
                            20, 21],
        fovcatalog_colnames=['objectid', 'ra', 'decl',
                             'jmag', 'jmag_err',
                             'hmag', 'hmag_err',
                             'kmag', 'kmag_err',
                             'bmag', 'vmag', 'rmag', 'imag',
                             'sdssu', 'sdssg', 'sdssr',
                             'sdssi', 'sdssz'],
        fovcatalog_colformats=('U20,f8,f8,'
                               'f8,f8,'
                               'f8,f8,'
                               'f8,f8,'
                               'f8,f8,f8,f8,'
                               'f8,f8,f8,'
                               'f8,f8')
):
    if fovcatalog.endswith('.gz'):
        catfd = gzip.open(fovcatalog)
    else:
        catfd = open(fovcatalog)

    fovcat = np.genfromtxt(catfd,
                           usecols=fovcatalog_columns,
                           names=fovcatalog_colnames,
                           dtype=fovcatalog_colformats)
    catfd.close()

    pklclist = sorted(glob.glob(os.path.join(pklcdir, '*HAT*-pklc.pkl')))
    updatedpklcs, failedpklcs = [], []

    for pklc in pklclist:
        lcdict = read_hatpi_pklc(pklc)
        objectid = lcdict['objectid']
        catind = np.where(fovcat['objectid'] == objectid)

        if len(catind) > 0 and catind[0]:
            lcdict['objectinfo'].update(
                {x: y for x, y in zip(
                    fovcatalog_colnames,
                    [np.asscalar(fovcat[z][catind]) for z in fovcatalog_colnames]
                )}
            )
            with open(pklc + '-tmp', 'wb') as outfd:
                pickle.dump(lcdict, outfd, pickle.HIGHEST_PROTOCOL)
            if os.path.exists(pklc + '-tmp'):
                shutil.move(pklc + '-tmp', pklc)
                LOGINFO('updated %s with catalog info for %s at %.3f, %.3f OK' %
                        (pklc, objectid,
                         lcdict['objectinfo']['ra'],
                         lcdict['objectinfo']['decl']))
                updatedpklcs.append(pklc)
        else:
            failedpklcs.append(pklc)

    return updatedpklcs, failedpklcs
Adds catalog info to the objectinfo key of all pklcs in lcdir.

If fovcatalog, fovcatalog_columns, fovcatalog_colnames are provided, uses
them to find all the additional information listed in the
fovcatalog_colname keys, and writes this info to the objectinfo key of
each lcdict. This makes it easier for astrobase tools to work on these
light curves.

The default setup for fovcatalog is to use a text file generated by the
HATPI pipeline before auto-calibrating a field. The format is specified as
above in _columns, _colnames, and _colformats.
def _pkl_periodogram(lspinfo, plotdpi=100, override_pfmethod=None):
    pgramylabel = PLOTYLABELS[lspinfo['method']]
    periods = lspinfo['periods']
    lspvals = lspinfo['lspvals']
    bestperiod = lspinfo['bestperiod']
    nbestperiods = lspinfo['nbestperiods']
    nbestlspvals = lspinfo['nbestlspvals']

    pgramfig = plt.figure(figsize=(7.5, 4.8), dpi=plotdpi)
    plt.plot(periods, lspvals)
    plt.xscale('log', basex=10)
    plt.xlabel('Period [days]')
    plt.ylabel(pgramylabel)
    plottitle = '%s - %.6f d' % (METHODLABELS[lspinfo['method']], bestperiod)
    plt.title(plottitle)

    # annotate the periodogram peaks
    for xbestperiod, xbestpeak in zip(nbestperiods, nbestlspvals):
        plt.annotate('%.6f' % xbestperiod,
                     xy=(xbestperiod, xbestpeak),
                     xycoords='data',
                     xytext=(0.0, 25.0),
                     textcoords='offset points',
                     arrowprops=dict(arrowstyle="->"),
                     fontsize='14.0')

    # grid color assumed; the original hex value was garbled in extraction
    plt.grid(color='#a9a9a9', alpha=0.9, zorder=0,
             linewidth=1.0, linestyle=':')

    pgrampng = StrIO()
    pgramfig.savefig(pgrampng, pad_inches=0.0, format='png')
    plt.close()

    pgrampng.seek(0)
    pgramb64 = base64.b64encode(pgrampng.read())
    pgrampng.close()

    if not override_pfmethod:
        checkplotdict = {
            lspinfo['method']: {
                'periods': periods,
                'lspvals': lspvals,
                'bestperiod': bestperiod,
                'nbestperiods': nbestperiods,
                'nbestlspvals': nbestlspvals,
                'periodogram': pgramb64,
            }
        }
    else:
        checkplotdict = {
            override_pfmethod: {
                'periods': periods,
                'lspvals': lspvals,
                'bestperiod': bestperiod,
                'nbestperiods': nbestperiods,
                'nbestlspvals': nbestlspvals,
                'periodogram': pgramb64,
            }
        }

    return checkplotdict
This returns the periodogram plot PNG as base64, plus info as a dict.

Parameters
----------
lspinfo : dict
    This is an lspinfo dict containing results from a period-finding
    function. If it's from an astrobase period-finding function in
    periodbase, this will already be in the correct format. To use
    external period-finder results with this function, the `lspinfo` dict
    must be of the following form, with at least the keys listed below::

        {'periods': np.array of all periods searched by the period-finder,
         'lspvals': np.array of periodogram power value for each period,
         'bestperiod': a float value that is the period with the highest
                       peak in the periodogram, i.e. the most-likely
                       actual period,
         'method': a three-letter code naming the period-finder used;
                   must be one of the keys in the
                   `astrobase.periodbase.METHODLABELS` dict,
         'nbestperiods': a list of the periods corresponding to
                         periodogram peaks (`nbestlspvals` below) to
                         annotate on the periodogram plot so they can be
                         called out visually,
         'nbestlspvals': a list of the power values associated with
                         periodogram peaks to annotate on the periodogram
                         plot so they can be called out visually; should
                         be the same length as `nbestperiods` above}

    `nbestperiods` and `nbestlspvals` must have at least 5 elements each,
    e.g. describing the five 'best' (highest power) peaks in the
    periodogram.

plotdpi : int
    The resolution in DPI of the output periodogram plot to make.

override_pfmethod : str or None
    This is used to set a custom label for this periodogram method.
    Normally, this is taken from the 'method' key in the input `lspinfo`
    dict, but if you want to override the output method name, provide
    this as a string here. This can be useful if you have multiple
    results you want to incorporate into a checkplotdict from a single
    period-finder (e.g. if you ran BLS over several period ranges
    separately).

Returns
-------
dict
    Returns a dict that contains the following items::

        {methodname: {'periods': the period array from lspinfo,
                      'lspvals': the periodogram power array from lspinfo,
                      'bestperiod': the best period from lspinfo,
                      'nbestperiods': the 'nbestperiods' list from
                                      lspinfo,
                      'nbestlspvals': the 'nbestlspvals' list from
                                      lspinfo,
                      'periodogram': base64 encoded string representation
                                     of the periodogram plot}}

    The dict is returned in this format so it can be directly
    incorporated under the period-finder's label `methodname` in a
    checkplotdict, using Python's dict `update()` method.
def __groupchat_message(self, stanza):
    fr = stanza.get_from()
    key = fr.bare().as_unicode()
    rs = self.rooms.get(key)
    if not rs:
        self.__logger.debug("groupchat message from unknown source")
        return False
    rs.process_groupchat_message(stanza)
    return True
Process a groupchat message from a MUC room.

:Parameters:
    - `stanza`: the stanza received.
:Types:
    - `stanza`: `Message`

:return: `True` if the message was properly recognized as directed to
    one of the managed rooms, `False` otherwise.
:returntype: `bool`
def is_self(addr):
    ips = []
    for i in netifaces.interfaces():
        entry = netifaces.ifaddresses(i)
        if netifaces.AF_INET in entry:
            for ipv4 in entry[netifaces.AF_INET]:
                if "addr" in ipv4:
                    ips.append(ipv4["addr"])
    return addr in ips or addr == get_self_hostname()
Check if this host is at the given addr.
def addNoise(input, noise=0.1, doForeground=True, doBackground=True):
    if doForeground and doBackground:
        return numpy.abs(input - (numpy.random.random(input.shape) < noise))
    else:
        if doForeground:
            return numpy.logical_and(input, numpy.random.random(input.shape) > noise)
        if doBackground:
            return numpy.logical_or(input, numpy.random.random(input.shape) < noise)
    return input
Add noise to the given input.

Parameters:
-----------------------------------------------
input:         the input to add noise to
noise:         how much noise to add
doForeground:  If true, turn off some of the 1 bits in the input
doBackground:  If true, turn on some of the 0 bits in the input
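A usage sketch on a random binary array (shape and noise level arbitrary); with both flags set, roughly a `noise` fraction of all bits is flipped:

>>> import numpy
>>> dense = (numpy.random.random((32, 32)) > 0.5).astype(int)
>>> noisy = addNoise(dense, noise=0.05)
>>> noisy.shape
(32, 32)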
def load(directory_name, module_name):
    directory_name = os.path.expanduser(directory_name)
    if os.path.isdir(directory_name) and directory_name not in sys.path:
        sys.path.append(directory_name)
    try:
        return importlib.import_module(module_name)
    except ImportError:
        pass
Try to load and return a module.

Will add DIRECTORY_NAME to sys.path and try to import MODULE_NAME.

For example: load("~/.yaz", "yaz_extension")
def merge_ordered(ordereds: typing.Iterable[typing.Any]) -> typing.Iterable[typing.Any]:
    seen_set = set()
    add_seen = seen_set.add
    return reversed(tuple(map(
        lambda obj: add_seen(obj) or obj,
        filterfalse(
            seen_set.__contains__,
            chain.from_iterable(map(reversed, reversed(ordereds))),
        ),
    )))
Merge multiple ordered collections so that the order within each collection is preserved.
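A quick sketch of the merge behavior, assuming list inputs (despite the Iterable annotation, `ordereds` and its elements must be reversible sequences): each element appears once, at the position of its last occurrence across the inputs.

>>> list(merge_ordered([['a', 'b'], ['b', 'c']]))
['a', 'b', 'c']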
def ConsumeIdentifier(self):
    result = self.token
    if not self._IDENTIFIER.match(result):
        raise self._ParseError('Expected identifier.')
    self.NextToken()
    return result
Consumes protocol message field identifier.

Returns:
    Identifier string.

Raises:
    ParseError: If an identifier couldn't be consumed.
def convert_elementwise_sub(params, w_name, scope_name, inputs, layers,
                            weights, names):
    print('Converting elementwise_sub ...')
    model0 = layers[inputs[0]]
    model1 = layers[inputs[1]]

    if names == 'short':
        tf_name = 'S' + random_string(7)
    elif names == 'keep':
        tf_name = w_name
    else:
        tf_name = w_name + str(random.random())

    sub = keras.layers.Subtract(name=tf_name)
    layers[scope_name] = sub([model0, model1])
Convert elementwise subtraction.

Args:
    params: dictionary with layer parameters
    w_name: name prefix in state_dict
    scope_name: pytorch scope name
    inputs: pytorch node inputs
    layers: dictionary with keras tensors
    weights: pytorch state_dict
    names: use short names for keras layers
def flare_model(flareparams, times, mags, errs):
    (amplitude, flare_peak_time,
     rise_gaussian_stdev, decay_time_constant) = flareparams

    zerolevel = np.median(mags)
    modelmags = np.full_like(times, zerolevel)

    # before the peak, the flare rises like a gaussian
    modelmags[times < flare_peak_time] = (
        mags[times < flare_peak_time] +
        amplitude * np.exp(
            -((times[times < flare_peak_time] - flare_peak_time) *
              (times[times < flare_peak_time] - flare_peak_time)) /
            (2.0 * rise_gaussian_stdev * rise_gaussian_stdev)
        )
    )

    # after the peak, the flare decays exponentially
    modelmags[times > flare_peak_time] = (
        mags[times > flare_peak_time] +
        amplitude * np.exp(
            -((times[times > flare_peak_time] - flare_peak_time)) /
            (decay_time_constant)
        )
    )

    return modelmags, times, mags, errs
This is a flare model function, similar to Kowalski+ 2011.

From the paper by Pitkin+ 2014:
http://adsabs.harvard.edu/abs/2014MNRAS.445.2268P

Parameters
----------
flareparams : list of float
    This defines the flare model::

        [amplitude, flare_peak_time,
         rise_gaussian_stdev, decay_time_constant]

    where:

    `amplitude`: the maximum flare amplitude in mags or flux. If flux,
    then amplitude should be positive. If mags, amplitude should be
    negative.

    `flare_peak_time`: time at which the flare maximum happens.

    `rise_gaussian_stdev`: the stdev of the gaussian describing the rise
    of the flare.

    `decay_time_constant`: the time constant of the exponential fall of
    the flare.

times,mags,errs : np.array
    The input time-series of measurements and associated errors for
    which the model will be generated. The times will be used to
    generate model mags.

Returns
-------
(modelmags, times, mags, errs) : tuple
    Returns the model mags evaluated at the input time values. Also
    returns the input `times`, `mags`, and `errs`.
def _coeff4(N, a0, a1, a2, a3):
    if N == 1:
        return ones(1)
    n = arange(0, N)
    N1 = N - 1.
    w = (a0 - a1*cos(2.*pi*n / N1) + a2*cos(4.*pi*n / N1) -
         a3*cos(6.*pi*n / N1))
    return w
A common internal function for window functions with 4 coefficients.

For the Blackman-Harris window, for instance, the results are identical
to Octave's if N is odd, but not for even values... whatever N is, w(0)
at n=0 must equal a0-a1+a2-a3, which is the case here, but not in Octave.
def update(self):
    if self.delay > 0:
        self.delay -= 1
        return
    if self.fi == 0:
        if len(self.q) == 1:
            self.fn = float("inf")
        else:
            self.fn = len(self.q[self.i]) / self.speed
            self.fn = max(self.fn, self.mf)
    self.fi += 1
    if self.fi > self.fn:
        self.fi = 0
        self.i = (self.i + 1) % len(self.q)
Rotates the queued texts and determines display time.
def _get_zoom(zoom, input_raster, pyramid_type):
    if not zoom:
        minzoom = 1
        maxzoom = get_best_zoom_level(input_raster, pyramid_type)
    elif len(zoom) == 1:
        minzoom = zoom[0]
        maxzoom = zoom[0]
    elif len(zoom) == 2:
        if zoom[0] < zoom[1]:
            minzoom = zoom[0]
            maxzoom = zoom[1]
        else:
            minzoom = zoom[1]
            maxzoom = zoom[0]
    return minzoom, maxzoom
Determine minimum and maximum zoom level.
def calculate(self, T, method):
    if method == CRC_INORG_S:
        Vms = self.CRC_INORG_S_Vm
    elif method in self.tabular_data:
        Vms = self.interpolate(T, method)
    return Vms
Method to calculate the molar volume of a solid at temperature `T` with a
given method. This method has no exception handling; see
`T_dependent_property` for that.

Parameters
----------
T : float
    Temperature at which to calculate molar volume, [K]
method : str
    Name of the method to use

Returns
-------
Vms : float
    Molar volume of the solid at T, [m^3/mol]
def get_command(self, ctx, name):
    if name in misc.__all__:
        return getattr(misc, name)
    try:
        resource = tower_cli.get_resource(name)
        return ResSubcommand(resource)
    except ImportError:
        pass
    secho('No such command: %s.' % name, fg='red', bold=True)
    sys.exit(2)
Given a command identified by its name, import the appropriate module and return the decorated command. Resources are automatically commands, but if both a resource and a command are defined, the command takes precedence.
def extract_execution_state(self, topology):
    execution_state = topology.execution_state
    executionState = {
        "cluster": execution_state.cluster,
        "environ": execution_state.environ,
        "role": execution_state.role,
        "jobname": topology.name,
        "submission_time": execution_state.submission_time,
        "submission_user": execution_state.submission_user,
        "release_username": execution_state.release_state.release_username,
        "release_tag": execution_state.release_state.release_tag,
        "release_version": execution_state.release_state.release_version,
        "has_physical_plan": None,
        "has_tmaster_location": None,
        "has_scheduler_location": None,
        "extra_links": [],
    }
    for extra_link in self.config.extra_links:
        link = extra_link.copy()
        link["url"] = self.config.get_formatted_url(
            executionState, link[EXTRA_LINK_FORMATTER_KEY])
        executionState["extra_links"].append(link)
    return executionState
Returns the representation of execution state that will be returned from Tracker.
def run(self, x, y, lr=0.01, train_epochs=1000, test_epochs=1000,
        idx=0, verbose=None, **kwargs):
    verbose = SETTINGS.get_default(verbose=verbose)
    optim = th.optim.Adam(self.parameters(), lr=lr)
    running_loss = 0
    teloss = 0

    for i in range(train_epochs + test_epochs):
        optim.zero_grad()
        pred = self.forward(x)
        loss = self.criterion(pred, y)
        running_loss += loss.item()

        if i < train_epochs:
            loss.backward()
            optim.step()
        else:
            teloss += running_loss

        if verbose and not i % 300:
            print('Idx:{}; epoch:{}; score:{}'.
                  format(idx, i, running_loss / 300))
            running_loss = 0.0

    return teloss / test_epochs
Run the GNN on a pair x,y of FloatTensor data.
def _byteify(data, ignore_dicts=False):
    if isinstance(data, unicode):
        return data.encode("utf-8")
    if isinstance(data, list):
        return [_byteify(item, ignore_dicts=True) for item in data]
    if isinstance(data, dict) and not ignore_dicts:
        return {
            _byteify(key, ignore_dicts=True): _byteify(value, ignore_dicts=True)
            for key, value in data.iteritems()
        }
    return data
Converts unicode objects to UTF-8 byte strings when reading in JSON files (Python 2).
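The usual pairing (in Python 2) is to pass this as a json object_hook and then byteify the top level with ignore_dicts=True, so already-converted dicts are not walked twice; a sketch:

>>> import json
>>> obj = json.loads('{"key": "value"}', object_hook=_byteify)
>>> _byteify(obj, ignore_dicts=True)
{'key': 'value'}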
def contains_sequence(self, *items):
    if len(items) == 0:
        raise ValueError('one or more args must be given')
    else:
        try:
            for i in xrange(len(self.val) - len(items) + 1):
                for j in xrange(len(items)):
                    if self.val[i + j] != items[j]:
                        break
                else:
                    return self
        except TypeError:
            raise TypeError('val is not iterable')
    self._err('Expected <%s> to contain sequence %s, but did not.' %
              (self.val, self._fmt_items(items)))
Asserts that val contains the given sequence of items in order.
def is_dicom_file(filepath):
    if not os.path.exists(filepath):
        raise IOError('File {} not found.'.format(filepath))

    filename = os.path.basename(filepath)
    if filename == 'DICOMDIR':
        return False

    try:
        _ = dicom.read_file(filepath)
    except Exception as exc:
        log.debug('Checking if {0} was a DICOM, but returned '
                  'False.'.format(filepath))
        return False

    return True
Tries to read the file using dicom.read_file; if the file exists and
dicom.read_file does not raise an Exception, returns True. False otherwise.

:param filepath: str
    Path to DICOM file

:return: bool
def CALLDATALOAD(self, offset):
    if issymbolic(offset):
        if solver.can_be_true(self._constraints, offset == self._used_calldata_size):
            self.constraints.add(offset == self._used_calldata_size)
        raise ConcretizeArgument(1, policy='SAMPLED')

    self._use_calldata(offset, 32)

    data_length = len(self.data)
    bytes = []
    for i in range(32):
        try:
            c = Operators.ITEBV(8, offset + i < data_length, self.data[offset + i], 0)
        except IndexError:
            c = 0
        bytes.append(c)
    return Operators.CONCAT(256, *bytes)
Get input data of current environment
def _assure_dir(self):
    try:
        os.makedirs(self._state_dir)
    except OSError as err:
        if err.errno != errno.EEXIST:
            raise
Make sure the state directory exists
def _addRoute(self, f, matcher):
    self._routes.append((f.func_name, f, matcher))
Add a route handler and matcher to the collection of possible routes.
def plot_drawdown_periods(returns, top=10, ax=None, **kwargs):
    if ax is None:
        ax = plt.gca()

    y_axis_formatter = FuncFormatter(utils.two_dec_places)
    ax.yaxis.set_major_formatter(FuncFormatter(y_axis_formatter))

    df_cum_rets = ep.cum_returns(returns, starting_value=1.0)
    df_drawdowns = timeseries.gen_drawdown_table(returns, top=top)

    df_cum_rets.plot(ax=ax, **kwargs)

    lim = ax.get_ylim()
    colors = sns.cubehelix_palette(len(df_drawdowns))[::-1]
    for i, (peak, recovery) in df_drawdowns[
            ['Peak date', 'Recovery date']].iterrows():
        if pd.isnull(recovery):
            recovery = returns.index[-1]
        ax.fill_between((peak, recovery),
                        lim[0],
                        lim[1],
                        alpha=.4,
                        color=colors[i])
    ax.set_ylim(lim)
    ax.set_title('Top %i drawdown periods' % top)
    ax.set_ylabel('Cumulative returns')
    ax.legend(['Portfolio'], loc='upper left',
              frameon=True, framealpha=0.5)
    ax.set_xlabel('')
    return ax
Plots cumulative returns highlighting top drawdown periods.

Parameters
----------
returns : pd.Series
    Daily returns of the strategy, noncumulative.
    - See full explanation in tears.create_full_tear_sheet.
top : int, optional
    Amount of top drawdowns periods to plot (default 10).
ax : matplotlib.Axes, optional
    Axes upon which to plot.
**kwargs, optional
    Passed to plotting function.

Returns
-------
ax : matplotlib.Axes
    The axes that were plotted on.
def calculate(self, T, P, zs, ws, method):
    if method == SIMPLE:
        sigmas = [i(T) for i in self.SurfaceTensions]
        return mixing_simple(zs, sigmas)
    elif method == DIGUILIOTEJA:
        return Diguilio_Teja(T=T, xs=zs, sigmas_Tb=self.sigmas_Tb,
                             Tbs=self.Tbs, Tcs=self.Tcs)
    elif method == WINTERFELDSCRIVENDAVIS:
        sigmas = [i(T) for i in self.SurfaceTensions]
        rhoms = [1./i(T, P) for i in self.VolumeLiquids]
        return Winterfeld_Scriven_Davis(zs, sigmas, rhoms)
    else:
        raise Exception('Method not valid')
Method to calculate surface tension of a liquid mixture at temperature
`T`, pressure `P`, mole fractions `zs` and weight fractions `ws` with a
given method. This method has no exception handling; see
`mixture_property` for that.

Parameters
----------
T : float
    Temperature at which to calculate the property, [K]
P : float
    Pressure at which to calculate the property, [Pa]
zs : list[float]
    Mole fractions of all species in the mixture, [-]
ws : list[float]
    Weight fractions of all species in the mixture, [-]
method : str
    Name of the method to use

Returns
-------
sigma : float
    Surface tension of the liquid at given conditions, [N/m]
def mmGetMetricSequencesPredictedActiveCellsShared(self):
    self._mmComputeTransitionTraces()

    numSequencesForCell = defaultdict(lambda: 0)
    for predictedActiveCells in (
            self._mmData["predictedActiveCellsForSequence"].values()):
        for cell in predictedActiveCells:
            numSequencesForCell[cell] += 1

    # metric title reconstructed from the docstring; the original string
    # literal was garbled in extraction
    return Metric(self,
                  "# sequences each predicted => active cell appears in",
                  numSequencesForCell.values())
Metric for number of sequences each predicted => active cell appears in Note: This metric is flawed when it comes to high-order sequences. @return (Metric) metric
def template_apiserver_hcl(cl_args, masters, zookeepers):
    single_master = masters[0]
    apiserver_config_template = "%s/standalone/templates/apiserver.template.hcl" \
                                % cl_args["config_path"]
    apiserver_config_actual = "%s/standalone/resources/apiserver.hcl" \
                              % cl_args["config_path"]

    replacements = {
        "<heron_apiserver_hostname>": '"%s"' % get_hostname(single_master, cl_args),
        "<heron_apiserver_executable>":
            '"%s/heron-apiserver"' % config.get_heron_bin_dir()
            if is_self(single_master)
            else '"%s/.heron/bin/heron-apiserver"' % get_remote_home(single_master, cl_args),
        "<zookeeper_host:zookeeper_port>": ",".join(
            ['%s' % zk if ":" in zk else '%s:2181' % zk for zk in zookeepers]),
        "<scheduler_uri>": "http://%s:4646" % single_master
    }

    template_file(apiserver_config_template, apiserver_config_actual, replacements)
template apiserver.hcl
def multiselect(self, window_name, object_name, row_text_list, partial_match=False):
    object_handle = self._get_object_handle(window_name, object_name)
    if not object_handle.AXEnabled:
        raise LdtpServerException(u"Object %s state disabled" % object_name)

    object_handle.activate()
    selected = False
    try:
        window = self._get_front_most_window()
    except (IndexError,):
        window = self._get_any_window()

    for row_text in row_text_list:
        selected = False
        for cell in object_handle.AXRows:
            parent_cell = cell
            cell = self._getfirstmatchingchild(cell, "(AXTextField|AXStaticText)")
            if not cell:
                continue
            if re.match(row_text, cell.AXValue):
                selected = True
                if not parent_cell.AXSelected:
                    x, y, width, height = self._getobjectsize(parent_cell)
                    # command-click to add the row to the selection
                    window.clickMouseButtonLeftWithMods(
                        (x + width / 2, y + height / 2), ['<command_l>'])
                    self.wait(0.5)
                else:
                    # row already selected
                    pass
                break
        if not selected:
            raise LdtpServerException(u"Unable to select row: %s" % row_text)
    if not selected:
        raise LdtpServerException(u"Unable to select any row")
    return 1
Select multiple rows.

@param window_name: Window name to type in, either full name,
    LDTP's name convention, or a Unix glob.
@type window_name: string
@param object_name: Object name to type in, either full name,
    LDTP's name convention, or a Unix glob.
@type object_name: string
@param row_text_list: Row list with matching text to select
@type row_text_list: list

@return: 1 on success.
@rtype: integer
def HHV(self, HHV):
    self._HHV = HHV
    if self.isCoal:
        self._DH298 = self._calculate_DH298_coal()
Set the higher heating value of the stream to the specified value, and recalculate the formation enthalpy of the daf coal. :param HHV: MJ/kg coal, higher heating value
def get_variables_with_name(name=None, train_only=True, verbose=False):
    if name is None:
        raise Exception("please input a name")

    logging.info("  [*] getting variables with %s" % name)

    if train_only:
        t_vars = tf.trainable_variables()
    else:
        t_vars = tf.global_variables()

    d_vars = [var for var in t_vars if name in var.name]

    if verbose:
        for idx, v in enumerate(d_vars):
            logging.info("  got {:3}: {:15}   {}".format(idx, v.name, str(v.get_shape())))

    return d_vars
Get a list of TensorFlow variables by a given name scope.

Parameters
----------
name : str
    Get the variables that contain this name.
train_only : boolean
    If True, only get the trainable variables.
verbose : boolean
    If True, print the information of all variables.

Returns
-------
list of Tensor
    A list of TensorFlow variables

Examples
--------
>>> import tensorlayer as tl
>>> dense_vars = tl.layers.get_variables_with_name('dense', True, True)
def SLOAD(self, offset):
    storage_address = self.address
    self._publish('will_evm_read_storage', storage_address, offset)
    value = self.world.get_storage_data(storage_address, offset)
    self._publish('did_evm_read_storage', storage_address, offset, value)
    return value
Load word from storage
def add_item(self, item, replace=False):
    if item.jid in self._jids:
        if replace:
            self.remove_item(item.jid)
        else:
            raise ValueError("JID already in the roster")
    index = len(self._items)
    self._items.append(item)
    self._jids[item.jid] = index
Add an item to the roster.

This will not automatically update the roster on the server.

:Parameters:
    - `item`: the item to add
    - `replace`: if `True` then existing item will be replaced,
      otherwise a `ValueError` will be raised on conflict
:Types:
    - `item`: `RosterItem`
    - `replace`: `bool`
def setup_editor(self, editor):
    editor.cursorPositionChanged.connect(self.on_cursor_pos_changed)
    try:
        m = editor.modes.get(modes.GoToAssignmentsMode)
    except KeyError:
        pass
    else:
        assert isinstance(m, modes.GoToAssignmentsMode)
        m.out_of_doc.connect(self.on_goto_out_of_doc)
Setup the python editor, run the server and connect a few signals.

:param editor: editor to setup.
def init(self):
    self.es.indices.create(index=self.params['index'], ignore=400)
Create an Elasticsearch index if necessary
def generate_thumbnail(source, outname, box, fit=True, options=None,
                       thumb_fit_centering=(0.5, 0.5)):
    logger = logging.getLogger(__name__)
    img = _read_image(source)
    original_format = img.format

    if fit:
        img = ImageOps.fit(img, box, PILImage.ANTIALIAS,
                           centering=thumb_fit_centering)
    else:
        img.thumbnail(box, PILImage.ANTIALIAS)

    outformat = img.format or original_format or 'JPEG'
    logger.debug('Save thumbnail image: %s (%s)', outname, outformat)
    save_image(img, outname, outformat, options=options, autoconvert=True)
Create a thumbnail image.
def _cacheSequenceInfoType(self):
    hasReset = self.resetFieldName is not None
    hasSequenceId = self.sequenceIdFieldName is not None

    if hasReset and not hasSequenceId:
        self._sequenceInfoType = self.SEQUENCEINFO_RESET_ONLY
        self._prevSequenceId = 0
    elif not hasReset and hasSequenceId:
        self._sequenceInfoType = self.SEQUENCEINFO_SEQUENCEID_ONLY
        self._prevSequenceId = None
    elif hasReset and hasSequenceId:
        self._sequenceInfoType = self.SEQUENCEINFO_BOTH
    else:
        self._sequenceInfoType = self.SEQUENCEINFO_NONE
Figure out whether reset, sequenceId, both or neither are present in the data. Compute once instead of every time. Taken from filesource.py
def drawCircle(self, x0, y0, r, color=None):
    md.draw_circle(self.set, x0, y0, r, color)
Draw a circle in an RGB color, with center x0, y0 and radius r.
def mmap(self, addr, size, perms, data_init=None, name=None):
    assert addr is None or isinstance(addr, int), 'Address shall be concrete'

    self.cpu._publish('will_map_memory', addr, size, perms, None, None)

    if addr is not None:
        assert addr < self.memory_size, 'Address too big'
        addr = self._floor(addr)
    size = self._ceil(size)

    addr = self._search(size, addr)
    for i in range(self._page(addr), self._page(addr + size)):
        assert i not in self._page2map, 'Map already used'

    m = AnonMap(start=addr, size=size, perms=perms,
                data_init=data_init, name=name)
    self._add(m)
    logger.debug('New memory map @%x size:%x', addr, size)

    self.cpu._publish('did_map_memory', addr, size, perms, None, None, addr)
    return addr
Creates a new mapping in the memory address space.

:param addr: the starting address (took as hint). If C{addr} is C{0} the
    first big enough chunk of memory will be selected as starting address.
:param size: the length of the mapping.
:param perms: the access permissions to this memory.
:param data_init: optional data to initialize this memory.
:param name: optional name to give to this mapping
:return: the starting address where the memory was mapped.
:raises error:
    - 'Address shall be concrete' if C{addr} is not an integer number.
    - 'Address too big' if C{addr} goes beyond the limit of the memory.
    - 'Map already used' if the piece of memory starting in C{addr} and
      with length C{size} isn't free.
:rtype: int
def task_ids(self):
    if not self.id:
        raise WorkflowError('Workflow is not running. Cannot get task IDs.')
    if self.batch_values:
        raise NotImplementedError(
            "Query Each Workflow Id within the Batch Workflow for task IDs.")
    wf = self.workflow.get(self.id)
    return [task['id'] for task in wf['tasks']]
Get the task IDs of a running workflow.

Args:
    None

Returns:
    List of task IDs
def slice_clip(filename, start, stop, n_samples, sr, mono=True):
    with psf.SoundFile(str(filename), mode='r') as soundf:
        n_target = stop - start

        soundf.seek(start)
        y = soundf.read(n_target).T

        if mono:
            y = librosa.to_mono(y)

        y = librosa.resample(y, soundf.samplerate, sr)
        y = librosa.util.fix_length(y, n_samples)

    return y
Slice a fragment of audio from a file.

This uses pysoundfile to efficiently seek without loading the entire
stream.

Parameters
----------
filename : str
    Path to the input file
start : int
    The sample index of `filename` at which the audio fragment should
    start
stop : int
    The sample index of `filename` at which the audio fragment should
    stop (e.g. y = audio[start:stop])
n_samples : int > 0
    The number of samples to load
sr : int > 0
    The target sampling rate
mono : bool
    Ensure monophonic audio

Returns
-------
y : np.ndarray [shape=(n_samples,)]
    A fragment of audio sampled from `filename`

Raises
------
ValueError
    If the source file is shorter than the requested length
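A usage sketch (the filename is hypothetical): take one second of a 44.1 kHz file starting one second in, resampled to 22050 Hz.

>>> y = slice_clip('clip.wav', start=44100, stop=88200,
...                n_samples=22050, sr=22050)
>>> y.shape
(22050,)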
def runModelGivenBaseAndParams(modelID, jobID, baseDescription, params,
                               predictedField, reportKeys, optimizeKey,
                               jobsDAO, modelCheckpointGUID, logLevel=None,
                               predictionCacheMaxRecords=None):
    from nupic.swarming.ModelRunner import OPFModelRunner

    logger = logging.getLogger('com.numenta.nupic.hypersearch.utils')

    # Create a temporary directory for the experiment files
    experimentDir = tempfile.mkdtemp()
    try:
        logger.info("Using experiment directory: %s" % (experimentDir))

        # Generate description.py from the given params
        paramsFilePath = os.path.join(experimentDir, 'description.py')
        paramsFile = open(paramsFilePath, 'wb')
        paramsFile.write(_paramsFileHead())

        items = params.items()
        items.sort()
        for (key, value) in items:
            quotedKey = _quoteAndEscape(key)
            if isinstance(value, basestring):
                paramsFile.write("  %s : '%s',\n" % (quotedKey, value))
            else:
                paramsFile.write("  %s : %s,\n" % (quotedKey, value))
        paramsFile.write(_paramsFileTail())
        paramsFile.close()

        # Write out base.py from the base description
        baseParamsFile = open(os.path.join(experimentDir, 'base.py'), 'wb')
        baseParamsFile.write(baseDescription)
        baseParamsFile.close()

        # Store the generated description in the models table
        fd = open(paramsFilePath)
        expDescription = fd.read()
        fd.close()
        jobsDAO.modelSetFields(modelID, {'genDescription': expDescription})

        # Run the experiment
        try:
            runner = OPFModelRunner(
                modelID=modelID,
                jobID=jobID,
                predictedField=predictedField,
                experimentDir=experimentDir,
                reportKeyPatterns=reportKeys,
                optimizeKeyPattern=optimizeKey,
                jobsDAO=jobsDAO,
                modelCheckpointGUID=modelCheckpointGUID,
                logLevel=logLevel,
                predictionCacheMaxRecords=predictionCacheMaxRecords)

            signal.signal(signal.SIGINT, runner.handleWarningSignal)

            (completionReason, completionMsg) = runner.run()

        except InvalidConnectionException:
            raise
        except Exception, e:
            (completionReason, completionMsg) = _handleModelRunnerException(
                jobID, modelID, jobsDAO, experimentDir, logger, e)

    finally:
        # Clean up the temporary directory and signal handler
        shutil.rmtree(experimentDir)
        signal.signal(signal.SIGINT, signal.default_int_handler)

    return (completionReason, completionMsg)
This creates an experiment directory with a base.py description file
created from 'baseDescription' and a description.py generated from the
given params dict and then runs the experiment.

Parameters:
-------------------------------------------------------------------------
modelID:             ID for this model in the models table
jobID:               ID for this hypersearch job in the jobs table
baseDescription:     Contents of a description.py with the base experiment
                     description
params:              Dictionary of specific parameters to override within
                     the baseDescriptionFile.
predictedField:      Name of the input field for which this model is being
                     optimized
reportKeys:          Which metrics of the experiment to store into the
                     results dict of the model's database entry
optimizeKey:         Which metric we are optimizing for
jobsDAO:             Jobs data access object - the interface to the jobs
                     database which has the model's table.
modelCheckpointGUID: A persistent, globally-unique identifier for
                     constructing the model checkpoint key
logLevel:            override logging level to this value, if not None

retval:              (completionReason, completionMsg)
def update_desc_rcin_path(desc, sibs_len, pdesc_level):
    psibs_len = len(pdesc_level)
    parent_breadth = desc['parent_breadth_path'][-1]
    if desc['sib_seq'] == (sibs_len - 1):
        if parent_breadth == (psibs_len - 1):
            pass
        else:
            parent_rsib_breadth = parent_breadth + 1
            prsib_desc = pdesc_level[parent_rsib_breadth]
            if prsib_desc['leaf']:
                pass
            else:
                rcin_path = copy.deepcopy(prsib_desc['path'])
                rcin_path.append(0)
                desc['rcin_path'] = rcin_path
    else:
        pass
    return desc
Update the right-cousin path of a descriptor. Right cousin / next cousin
(rightCin / nextCin, rcin / ncin): nodes whose parents are neighbors,
with the cousin's parent on the right.
def Integer(name, base=10, encoding=None):
    def _match(request, value):
        return name, query.Integer(
            value,
            base=base,
            encoding=contentEncoding(request.requestHeaders, encoding))
    return _match
Match an integer route parameter.

:type name: `bytes`
:param name: Route parameter name.

:type base: `int`
:param base: Base to interpret the value in.

:type encoding: `bytes`
:param encoding: Default encoding to assume if the ``Content-Type``
    header is lacking one.

:return: ``callable`` suitable for use with `route` or `subroute`.
def isMine(self, scriptname):
    suffix = os.path.splitext(scriptname)[1].lower()
    if suffix.startswith('.'):
        suffix = suffix[1:]
    return self.suffix == suffix
Primitive queuing system detection; only looks at suffix at the moment.
def _find_zero(cpu, constrs, ptr):
    offset = 0
    while True:
        byt = cpu.read_int(ptr + offset, 8)

        if issymbolic(byt):
            if not solver.can_be_true(constrs, byt != 0):
                break
        else:
            if byt == 0:
                break

        offset += 1

    return offset
Helper for finding the closest NULL or, effectively, NULL byte from a
starting address.

:param Cpu cpu:
:param ConstraintSet constrs: Constraints for current `State`
:param int ptr: Address to start searching for a zero from
:return: Offset from `ptr` to first byte that is 0 or an `Expression`
    that must be zero
def camel2word(string):
    def wordize(match):
        return ' ' + match.group(1).lower()
    return string[0] + re.sub(r'([A-Z])', wordize, string[1:])
Convert name from CamelCase to "Normal case".

>>> camel2word('CamelCase')
'Camel case'
>>> camel2word('CaseWithSpec')
'Case with spec'
def python_value(self, value):
    value = super(OrderedUUIDField, self).python_value(value)
    u = binascii.b2a_hex(value)
    value = u[8:16] + u[4:8] + u[0:4] + u[16:22] + u[22:32]
    return UUID(value.decode())
Convert binary blob to UUID instance
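A worked example of the byte shuffle, assuming the stored blob uses a time-high-first layout (the hex value here is hypothetical): the slices move the UUID timestamp fields back to their canonical positions.

>>> from uuid import UUID
>>> u = b'11d8eebc58e0a7d796690800200c9a66'  # hex of the stored 16-byte blob
>>> (u[8:16] + u[4:8] + u[0:4] + u[16:22] + u[22:32]).decode()
'58e0a7d7eebc11d896690800200c9a66'
>>> UUID('58e0a7d7eebc11d896690800200c9a66')
UUID('58e0a7d7-eebc-11d8-9669-0800200c9a66')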
def aggregate(self, clazz, new_col, *args):
    if is_callable(clazz) and not is_none(new_col) and has_elements(*args):
        return self.__do_aggregate(clazz, new_col, *args)
Aggregate the rows of the DataFrame into a single value.

:param clazz: name of a class that extends class Callable
:type clazz: class
:param new_col: name of the new column
:type new_col: str
:param args: list of column names of the object that function
    should be applied to
:type args: tuple

:return: returns a new dataframe object with the aggregated value
:rtype: DataFrame
def row(self, idx):
    return DataFrameRow(idx, [x[idx] for x in self], self.colnames)
Returns DataFrameRow of the DataFrame given its index.

:param idx: the index of the row in the DataFrame.
:return: returns a DataFrameRow
def set_permissions(self):
    r = self.local_renderer
    for path in r.env.paths_owned:
        r.env.path_owned = path
        r.sudo('chown {celery_daemon_user}:{celery_daemon_user} {celery_path_owned}')
Sets ownership and permissions for Celery-related files.
def bresenham_line(setter, x0, y0, x1, y1, color=None, colorFunc=None):
    steep = abs(y1 - y0) > abs(x1 - x0)
    if steep:
        x0, y0 = y0, x0
        x1, y1 = y1, x1

    if x0 > x1:
        x0, x1 = x1, x0
        y0, y1 = y1, y0

    dx = x1 - x0
    dy = abs(y1 - y0)
    err = dx / 2

    if y0 < y1:
        ystep = 1
    else:
        ystep = -1

    count = 0
    for x in range(x0, x1 + 1):
        if colorFunc:
            color = colorFunc(count)
            count += 1

        if steep:
            setter(y0, x, color)
        else:
            setter(x, y0, color)

        err -= dy
        if err < 0:
            y0 += ystep
            err += dx
Draw a line from point x0,y0 to x1,y1. Will draw beyond matrix bounds.
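Because drawing goes through the setter callback, a plain list collector is enough to inspect the rasterized points; a small sketch:

>>> points = []
>>> bresenham_line(lambda x, y, color: points.append((x, y)), 0, 0, 3, 2)
>>> points
[(0, 0), (1, 1), (2, 1), (3, 2)]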
def sim_minkowski(src, tar, qval=2, pval=1, alphabet=None):
    return Minkowski().sim(src, tar, qval, pval, alphabet)
Return normalized Minkowski similarity of two strings.

This is a wrapper for :py:meth:`Minkowski.sim`.

Parameters
----------
src : str
    Source string (or QGrams/Counter objects) for comparison
tar : str
    Target string (or QGrams/Counter objects) for comparison
qval : int
    The length of each q-gram; 0 for non-q-gram version
pval : int or float
    The :math:`p`-value of the :math:`L^p`-space
alphabet : collection or int
    The values or size of the alphabet

Returns
-------
float
    The normalized Minkowski similarity

Examples
--------
>>> sim_minkowski('cat', 'hat')
0.5
>>> round(sim_minkowski('Niall', 'Neil'), 12)
0.363636363636
>>> round(sim_minkowski('Colin', 'Cuilen'), 12)
0.307692307692
>>> sim_minkowski('ATCG', 'TAGC')
0.0
def dump(self, itemkey, filename=None, path=None):
    if not filename:
        filename = self.item(itemkey)["data"]["filename"]
    if path:
        pth = os.path.join(path, filename)
    else:
        pth = filename
    file = self.file(itemkey)
    if self.snapshot:
        self.snapshot = False
        pth = pth + ".zip"
    with open(pth, "wb") as f:
        f.write(file)
Dump a file attachment to disk, with optional filename and path
def split_storage(path, default='osfstorage'):
    path = norm_remote_path(path)

    for provider in KNOWN_PROVIDERS:
        if path.startswith(provider + '/'):
            if six.PY3:
                return path.split('/', maxsplit=1)
            else:
                return path.split('/', 1)

    return (default, path)
Extract storage name from file path. If a path begins with a known storage provider the name is removed from the path. Otherwise the `default` storage provider is returned and the path is not modified.
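A quick sketch of both branches (assuming 'osfstorage' is in KNOWN_PROVIDERS and norm_remote_path leaves these inputs unchanged); note that, as written, the provider branch returns a list while the fallback returns a tuple:

>>> split_storage('osfstorage/data/results.csv')
['osfstorage', 'data/results.csv']
>>> split_storage('data/results.csv')
('osfstorage', 'data/results.csv')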
def setup_a_alpha_and_derivatives(self, i, T=None):
    self.a, self.Tc, self.S1, self.S2 = (self.ais[i], self.Tcs[i],
                                         self.S1s[i], self.S2s[i])
Sets `a`, `S1`, `S2` and `Tc` for a specific component before the
pure-species EOS's `a_alpha_and_derivatives` method is called. Both are
called by `GCEOSMIX.a_alpha_and_derivatives` for every component.
def application_exists(self):
    response = self.ebs.describe_applications(application_names=[self.app_name])
    return len(response['DescribeApplicationsResponse']
               ['DescribeApplicationsResult']['Applications']) > 0
Returns whether or not the given app_name exists
def discard(self, pid=None):
    pid = pid or self.pid

    with db.session.begin_nested():
        before_record_update.send(
            current_app._get_current_object(), record=self)

        _, record = self.fetch_published()
        self.model.json = deepcopy(record.model.json)
        self.model.json['$schema'] = self.build_deposit_schema(record)

        flag_modified(self.model, 'json')
        db.session.merge(self.model)

    after_record_update.send(
        current_app._get_current_object(), record=self)
    return self.__class__(self.model.json, model=self.model)
Discard deposit changes.

#. The signal :data:`invenio_records.signals.before_record_update` is
   sent before the edit execution.

#. It restores the last published version.

#. The following meta information are saved inside the deposit:

   .. code-block:: python

       deposit['$schema'] = deposit_schema_from_record_schema

#. The signal :data:`invenio_records.signals.after_record_update` is
   sent after the edit execution.

#. The deposit index is updated.

Status required: ``'draft'``.

:param pid: Force a pid object. (Default: ``None``)
:returns: A new Deposit object.
def is_img(obj):
    try:
        get_data = getattr(obj, 'get_data')
        get_affine = getattr(obj, 'get_affine')

        return isinstance(get_data, collections.Callable) and \
            isinstance(get_affine, collections.Callable)
    except AttributeError:
        return False
Check for get_data and get_affine methods in an object.

Parameters
----------
obj : any object
    Tested object

Returns
-------
is_img : boolean
    True if get_data and get_affine methods are present and callable,
    False otherwise.
def create_helpingmaterial(project_id, info, media_url=None, file_path=None):
    try:
        helping = dict(
            project_id=project_id,
            info=info,
            media_url=None,
        )
        if file_path:
            files = {'file': open(file_path, 'rb')}
            payload = {'project_id': project_id}
            res = _pybossa_req('post', 'helpingmaterial',
                               payload=payload, files=files)
        else:
            res = _pybossa_req('post', 'helpingmaterial', payload=helping)
        if res.get('id'):
            return HelpingMaterial(res)
        else:
            return res
    except:
        raise
Create a helping material for a given project ID. :param project_id: PYBOSSA Project ID :type project_id: integer :param info: PYBOSSA Helping Material info JSON field :type info: dict :param media_url: URL for a media file (image, video or audio) :type media_url: string :param file_path: File path to the local image, video or sound to upload. :type file_path: string :returns: True -- the response status code
def _socket_readlines(self, blocking=False):
    try:
        self.sock.setblocking(0)
    except socket.error as e:
        self.logger.error("socket error when setblocking(0): %s" % str(e))
        raise ConnectionDrop("connection dropped")

    while True:
        short_buf = b''
        newline = b'\r\n'

        select.select([self.sock], [], [], None if blocking else 0)
        try:
            short_buf = self.sock.recv(4096)

            if not short_buf:
                self.logger.error("socket.recv(): returned empty")
                raise ConnectionDrop("connection dropped")
        except socket.error as e:
            self.logger.error("socket error on recv(): %s" % str(e))
            if "Resource temporarily unavailable" in str(e):
                if not blocking:
                    if len(self.buf) == 0:
                        break

        self.buf += short_buf

        while newline in self.buf:
            line, self.buf = self.buf.split(newline, 1)
            yield line
Generator for complete lines, received from the server
def position_rates(self):
    return [self.ode_obj.getPositionRate(i) for i in range(self.LDOF)]
List of position rates for linear degrees of freedom.
def _derive_checksum(self, s):
    checksum = hashlib.sha256(bytes(s, "ascii")).hexdigest()
    return checksum[:4]
Derive the checksum.

:param str s: Random string for which to derive the checksum
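A worked example of the derivation: SHA-256 over the ASCII bytes, keeping the first four hex digits.

>>> import hashlib
>>> hashlib.sha256(bytes('test', 'ascii')).hexdigest()[:4]
'9f86'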
def get(ctx):
    user, project_name, _group = get_project_group_or_local(
        ctx.obj.get('project'), ctx.obj.get('group'))

    try:
        response = PolyaxonClient().experiment_group.get_experiment_group(
            user, project_name, _group)
        cache.cache(config_manager=GroupManager, response=response)
    except (PolyaxonHTTPError, PolyaxonShouldExitError, PolyaxonClientException) as e:
        Printer.print_error('Could not get experiment group `{}`.'.format(_group))
        Printer.print_error('Error message `{}`.'.format(e))
        sys.exit(1)

    get_group_details(response)
Get experiment group by uuid.

Uses [Caching](/references/polyaxon-cli/#caching)

Examples:

\b
```bash
$ polyaxon group -g 13 get
```
def load_file_list(path=None, regx='\.jpg', printable=True, keep_prefix=False):
    if path is None:
        path = os.getcwd()
    file_list = os.listdir(path)
    return_list = []
    for _, f in enumerate(file_list):
        if re.search(regx, f):
            return_list.append(f)

    if keep_prefix:
        for i, f in enumerate(return_list):
            return_list[i] = os.path.join(path, f)

    if printable:
        logging.info('Match file list = %s' % return_list)
        logging.info('Number of files = %d' % len(return_list))
    return return_list
r"""Return a file list in a folder by given a path and regular expression. Parameters ---------- path : str or None A folder path, if `None`, use the current directory. regx : str The regx of file name. printable : boolean Whether to print the files infomation. keep_prefix : boolean Whether to keep path in the file name. Examples ---------- >>> file_list = tl.files.load_file_list(path=None, regx='w1pre_[0-9]+\.(npz)')
def wait_until_final(self, poll_interval=1, timeout=60):
    start_time = time.time()
    elapsed = 0
    while (self.status != "complete" and
           (timeout <= 0 or elapsed < timeout)):
        time.sleep(poll_interval)
        self.refresh()
        elapsed = time.time() - start_time
Poll the URL to grab the latest status resource, at the given interval,
until the status is final or the timeout expires.

Args:
    poll_interval (int): how often to poll the status service.
    timeout (int): how long to poll the URL until giving up. Use <= 0
        to wait forever
def TP_dependent_property(self, T, P):
    # Optimistic attempt with the stored method first
    if self.method_P:
        if self.test_method_validity_P(T, P, self.method_P):
            try:
                prop = self.calculate_P(T, P, self.method_P)
                if self.test_property_validity(prop):
                    return prop
            except Exception:
                pass
    # Fall back to trying every valid method, sorted by preference
    self.sorted_valid_methods_P = self.select_valid_methods_P(T, P)
    for method_P in self.sorted_valid_methods_P:
        try:
            prop = self.calculate_P(T, P, method_P)
            if self.test_property_validity(prop):
                self.method_P = method_P
                return prop
        except Exception:
            pass
    # No method succeeded
    return None
Method to calculate the property with sanity checking and without
specifying a specific method. `select_valid_methods_P` is used to obtain
a sorted list of methods to try. Methods are then tried in order until
one succeeds. The methods are allowed to fail, and their results are
checked with `test_property_validity`. On success, the used method is
stored in the variable `method_P`.

If `method_P` is set, this method is first checked for validity with
`test_method_validity_P` for the specified temperature, and if it is
valid, it is then used to calculate the property. The result is checked
for validity, and returned if it is valid. If either of the checks fail,
the function retrieves a full list of valid methods with
`select_valid_methods_P` and attempts them as described above.

If no methods are found which succeed, returns None.

Parameters
----------
T : float
    Temperature at which to calculate the property, [K]
P : float
    Pressure at which to calculate the property, [Pa]

Returns
-------
prop : float
    Calculated property, [`units`]
def save(self, callit="misc", closeToo=True, fullpath=False):
    if fullpath is False:
        fname = self.abf.outPre + "plot_" + callit + ".jpg"
    else:
        fname = callit
    if not os.path.exists(os.path.dirname(fname)):
        os.mkdir(os.path.dirname(fname))
    plt.savefig(fname)
    self.log.info("saved [%s]", os.path.basename(fname))
    if closeToo:
        plt.close()
Save the existing figure as a JPG. Closes the figure afterward unless `closeToo` is False.
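A hypothetical call, assuming `plot` is an instance of the plotting class above with a figure already drawn:

plot.save(callit="overlay")  # saved next to the ABF as plot_overlay.jpg, then closed
plot.save("/tmp/fig.jpg", closeToo=False, fullpath=True)  # explicit path, figure left open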
def get_last_commit_line(git_path=None): if git_path is None: git_path = GIT_PATH output = check_output([git_path, "log", "--pretty=format:'%ad %h %s'", "--date=short", "-n1"]) return output.strip()[1:-1]
Get one-line description of HEAD commit for repository in current dir.
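A usage sketch; the output shape follows from the --pretty format string above (short date, abbreviated hash, subject):

line = get_last_commit_line()
print(line)  # e.g. "2024-01-15 3f2a1bc Fix typo in README" (illustrative, not real data)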
def might_need_auth(f):
    @wraps(f)
    def wrapper(cli_args):
        try:
            return_value = f(cli_args)
        except UnauthorizedException:
            config = config_from_env(config_from_file())
            username = _get_username(cli_args, config)
            if username is None:
                sys.exit("Please set a username (run `osf -h` for details).")
            else:
                sys.exit("You are not authorized to access this project.")
        return return_value
    return wrapper
Decorate a CLI function that might require authentication. Catches any UnauthorizedException raised, prints a helpful message and then exits.
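A hypothetical command wrapped by the decorator; `clone` is an illustrative name, not part of the source:

@might_need_auth
def clone(cli_args):
    # Raising UnauthorizedException here triggers the decorator's
    # username check and exits with a helpful message.
    raise UnauthorizedException()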
def validateRequest(self, uri, postVars, expectedSignature):
    s = uri
    for k, v in sorted(postVars.items()):
        s += k + v
    expected = base64.encodestring(
        hmac.new(self.auth_token, s, sha1).digest()).strip()
    return expected == expectedSignature
Validate a request from Plivo.

uri: the full URI that Plivo requested on your server
postVars: POST vars that Plivo sent with the request
expectedSignature: signature in the HTTP X-Plivo-Signature header

Returns True if the request passes validation, False if not.
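The scheme concatenates the URI with the sorted POST parameters and HMAC-SHA1s the result; a self-contained sketch of the same computation, with a made-up token and parameters (base64.b64encode is equivalent here to the deprecated encodestring(...).strip() for a 20-byte digest):

import base64
import hmac
from hashlib import sha1

auth_token = b"my-plivo-auth-token"      # made-up credential
uri = "https://example.com/answer/"      # made-up URI
post_vars = {"From": "15551234567", "To": "15557654321"}

# Concatenate URI with key+value pairs in sorted key order, then sign.
s = uri + "".join(k + v for k, v in sorted(post_vars.items()))
signature = base64.b64encode(hmac.new(auth_token, s.encode(), sha1).digest())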
def Tok(kind, loc=None): @llrule(loc, lambda parser: [kind]) def rule(parser): return parser._accept(kind) return rule
A rule that accepts a token of kind ``kind`` and returns it, or returns None.
def _append_element(self, render_func, pe): self._render_funcs.append(render_func) self._elements.append(pe)
Append a render function and the parameters needed to construct an equivalent PathElement, or the PathElement itself.
async def handle_message(self, message, filters): data = self._unpack_message(message) logger.debug(data) if data.get('type') == 'error': raise SlackApiError( data.get('error', {}).get('msg', str(data)) ) elif self.message_is_to_me(data): text = data['text'][len(self.address_as):].strip() if text == 'help': return self._respond( channel=data['channel'], text=self._instruction_list(filters), ) elif text == 'version': return self._respond( channel=data['channel'], text=self.VERSION, ) for _filter in filters: if _filter.matches(data): logger.debug('Response triggered') async for response in _filter: self._respond(channel=data['channel'], text=response)
Handle an incoming message appropriately. Arguments: message (:py:class:`aiohttp.websocket.Message`): The incoming message to handle. filters (:py:class:`list`): The filters to apply to incoming messages.
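The filter protocol implied here needs a synchronous matches() predicate and asynchronous iteration yielding reply strings; a minimal hypothetical filter (not part of the source):

class GreetingFilter:
    # Hypothetical filter: matches() gates on the message text, and
    # async iteration yields each reply to post back to the channel.
    def matches(self, data):
        return "hello" in data.get("text", "").lower()

    async def __aiter__(self):
        yield "Hi there!"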
def _parse_field_value(line): if line.startswith(':'): return None, None if ':' not in line: return line, '' field, value = line.split(':', 1) value = value[1:] if value.startswith(' ') else value return field, value
Parse the field and value from a line.
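Worked examples of the line grammar (server-sent-events style) that the function implements:

_parse_field_value("data: hello")  # -> ("data", "hello")   one leading space stripped
_parse_field_value("data:hello")   # -> ("data", "hello")
_parse_field_value("retry")        # -> ("retry", "")       field with no value
_parse_field_value(": keepalive")  # -> (None, None)        comment line, ignored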
def format(obj, options):
    formatters = {
        float_types: lambda x: '{:.{}g}'.format(x, options.digits),
    }
    for _types, fmtr in formatters.items():
        if isinstance(obj, _types):
            return fmtr(obj)
    try:
        if six.PY2 and isinstance(obj, six.string_types):
            return str(obj.encode('utf-8'))
        return str(obj)
    except Exception:
        return 'OBJECT'
Return a string representation of the Python object Args: obj: The Python object options: Format options
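A usage sketch with a minimal stand-in for the options object (only the `digits` attribute is assumed):

class _Options:
    digits = 3  # significant digits for floats

format(3.14159, _Options())  # -> '3.14'  (three significant digits)
format([1, 2], _Options())   # -> '[1, 2]' (falls through to str())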
def _ref(self, param, base_name=None): name = base_name or param.get('title', '') or param.get('name', '') pointer = self.json_pointer + name self.parameter_registry[name] = param return {'$ref': pointer}
Store a parameter schema and return a reference to it.

:param param: Swagger parameter definition.
:param base_name: Name that should be used for the reference.

:rtype: dict
:returns: JSON pointer to the original parameter definition.
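Illustrative behaviour, assuming `converter` is an instance whose json_pointer is '#/parameters/':

param = {'name': 'page', 'in': 'query', 'type': 'integer'}
ref = converter._ref(param)
# ref == {'$ref': '#/parameters/page'}
# converter.parameter_registry['page'] now holds the original definition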
def store_drop(cls, resource: str, session: Optional[Session] = None) -> 'Action': action = cls.make_drop(resource) _store_helper(action, session=session) return action
Store a "drop" event. :param resource: The normalized name of the resource to store Example: >>> from bio2bel.models import Action >>> Action.store_drop('hgnc')
def create_task(self): return self.spec_class(self.spec, self.get_task_spec_name(), lane=self.get_lane(), description=self.node.get('name', None))
Create an instance of the task appropriately. A subclass can override this method to get extra information from the node.
def str_to_rgb(self, str):
    str = str.lower()
    for ch in "_- ":
        str = str.replace(ch, "")
    if str in named_colors:
        return named_colors[str]
    for suffix in ["ish", "ed", "y", "like"]:
        str = re.sub("(.*?)" + suffix + "$", "\\1", str)
    str = re.sub("(.*?)dd$", "\\1d", str)
    matches = []
    for name in named_colors:
        if name in str or str in name:
            matches.append(named_colors[name])
    if matches:
        return choice(matches)
    return named_colors["transparent"]
Return RGB values based on a descriptive string.

If the given str is a named color, return its RGB values. Otherwise,
return a random named color that has str in its name, or a random
named color whose name appears in str. Specific suffixes (-ish, -ed,
-y and -like) are recognised as well; for example, if you need a
random variation of "red" you can use "reddish" (or "greenish",
"yellowy", etc.)
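Illustrative calls, assuming `colors` is an instance of the class above:

colors.str_to_rgb("red")        # exact named color
colors.str_to_rgb("reddish")    # suffix stripped -> a random red-like match
colors.str_to_rgb("deep blue")  # random named color whose name overlaps the string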
def install_setuptools(python_cmd='python', use_sudo=True): setuptools_version = package_version('setuptools', python_cmd) distribute_version = package_version('distribute', python_cmd) if setuptools_version is None: _install_from_scratch(python_cmd, use_sudo) else: if distribute_version is None: _upgrade_from_setuptools(python_cmd, use_sudo) else: _upgrade_from_distribute(python_cmd, use_sudo)
Install the latest version of `setuptools`_. :: import burlap burlap.python_setuptools.install_setuptools()
def readheaders(self): self.dict = {} self.unixfrom = '' self.headers = lst = [] self.status = '' headerseen = "" firstline = 1 startofline = unread = tell = None if hasattr(self.fp, 'unread'): unread = self.fp.unread elif self.seekable: tell = self.fp.tell while 1: if tell: try: startofline = tell() except IOError: startofline = tell = None self.seekable = 0 line = self.fp.readline() if not line: self.status = 'EOF in headers' break if firstline and line.startswith('From '): self.unixfrom = self.unixfrom + line continue firstline = 0 if headerseen and line[0] in ' \t': lst.append(line) x = (self.dict[headerseen] + "\n " + line.strip()) self.dict[headerseen] = x.strip() continue elif self.iscomment(line): continue elif self.islast(line): break headerseen = self.isheader(line) if headerseen: lst.append(line) self.dict[headerseen] = line[len(headerseen)+1:].strip() continue elif headerseen is not None: continue else: if not self.dict: self.status = 'No headers' else: self.status = 'Non-header line where header expected' if unread: unread(line) elif tell: self.fp.seek(startofline) else: self.status = self.status + '; bad seek' break
Read header lines. Read header lines up to the entirely blank line that terminates them. The (normally blank) line that ends the headers is skipped, but not included in the returned list. If a non-header line ends the headers, (which is an error), an attempt is made to backspace over it; it is never included in the returned list. The variable self.status is set to the empty string if all went well, otherwise it is an error message. The variable self.headers is a completely uninterpreted list of lines contained in the header (so printing them will reproduce the header exactly as it appears in the file).
def remove_highlight_nodes(graph: BELGraph, nodes: Optional[Iterable[BaseEntity]] = None) -> None:
    for node in graph if nodes is None else nodes:
        if is_node_highlighted(graph, node):
            del graph.node[node][NODE_HIGHLIGHT]
Removes the highlight from the given nodes, or all nodes if none given. :param graph: A BEL graph :param nodes: The list of nodes to un-highlight
def do_vars(self, line): if self.bot._vars: max_name_len = max([len(name) for name in self.bot._vars]) for i, (name, v) in enumerate(self.bot._vars.items()): keep = i < len(self.bot._vars) - 1 self.print_response("%s = %s" % (name.ljust(max_name_len), v.value), keep=keep) else: self.print_response("No vars")
List bot variables and values