code (string, lengths 51 to 2.38k) | docstring (string, lengths 4 to 15.2k)
def set_default_encoder_parameters():
    ARGTYPES = [ctypes.POINTER(CompressionParametersType)]
    OPENJP2.opj_set_default_encoder_parameters.argtypes = ARGTYPES
    OPENJP2.opj_set_default_encoder_parameters.restype = ctypes.c_void_p
    cparams = CompressionParametersType()
    OPENJP2.opj_set_default_encoder_parameters(ctypes.byref(cparams))
    return cparams
Wraps openjp2 library function opj_set_default_encoder_parameters.

Sets encoding parameters to default values. That means lossless, 1 tile,
size of precinct: 2^15 x 2^15 (means 1 precinct),
size of code-block: 64 x 64,
number of resolutions: 6,
no SOP marker in the codestream,
no EPH marker in the codestream,
no sub-sampling in x or y direction,
no mode switch activated,
progression order: LRCP,
no index file,
no ROI upshifted,
no offset of the origin of the image,
no offset of the origin of the tiles,
reversible DWT 5-3.

The signature for this function differs from its C library counterpart, as
the C function's pass-by-reference parameter becomes the Python return value.

Returns
-------
cparams : CompressionParametersType
    Compression parameters.
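A quick usage sketch, assuming the glymur-style ctypes bindings above (OPENJP2, CompressionParametersType) are loaded; the struct field name below is an assumption based on the underlying opj_cparameters_t:

# hypothetical field access; 6 resolutions is the documented default
cparams = set_default_encoder_parameters()
print(cparams.numresolution)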
def get_current_path(self):
    path = self.tree_view.fileInfo(
        self.tree_view.currentIndex()).filePath()
    if not path:
        path = self.tree_view.root_path
    return path
Gets the path of the currently selected item.
def iter_setup_packages(srcdir, packages):
    for packagename in packages:
        package_parts = packagename.split('.')
        package_path = os.path.join(srcdir, *package_parts)
        setup_package = os.path.relpath(
            os.path.join(package_path, 'setup_package.py'))
        if os.path.isfile(setup_package):
            module = import_file(setup_package,
                                 name=packagename + '.setup_package')
            yield module
A generator that finds and imports all of the ``setup_package.py``
modules in the source packages.

Returns
-------
modgen : generator
    A generator that yields the imported ``setup_package.py`` modules.
async def sendmail(
    self, sender, recipients, message, mail_options=None, rcpt_options=None
):
    if isinstance(recipients, str):
        recipients = [recipients]
    if mail_options is None:
        mail_options = []
    if rcpt_options is None:
        rcpt_options = []

    await self.ehlo_or_helo_if_needed()

    if self.supports_esmtp:
        if "size" in self.esmtp_extensions:
            mail_options.append("size={}".format(len(message)))

    await self.mail(sender, mail_options)

    errors = []
    for recipient in recipients:
        try:
            await self.rcpt(recipient, rcpt_options)
        except SMTPCommandFailedError as e:
            errors.append(e)

    if len(recipients) == len(errors):
        raise SMTPNoRecipientError(errors)

    await self.data(message)
    return errors
Performs an entire e-mail transaction.

Example:
    >>> try:
    >>>     with SMTP() as client:
    >>>         try:
    >>>             r = client.sendmail(sender, recipients, message)
    >>>         except SMTPException:
    >>>             print("Error while sending message.")
    >>>         else:
    >>>             print("Result: {}.".format(r))
    >>> except ConnectionError as e:
    >>>     print(e)
    Result: {}.

Args:
    sender (str): E-mail address of the sender.
    recipients (list of str or str): E-mail address(es) of the
        recipient(s).
    message (str or bytes): Message body.
    mail_options (list of str): ESMTP options (such as *8BITMIME*) to
        send along the *MAIL* command.
    rcpt_options (list of str): ESMTP options (such as *DSN*) to send
        along all the *RCPT* commands.

Raises:
    ConnectionResetError: If the connection with the server is
        unexpectedly lost.
    SMTPCommandFailedError: If the server refuses our EHLO/HELO greeting.
    SMTPCommandFailedError: If the server refuses our MAIL command.
    SMTPCommandFailedError: If the server refuses our DATA command.
    SMTPNoRecipientError: If the server refuses all given recipients.

Returns:
    dict: A dict containing an entry for each recipient that was refused.
    Each entry is associated with a (code, message) 2-tuple containing
    the error code and message, as returned by the server. When
    everything runs smoothly, the returned dict is empty.

.. note:: The connection remains open afterwards. It's your
    responsibility to close it. A good practice is to use the
    asynchronous context manager instead. See :meth:`SMTP.__aenter__`
    for further details.
def runCommandReturnOutput(cmd):
    splits = shlex.split(cmd)
    proc = subprocess.Popen(
        splits, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = proc.communicate()
    if proc.returncode != 0:
        # CalledProcessError expects (returncode, cmd, ...), not (stdout, stderr)
        raise subprocess.CalledProcessError(
            proc.returncode, cmd, output=stdout, stderr=stderr)
    return stdout, stderr
Runs a shell command and returns its stdout and stderr.
def _str_dtype(dtype):
    assert dtype.byteorder != '>'
    if dtype.kind == 'i':
        assert dtype.itemsize == 8
        return 'int64'
    elif dtype.kind == 'f':
        assert dtype.itemsize == 8
        return 'float64'
    elif dtype.kind == 'U':
        return 'U%d' % (dtype.itemsize // 4)
    else:
        raise UnhandledDtypeException("Bad dtype '%s'" % dtype)
Represent dtypes without byte order, as earlier Java tickstore code doesn't support explicit byte order.
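A quick illustration of the mapping; note that a numpy unicode dtype stores 4 bytes per code point, hence the division by 4:

import numpy as np

print(_str_dtype(np.dtype('<i8')))   # 'int64'
print(_str_dtype(np.dtype('<f8')))   # 'float64'
print(_str_dtype(np.dtype('<U10')))  # 'U10' (itemsize is 40 bytes)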
def _psed(text, before, after, limit, flags):
    atext = text
    if limit:
        limit = re.compile(limit)
        # str.split() does not accept a compiled pattern; use the pattern's
        # own split and keep everything after the first match
        comps = limit.split(text)
        atext = ''.join(comps[1:])
    count = 1
    if 'g' in flags:
        count = 0
        flags = flags.replace('g', '')
    aflags = 0
    for flag in flags:
        aflags |= RE_FLAG_TABLE[flag]
    before = re.compile(before, flags=aflags)
    text = re.sub(before, after, atext, count=count)
    return text
Does the actual work for file.psed, so that single lines can be passed in
def trigger_actions(self, subsystem):
    for py3_module, trigger_action in self.udev_consumers[subsystem]:
        if trigger_action in ON_TRIGGER_ACTIONS:
            self.py3_wrapper.log(
                "%s udev event, refresh consumer %s"
                % (subsystem, py3_module.module_full_name)
            )
            py3_module.force_update()
Refresh all modules which subscribed to the given subsystem.
def to_text(self, line):
    return getattr(self, self.ENTRY_TRANSFORMERS[line.__class__])(line)
Return the textual representation of the given `line`.
def fault_zone(self, zone, simulate_wire_problem=False):
    if isinstance(zone, tuple):
        expander_idx, channel = zone
        zone = self._zonetracker.expander_to_zone(expander_idx, channel)
    status = 2 if simulate_wire_problem else 1
    self.send("L{0:02}{1}\r".format(zone, status))
Faults a zone if we are emulating a zone expander.

:param zone: zone to fault
:type zone: int
:param simulate_wire_problem: whether or not to simulate a wire fault
:type simulate_wire_problem: bool
def query_certificate(self, cert_hash):
    try:
        cquery = self.pssl.query_cert(cert_hash)
    except Exception:
        # self.error() is expected to report the failure and abort, so
        # cquery is never used uninitialized below
        self.error('Exception during processing with passiveSSL. '
                   'This happens if the given hash is not sha1 or contains '
                   'dashes/colons etc. '
                   'Please make sure to submit a clean formatted sha1 hash.')
    try:
        cfetch = self.pssl.fetch_cert(cert_hash, make_datetime=False)
    except Exception:
        cfetch = {}
    return {'query': cquery, 'cert': cfetch}
Queries Circl.lu Passive SSL for a certificate hash using the PyPSSL
class. Reports an error if nothing is found.

:param cert_hash: hash to query for
:type cert_hash: str
:return: python dict of results
:rtype: dict
def finish(self):
    os.system('setterm -cursor on')
    if self.nl:
        Echo(self.label).done()
Update widgets on finish
def check_token(self, token, allowed_roles, resource, method):
    resource_conf = config.DOMAIN[resource]
    audiences = resource_conf.get('audiences', config.JWT_AUDIENCES)
    return self._perform_verification(token, audiences, allowed_roles)
This function is called when a token is sent through the access_token
parameter or the Authorization header as specified in the OAuth 2
specification.

The provided token is validated with the JWT_SECRET defined in the Eve
configuration. The token issuer (iss claim) must be the one specified by
JWT_ISSUER and the audience (aud claim) must be one of the value(s)
defined by either the "audiences" resource parameter or the global
JWT_AUDIENCES configuration.

If JWT_ROLES_CLAIM is defined and a claim by that name is present in the
token, roles are checked using this claim.

If a JWT_SCOPE_CLAIM is defined and a claim by that name is present in
the token, the claim value is checked, and if "viewer" is present, only
GET and HEAD methods will be allowed. The scope name is then added to the
list of roles with the scope: prefix.

If the validation succeeds, the claims are stored and accessible through
the get_authen_claims() method.
def extractVersion(string, default='?'):
    return extract(VERSION_PATTERN, string,
                   condense=True, default=default, one=True)
Extracts a three digit standard format version number
def calc_fft_with_PyCUDA(Signal):
    print("starting fft")
    Signal = Signal.astype(_np.float32)
    Signal_gpu = _gpuarray.to_gpu(Signal)
    Signalfft_gpu = _gpuarray.empty(len(Signal)//2+1, _np.complex64)
    plan = _Plan(Signal.shape, _np.float32, _np.complex64)
    _fft(Signal_gpu, Signalfft_gpu, plan)
    Signalfft = Signalfft_gpu.get()
    Signalfft = _np.hstack((Signalfft,
                            _np.conj(_np.flipud(Signalfft[1:len(Signal)//2]))))
    print("fft done")
    return Signalfft
Calculates the FFT of the passed signal by using the scikit-cuda library,
which relies on PyCUDA.

Parameters
----------
Signal : ndarray
    Signal to be transformed into Fourier space

Returns
-------
Signalfft : ndarray
    Array containing the signal's FFT
def list_ifd(self):
    i = self._first_ifd()
    ifds = []
    while i:
        ifds.append(i)
        i = self._next_ifd(i)
    return ifds
Return the list of IFDs in the header.
def highlight_null(self, null_color='red'):
    self.applymap(self._highlight_null, null_color=null_color)
    return self
Shade the background ``null_color`` for missing values.

Parameters
----------
null_color : str

Returns
-------
self : Styler
def with_host(self, host):
    if not isinstance(host, str):
        raise TypeError("Invalid host type")
    if not self.is_absolute():
        raise ValueError("host replacement is not allowed "
                         "for relative URLs")
    if not host:
        raise ValueError("host removing is not allowed")
    host = self._encode_host(host)
    val = self._val
    return URL(
        self._val._replace(
            netloc=self._make_netloc(
                val.username, val.password, host, val.port, encode=False
            )
        ),
        encoded=True,
    )
Return a new URL with host replaced. Autoencode host if needed. Changing host for relative URLs is not allowed, use .join() instead.
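A usage sketch; this reads like yarl's URL.with_host, so the import is an assumption about the surrounding library:

from yarl import URL

url = URL('http://user:pass@example.com:8080/path')
print(url.with_host('python.org'))  # http://user:pass@python.org:8080/path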
def get_all_json_from_indexq(self):
    files = self.get_all_as_list()
    out = []
    for efile in files:
        out.extend(self._open_file(efile))
    return out
Gets all data from the todo files in indexq and returns one huge list of all data.
def write_data(self, data, start_position=0):
    if len(data) > self.height():
        raise ValueError('Data too long (too many strings)')
    for i in range(len(data)):
        self.write_line(start_position + i, data[i])
Write data starting from the specified line.

:param data: strings to write, each one on a new line
:param start_position: starting line
:return:
def bind_unix_socket(file_, mode=0o600, backlog=_DEFAULT_BACKLOG):
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.setblocking(0)
    try:
        st = os.stat(file_)
    except OSError as err:
        if err.errno != errno.ENOENT:
            raise
    else:
        if stat.S_ISSOCK(st.st_mode):
            os.remove(file_)
        else:
            raise ValueError('File %s exists and is not a socket' % file_)
    sock.bind(file_)
    os.chmod(file_, mode)
    sock.listen(backlog)
    return sock
Creates a listening unix socket. If a socket with the given name already exists, it will be deleted. If any other file with that name exists, an exception will be raised. Returns a socket object (not a list of socket objects like `bind_sockets`)
def _edge_list_to_sframe(ls, src_column_name, dst_column_name):
    sf = SFrame()
    if type(ls) == list:
        cols = reduce(set.union, (set(v.attr.keys()) for v in ls))
        sf[src_column_name] = [e.src_vid for e in ls]
        sf[dst_column_name] = [e.dst_vid for e in ls]
        for c in cols:
            sf[c] = [e.attr.get(c) for e in ls]
    elif type(ls) == Edge:
        sf[src_column_name] = [ls.src_vid]
        sf[dst_column_name] = [ls.dst_vid]
    else:
        raise TypeError('Edges type {} is Not supported.'.format(type(ls)))
    return sf
Convert a list of edges into an SFrame.
def dict_head(d, N=5):
    return {k: d[k] for k in list(d.keys())[:N]}
Return the head of a dictionary.

Default is to return the first 5 key/value pairs in a dictionary. Note
that before Python 3.7 dictionary ordering is arbitrary, so the result
may be effectively random.

Args:
    d: Dictionary to take the head of.
    N: Number of elements to return.

Returns:
    dict: the first N items of the dictionary.
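A quick usage example (on Python 3.7+, where dicts preserve insertion order):

d = {'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5, 'f': 6}
print(dict_head(d, N=2))  # {'a': 1, 'b': 2}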
def on_lstCategories_itemSelectionChanged(self):
    self.clear_further_steps()
    purpose = self.selected_purpose()
    if not purpose:
        return
    self.lblDescribeCategory.setText(purpose["description"])
    self.lblIconCategory.setPixmap(QPixmap(
        resources_path('img', 'wizard', 'keyword-category-%s.svg'
                       % (purpose['key'] or 'notset'))))
    self.parent.pbnNext.setEnabled(True)
Update purpose description label.

.. note:: This is an automatic Qt slot executed when the purpose
    selection changes.
def optimise_z(z, *args):
    x, y, elements, coordinates = args
    window_com = np.array([x, y, z])
    return pore_diameter(elements, coordinates, com=window_com)[0]
Return pore diameter for coordinates optimisation in z direction.
def simulated_quantize(x, num_bits, noise):
    shape = x.get_shape().as_list()
    if not (len(shape) >= 2 and shape[-1] > 1):
        return x
    max_abs = tf.reduce_max(tf.abs(x), -1, keepdims=True) + 1e-9
    max_int = 2 ** (num_bits - 1) - 1
    scale = max_abs / max_int
    x /= scale
    x = tf.floor(x + noise)
    x *= scale
    return x
Simulate quantization to num_bits bits, with externally-stored scale.

num_bits is the number of bits used to store each value. noise is a
float32 Tensor containing values in [0, 1). Each value in noise should
take different values across different steps, approximating a uniform
distribution over [0, 1). In the case of replicated TPU training, noise
should be identical across replicas in order to keep the parameters
identical across replicas.

The natural choice for noise would be tf.random_uniform(), but this is
not possible for TPU, since there is currently no way to seed the
different cores to produce identical values across replicas. Instead we
use noise_from_step_num() (see below).

The quantization scheme is as follows:

Compute the maximum absolute value by row (call this max_abs).
Store this either in an auxiliary variable or in an extra column.

Divide the parameters by (max_abs / (2^(num_bits-1)-1)). This gives a
float32 value in the range [-(2^(num_bits-1)-1), 2^(num_bits-1)-1].

Unbiased randomized roundoff by adding noise and rounding down. This
produces a signed integer with num_bits bits which can then be stored.

Args:
    x: a float32 Tensor
    num_bits: an integer between 1 and 22
    noise: a float Tensor broadcastable to the shape of x.

Returns:
    a float32 Tensor
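For intuition, a minimal NumPy sketch of the same scheme; this is not the TF implementation above, and the random noise stand-in is an assumption (the real code derives noise from the step number):

import numpy as np

def simulated_quantize_np(x, num_bits, noise):
    max_abs = np.max(np.abs(x), axis=-1, keepdims=True) + 1e-9  # per-row scale
    max_int = 2 ** (num_bits - 1) - 1
    scale = max_abs / max_int
    # unbiased stochastic rounding: floor(x/scale + uniform noise)
    return np.floor(x / scale + noise) * scale

x = np.array([[0.10, -0.70, 0.40]])
noise = np.random.random(x.shape)  # stand-in for noise_from_step_num()
print(simulated_quantize_np(x, num_bits=8, noise=noise))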
def clear(self):
    self.io.seek(0)
    self.io.truncate()
    for item in self.monitors:
        item[2] = 0
Removes all data from the buffer.
def fill_notebook(work_notebook, script_blocks):
    for blabel, bcontent, lineno in script_blocks:
        if blabel == 'code':
            add_code_cell(work_notebook, bcontent)
        else:
            add_markdown_cell(work_notebook, bcontent + '\n')
Writes the Jupyter notebook cells.

Parameters
----------
script_blocks : list
    Each list element should be a tuple of (label, content, lineno).
def validate_enum_attribute(self, attribute: str,
                            candidates: Set[Union[str, int, float]]) -> None:
    self.add_errors(
        validate_enum_attribute(self.fully_qualified_name, self._spec,
                                attribute, candidates))
Validates that the attribute value is among the candidates
def get_migrations_to_down(self, migration_id):
    migration_id = MigrationFile.validate_id(migration_id)
    if not migration_id:
        return []
    migrations = self.get_migration_files()
    last_migration_id = self.get_last_migrated_id()
    if migration_id in (m.id for m in self.get_unregistered_migrations()):
        logger.error('Migration is not applied %s' % migration_id)
        return []
    try:
        migration = [m for m in migrations if m.id == migration_id][0]
    except IndexError:
        logger.error('Migration does not exist %s' % migration_id)
        return []
    return list(reversed([m for m in migrations
                          if migration.id <= m.id <= last_migration_id]))
Find migrations to rollback.
def pointspace(self, **kwargs):
    scale_array = numpy.array([
        [prefix_factor(self.independent)**(-1)],
        [prefix_factor(self.dependent)**(-1)]
    ])
    linspace = numpy.linspace(self.limits[0], self.limits[1], **kwargs)
    return {
        'data': self.data.array * scale_array,
        'fit': (numpy.array([linspace, self.fitted_function(linspace)])
                * scale_array)
    }
Returns a dictionary with the keys `data` and `fit`.

`data` is just `scipy_data_fitting.Data.array`.

`fit` is a two row [`numpy.ndarray`][1]; the first row values correspond
to the independent variable and are generated using [`numpy.linspace`][2].
The second row are the values of `scipy_data_fitting.Fit.fitted_function`
evaluated on the linspace.

For both `fit` and `data`, each row will be scaled by the corresponding
inverse prefix if given in `scipy_data_fitting.Fit.independent` or
`scipy_data_fitting.Fit.dependent`.

Any keyword arguments are passed to [`numpy.linspace`][2].

[1]: http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.html
[2]: http://docs.scipy.org/doc/numpy/reference/generated/numpy.linspace.html
def to_frame(self, index=True, name=None):
    from pandas import DataFrame
    if name is not None:
        if not is_list_like(name):
            raise TypeError("'name' must be a list / sequence "
                            "of column names.")
        if len(name) != len(self.levels):
            raise ValueError("'name' should have same length as "
                             "number of levels on index.")
        idx_names = name
    else:
        idx_names = self.names

    result = DataFrame(
        OrderedDict([
            ((level if lvlname is None else lvlname),
             self._get_level_values(level))
            for lvlname, level in zip(idx_names, range(len(self.levels)))
        ]),
        copy=False
    )

    if index:
        result.index = self
    return result
Create a DataFrame with the levels of the MultiIndex as columns.

Column ordering is determined by the DataFrame constructor with data as
a dict.

.. versionadded:: 0.24.0

Parameters
----------
index : boolean, default True
    Set the index of the returned DataFrame as the original MultiIndex.
name : list / sequence of strings, optional
    The passed names should substitute index level names.

Returns
-------
DataFrame : a DataFrame containing the original MultiIndex data.

See Also
--------
DataFrame
def getOverlayDualAnalogTransform(self, ulOverlay, eWhich):
    fn = self.function_table.getOverlayDualAnalogTransform
    pvCenter = HmdVector2_t()
    pfRadius = c_float()
    result = fn(ulOverlay, eWhich, byref(pvCenter), byref(pfRadius))
    return result, pvCenter, pfRadius.value
Gets the analog input to Dual Analog coordinate scale for the specified overlay.
def render_path(self, template_path, *context, **kwargs):
    loader = self._make_loader()
    template = loader.read(template_path)
    return self._render_string(template, *context, **kwargs)
Render the template at the given path using the given context. Read the render() docstring for more information.
def OnSafeModeEntry(self, event):
    self.main_window.main_menu.enable_file_approve(True)
    self.main_window.grid.Refresh()
    event.Skip()
Safe mode entry event handler
def apply_visitor(visitor, decl_inst):
    fname = 'visit_' + decl_inst.__class__.__name__[:-2]
    if not hasattr(visitor, fname):
        raise runtime_errors.visit_function_has_not_been_found_t(
            visitor, decl_inst)
    return getattr(visitor, fname)()
Applies a visitor to a declaration instance.

:param visitor: instance
:type visitor: :class:`type_visitor_t` or :class:`decl_visitor_t`
def _create_dist(self, dist_tgt, dist_target_dir, setup_requires_pex,
                 snapshot_fingerprint, is_platform_specific):
    self._copy_sources(dist_tgt, dist_target_dir)

    setup_py_snapshot_version_argv = self._generate_snapshot_bdist_wheel_argv(
        snapshot_fingerprint, is_platform_specific)

    cmd = safe_shlex_join(setup_requires_pex.cmdline(setup_py_snapshot_version_argv))
    with self.context.new_workunit('setup.py', cmd=cmd,
                                   labels=[WorkUnitLabel.TOOL]) as workunit:
        with pushd(dist_target_dir):
            result = setup_requires_pex.run(args=setup_py_snapshot_version_argv,
                                            stdout=workunit.output('stdout'),
                                            stderr=workunit.output('stderr'))
            if result != 0:
                raise self.BuildLocalPythonDistributionsError(
                    "Installation of python distribution from target {target} "
                    "into directory {into_dir} "
                    "failed (return value of run() was: {rc!r}).\n"
                    "The pex with any requirements is located at: {interpreter}.\n"
                    "The host system's compiler and linker were used.\n"
                    "The setup command was: {command}."
                    .format(target=dist_tgt,
                            into_dir=dist_target_dir,
                            rc=result,
                            interpreter=setup_requires_pex.path(),
                            command=setup_py_snapshot_version_argv))
Create a .whl file for the specified python_distribution target.
def greg2julian(year, month, day, hour, minute, second):
    year = year.astype(float)
    month = month.astype(float)
    day = day.astype(float)
    timeut = (hour.astype(float) + (minute.astype(float) / 60.0) +
              (second / 3600.0))
    julian_time = ((367.0 * year) -
                   np.floor(7.0 * (year +
                                   np.floor((month + 9.0) / 12.0)) / 4.0) -
                   np.floor(3.0 * (np.floor((year + (month - 9.0) / 7.0) /
                                            100.0) + 1.0) / 4.0) +
                   np.floor((275.0 * month) / 9.0) +
                   day + 1721028.5 + (timeut / 24.0))
    return julian_time
Function to convert a date from Gregorian to Julian format.

:param year: Year of events (integer numpy.ndarray)
:param month: Month of events (integer numpy.ndarray)
:param day: Days of event (integer numpy.ndarray)
:param hour: Hour of event (integer numpy.ndarray)
:param minute: Minute of event (integer numpy.ndarray)
:param second: Second of event (float numpy.ndarray)
:returns julian_time: Julian representation of the time
    (as float numpy.ndarray)
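A quick sanity check of the formula: by definition, 2000-01-01 12:00:00 UT is Julian Date 2451545.0.

import numpy as np

jd = greg2julian(np.array([2000]), np.array([1]), np.array([1]),
                 np.array([12]), np.array([0]), np.array([0.0]))
print(jd)  # [2451545.]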
def load_or_import_from_config(key, app=None, default=None):
    app = app or current_app
    imp = app.config.get(key)
    return obj_or_import_string(imp, default=default)
Load or import value from config. :returns: The loaded value.
def probe(w, name=None):
    if not isinstance(w, WireVector):
        raise PyrtlError('Only WireVectors can be probed')
    if name is None:
        name = '(%s: %s)' % (probeIndexer.make_valid_string(), w.name)
    print("Probe: " + name + ' ' + get_stack(w))
    p = Output(name=name)
    p <<= w
    return w
Print useful information about a WireVector when in debug mode.

:param w: WireVector from which to get info
:param name: optional name for probe (defaults to an autogenerated name)
:return: original WireVector w

Probe can be inserted into an existing design easily as it returns the
original wire unmodified. For example ``y <<= x[0:3] + 4`` could be
turned into ``y <<= probe(x)[0:3] + 4`` to give visibility into both the
origin of ``x`` (including the line where that WireVector was originally
created) and the run-time values of ``x`` (which will be named and thus
show up by default in a trace). Likewise ``y <<= probe(x[0:3]) + 4``,
``y <<= probe(x[0:3] + 4)``, and ``probe(y) <<= x[0:3] + 4`` are all
valid uses of `probe`.

Note: `probe` does actually add a wire to the working block of w (which
can confuse various post-processing transforms such as output to
verilog).
def process_selectors(self, index=0, flags=0):
    return self.parse_selectors(self.selector_iter(self.pattern),
                                index, flags)
Process selectors. We do our own selectors as BeautifulSoup4 has some annoying quirks, and we don't really need to do nth selectors or siblings or descendants etc.
def enclosure_directed(self):
    root, enclosure = polygons.enclosure_tree(self.polygons_closed)
    self._cache['root'] = root
    return enclosure
Networkx DiGraph of polygon enclosure
def rename(self, **mapping):
    params = {k: v for k, v in self.get_param_values() if k != 'name'}
    return self.__class__(rename=mapping,
                          source=(self._source() if self._source else None),
                          linked=self.linked, **params)
The rename method allows stream parameters to be allocated to new names to avoid clashes with other stream parameters of the same name. Returns a new clone of the stream instance with the specified name mapping.
def start(self, local_port, remote_address, remote_port):
    self.local_port = local_port
    self.remote_address = remote_address
    self.remote_port = remote_port
    logger.debug("Starting ssh tunnel {0}:{1}:{2} for "
                 "{3}@{4}".format(local_port, remote_address, remote_port,
                                  self.username, self.address))
    self.forward = Forward(local_port, remote_address, remote_port,
                           self.transport)
    self.forward.start()
Start ssh tunnel.

:type local_port: int
:param local_port: local tunnel endpoint ip binding
:type remote_address: str
:param remote_address: remote tunnel endpoint ip binding
:type remote_port: int
:param remote_port: remote tunnel endpoint port binding
def rate_of_change(data, period):
    catch_errors.check_for_period_error(data, period)
    rocs = [((data[idx] - data[idx - (period - 1)]) /
             data[idx - (period - 1)]) * 100
            for idx in range(period - 1, len(data))]
    rocs = fill_for_noncomputable_vals(data, rocs)
    return rocs
Rate of Change.

Formula:
(Close - Close n periods ago) / (Close n periods ago) * 100
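A small worked example, assuming the helper functions above are available (fill_for_noncomputable_vals pads the leading entries that cannot be computed, typically with NaN):

data = [10.0, 11.0, 12.0, 13.0]
print(rate_of_change(data, period=2))
# one padded value, then (11-10)/10*100 = 10.0,
# (12-11)/11*100 ~ 9.09, (13-12)/12*100 ~ 8.33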
def probes_used_extract_scores(full_scores, same_probes):
    import numpy as np
    if full_scores.shape[1] != same_probes.shape[0]:
        # raising a bare string is invalid in Python 3
        raise ValueError("Size mismatch")
    model_scores = np.ndarray((full_scores.shape[0], np.sum(same_probes)),
                              'float64')
    c = 0
    for i in range(0, full_scores.shape[1]):
        if same_probes[i]:
            for j in range(0, full_scores.shape[0]):
                model_scores[j, c] = full_scores[j, i]
            c += 1
    return model_scores
Extracts a matrix of scores for a model, given a boolean row vector
(probes_used) indicating which probes were used.
def _readall(self, file, count):
    data = b""
    while len(data) < count:
        d = file.read(count - len(data))
        if not d:
            raise GeneralProxyError("Connection closed unexpectedly")
        data += d
    return data
Receive EXACTLY the number of bytes requested from the file object. Blocks until the required number of bytes have been received.
def lockfile(lockfile_name, lock_wait_timeout=-1):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            lock = FileLock(lockfile_name)
            try:
                lock.acquire(lock_wait_timeout)
            except AlreadyLocked:
                return
            except LockTimeout:
                return
            try:
                result = func(*args, **kwargs)
            finally:
                lock.release()
            return result
        return wrapper
    return decorator
Only runs the method if the lockfile is not acquired.

You should create a setting ``LOCKFILE_PATH`` which points to
``/home/username/tmp/``.

In your management command, use it like so::

    LOCKFILE = os.path.join(settings.LOCKFILE_FOLDER, 'command_name')

    class Command(NoArgsCommand):
        @lockfile(LOCKFILE)
        def handle_noargs(self, **options):
            # your command here

:lockfile_name: A unique name for a lockfile that belongs to the wrapped
    method.
:lock_wait_timeout: Seconds to wait if lockfile is acquired. If ``-1`` we
    will not wait and just quit.
def _GetOrderedEntries(data):
    def Tag(field):
        if isinstance(field, string_types):
            return 0, field
        if isinstance(field, int):
            return 1, field
        message = "Unexpected field '{}' of type '{}'".format(field, type(field))
        raise TypeError(message)

    for field in sorted(iterkeys(data), key=Tag):
        yield data[field]
Gets entries of `RDFProtoStruct` in a well-defined order.

Args:
    data: A raw data dictionary of `RDFProtoStruct`.

Yields:
    Entries of the structure in a well-defined order.
def mean_squared_error(pred:Tensor, targ:Tensor)->Rank0Tensor:
    "Mean squared error between `pred` and `targ`."
    pred,targ = flatten_check(pred,targ)
    return F.mse_loss(pred, targ)
Mean squared error between `pred` and `targ`.
def instance_norm(x):
    with tf.variable_scope("instance_norm"):
        epsilon = 1e-5
        mean, var = tf.nn.moments(x, [1, 2], keep_dims=True)
        scale = tf.get_variable(
            "scale", [x.get_shape()[-1]],
            initializer=tf.truncated_normal_initializer(mean=1.0, stddev=0.02))
        offset = tf.get_variable(
            "offset", [x.get_shape()[-1]],
            initializer=tf.constant_initializer(0.0))
        out = scale * tf.div(x - mean, tf.sqrt(var + epsilon)) + offset
        return out
Instance normalization layer.
def funding_info(self, key, value):
    return {
        'agency': value.get('a'),
        'grant_number': value.get('c'),
        'project_number': value.get('f'),
    }
Populate the ``funding_info`` key.
def remove_example(self, data, cloud=None, batch=False, api_key=None,
                   version=None, **kwargs):
    batch = detect_batch(data)
    data = data_preprocess(data, batch=batch)
    url_params = {"batch": batch, "api_key": api_key, "version": version,
                  'method': 'remove_example'}
    return self._api_handler(data, cloud=cloud, api="custom",
                             url_params=url_params, **kwargs)
This is an API made to remove a single instance of training data. This is
useful in cases where a single instance of content has been modified, but
the remaining examples remain valid. For example, if a piece of content
has been retagged.

Inputs
data - String: The exact text you wish to remove from the given
    collection. If the string provided does not match a known piece of
    text then this will fail. Again, this is required if an id is not
    provided, and vice-versa.
api_key (optional) - String: Your API key, required only if the key has
    not been declared elsewhere. This allows the API to recognize a
    request as yours and automatically route it to the appropriate
    destination.
cloud (optional) - String: Your private cloud domain, required only if
    the key has not been declared elsewhere. This allows the API to
    recognize a request as yours and automatically route it to the
    appropriate destination.
def html_encode(text):
    text = text.replace('&', '&amp;')
    text = text.replace('<', '&lt;')
    text = text.replace('>', '&gt;')
    text = text.replace('"', '&quot;')
    return text
Encode characters with a special meaning as HTML. :param text: The plain text (a string). :returns: The text converted to HTML (a string).
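A quick usage example; the ampersand must be replaced first, as above, so the entities just produced are not double-escaped:

print(html_encode('x < y & "z"'))
# x &lt; y &amp; &quot;z&quot;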
def _parse_ppm_segment(self, fptr):
    offset = fptr.tell() - 2
    read_buffer = fptr.read(3)
    length, zppm = struct.unpack('>HB', read_buffer)
    numbytes = length - 3
    read_buffer = fptr.read(numbytes)
    return PPMsegment(zppm, read_buffer, length, offset)
Parse the PPM segment.

Parameters
----------
fptr : file
    Open file object.

Returns
-------
PPMSegment
    The current PPM segment.
def count_courses(self):
    c = 0
    for x in self.tuning:
        if type(x) == list:
            c += len(x)
        else:
            c += 1
    return float(c) / len(self.tuning)
Return the average number of courses per string.
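A standalone sketch of the same computation; the tuning values are hypothetical (a 12-string guitar pairs each of its 6 strings into a 2-note course):

tuning = [['E-2', 'E-3'], ['A-2', 'A-3'], ['D-3', 'D-4'],
          ['G-3', 'G-4'], ['B-3', 'B-3'], ['E-4', 'E-4']]
courses = sum(len(x) if isinstance(x, list) else 1 for x in tuning)
print(courses / len(tuning))  # 2.0 courses per string on average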
def get_catalog_hierarchy_design_session(self, proxy):
    if not self.supports_catalog_hierarchy_design():
        raise errors.Unimplemented()
    return sessions.CatalogHierarchyDesignSession(proxy=proxy,
                                                  runtime=self._runtime)
Gets the catalog hierarchy design session.

arg:    proxy (osid.proxy.Proxy): proxy
return: (osid.cataloging.CatalogHierarchyDesignSession) - a
        ``CatalogHierarchyDesignSession``
raise:  NullArgument - ``proxy`` is null
raise:  OperationFailed - unable to complete request
raise:  Unimplemented - ``supports_catalog_hierarchy_design()`` is
        ``false``
*compliance: optional -- This method must be implemented if
``supports_catalog_hierarchy_design()`` is ``true``.*
def transform(self, X):
    return self.sess.run(self.z_mean, feed_dict={self.x: X})
Transform data by mapping it into the latent space.
def get_logx(nlive, simulate=False):
    assert nlive.min() > 0, (
        'nlive contains zeros or negative values! nlive = ' + str(nlive))
    if simulate:
        logx_steps = np.log(np.random.random(nlive.shape)) / nlive
    else:
        logx_steps = -1 * (nlive.astype(float) ** -1)
    return np.cumsum(logx_steps)
r"""Returns a logx vector showing the expected or simulated logx positions of points. The shrinkage factor between two points .. math:: t_i = X_{i-1} / X_{i} is distributed as the largest of :math:`n_i` uniform random variables between 1 and 0, where :math:`n_i` is the local number of live points. We are interested in .. math:: \log(t_i) = \log X_{i-1} - \log X_{i} which has expected value :math:`-1/n_i`. Parameters ---------- nlive_array: 1d numpy array Ordered local number of live points present at each point's iso-likelihood contour. simulate: bool, optional Should log prior volumes logx be simulated from their distribution (if False their expected values are used). Returns ------- logx: 1d numpy array log X values for points.
def _FlushAllRows(self, db_connection, table_name):
    for sql in db_connection.iterdump():
        if (sql.startswith("CREATE TABLE") or
                sql.startswith("BEGIN TRANSACTION") or
                sql.startswith("COMMIT")):
            continue
        yield self.archive_generator.WriteFileChunk((sql + "\n").encode("utf-8"))
    with db_connection:
        db_connection.cursor().execute("DELETE FROM \"%s\";" % table_name)
Copies rows from the given db into the output file then deletes them.
def refactor(self, items, write=False, doctests_only=False):
    for dir_or_file in items:
        if os.path.isdir(dir_or_file):
            self.refactor_dir(dir_or_file, write, doctests_only)
        else:
            self.refactor_file(dir_or_file, write, doctests_only)
Refactor a list of files and directories.
def reset(all=False, vms=False, switches=False):
    ret = False
    cmd = ['vmctl', 'reset']
    if all:
        cmd.append('all')
    elif vms:
        cmd.append('vms')
    elif switches:
        cmd.append('switches')
    result = __salt__['cmd.run_all'](cmd,
                                     output_loglevel='trace',
                                     python_shell=False)
    if result['retcode'] == 0:
        ret = True
    else:
        raise CommandExecutionError(
            'Problem encountered running vmctl',
            info={'errors': [result['stderr']], 'changes': ret}
        )
    return ret
Reset the running state of VMM or a subsystem.

all:
    Reset the running state.
switches:
    Reset the configured switches.
vms:
    Reset and terminate all VMs.

CLI Example:

.. code-block:: bash

    salt '*' vmctl.reset all=True
def _stream_docker_logs(self):
    thread = threading.Thread(target=self._stderr_stream_worker)
    thread.start()
    for line in self.docker_client.logs(self.container, stdout=True,
                                        stderr=False, stream=True):
        sys.stdout.write(line)
    thread.join()
Stream stdout and stderr from the task container to this process's stdout and stderr, respectively.
def exists(self):
    path = self.path
    if '*' in path or '?' in path or '[' in path or '{' in path:
        logger.warning("Using wildcards in path %s might lead to processing "
                       "of an incomplete dataset; override exists() to "
                       "suppress the warning.", path)
    return self.fs.exists(path)
Returns ``True`` if the path for this FileSystemTarget exists; ``False`` otherwise. This method is implemented by using :py:attr:`fs`.
def run_query(db, query):
    # dict.keys() is not indexable on Python 3; take the first key instead
    if db in [next(iter(x)) for x in show_dbs()]:
        conn = _connect(show_dbs(db)[db]['uri'])
    else:
        log.debug('No uri found in pillars - will try to use oratab')
        conn = _connect(uri=db)
    return conn.cursor().execute(query).fetchall()
Run SQL query and return result.

CLI Example:

.. code-block:: bash

    salt '*' oracle.run_query my_db "select * from my_table"
def strictly_positive_int_or_none(val):
    val = positive_int_or_none(val)
    if val is None or val > 0:
        return val
    raise ValueError('"{}" must be strictly positive'.format(val))
Parse `val` into either `None` or a strictly positive integer.
def _get_top_file_envs():
    try:
        return __context__['saltutil._top_file_envs']
    except KeyError:
        try:
            st_ = salt.state.HighState(__opts__,
                                       initial_pillar=__pillar__)
            top = st_.get_top()
            if top:
                envs = list(st_.top_matches(top).keys()) or 'base'
            else:
                envs = 'base'
        except SaltRenderError as exc:
            raise CommandExecutionError(
                'Unable to render top file(s): {0}'.format(exc)
            )
        __context__['saltutil._top_file_envs'] = envs
        return envs
Get all environments from the top file
def time_reached(self, current_time, scheduled_call):
    if current_time >= scheduled_call['ts']:
        scheduled_call['callback'](scheduled_call['args'])
        return True
    else:
        return False
Checks to see if it's time to run a scheduled call or not. If it IS time
to run a scheduled call, this function will execute the method associated
with that call.

Args:
    current_time (float): Current timestamp from time.time().
    scheduled_call (dict): A scheduled call dictionary that contains the
        timestamp to execute the call, the method to execute, and the
        arguments used to call the method.

Returns:
    bool: True if the scheduled call was run, False otherwise.

Examples:
    >>> scheduled_call
    {'callback': <function foo at 0x7f022c42cf50>,
     'args': {'k': 'v'},
     'ts': 1415066599.769509}
def determine_band_channel(kal_out):
    band = ""
    channel = ""
    tgt_freq = ""
    while band == "":
        for line in kal_out.splitlines():
            if "Using " in line and " channel " in line:
                band = str(line.split()[1])
                channel = str(line.split()[3])
                tgt_freq = str(line.split()[4]).replace(
                    "(", "").replace(")", "")
        if band == "":
            band = None
    return (band, channel, tgt_freq)
Return band, channel, target frequency from kal output.
def __marshal_matches(matched):
    json_matches = []
    for m in matched:
        identities = [i.uuid for i in m]
        if len(identities) == 1:
            continue
        json_match = {
            'identities': identities,
            'processed': False
        }
        json_matches.append(json_match)
    return json_matches
Convert matches to JSON format.

:param matched: a list of matched identities
:returns json_matches: a list of matches in JSON format
def _extract_delta(expr, idx):
    from qnet.algebra.core.abstract_quantum_algebra import QuantumExpression
    from qnet.algebra.core.scalar_algebra import ScalarValue
    sympy_factor, quantum_factor = _split_sympy_quantum_factor(expr)
    delta, new_expr = _sympy_extract_delta(sympy_factor, idx)
    if delta is None:
        new_expr = expr
    else:
        new_expr = new_expr * quantum_factor
    if isinstance(new_expr, ScalarValue._val_types):
        new_expr = ScalarValue.create(new_expr)
    assert isinstance(new_expr, QuantumExpression)
    return delta, new_expr
Extract a "simple" Kronecker delta containing `idx` from `expr`.

Assuming `expr` can be written as the product of a Kronecker delta and a
`new_expr`, return a tuple of the sympy.KroneckerDelta instance and
`new_expr`. Otherwise, return a tuple of None and the original `expr`
(possibly converted to a :class:`.QuantumExpression`).

On input, `expr` can be a :class:`QuantumExpression` or a
:class:`sympy.Basic` object. On output, `new_expr` is guaranteed to be a
:class:`QuantumExpression`.
def run_cleanup(build_ext_cmd):
    if not build_ext_cmd.inplace:
        return
    bezier_dir = os.path.join("src", "bezier")
    shutil.move(os.path.join(build_ext_cmd.build_lib, LIB_DIR), bezier_dir)
    shutil.move(os.path.join(build_ext_cmd.build_lib, DLL_DIR), bezier_dir)
Cleanup after ``BuildFortranThenExt.run``. For in-place builds, moves the built shared library into the source directory.
def connect(self):
    if not (self.consumer_key and self.consumer_secret and
            self.access_token and self.access_token_secret):
        raise RuntimeError("MissingKeys")

    if self.client:
        log.info("closing existing http session")
        self.client.close()
    if self.last_response:
        log.info("closing last response")
        self.last_response.close()

    log.info("creating http session")
    self.client = OAuth1Session(
        client_key=self.consumer_key,
        client_secret=self.consumer_secret,
        resource_owner_key=self.access_token,
        resource_owner_secret=self.access_token_secret
    )
Sets up the HTTP session to talk to Twitter. If one is active it is closed and another one is opened.
def rm_gos(self, rm_goids):
    self.edges = self._rm_gos_edges(rm_goids, self.edges)
    self.edges_rel = self._rm_gos_edges_rel(rm_goids, self.edges_rel)
Remove any edges that contain the user-specified GO IDs.
def replace(self, old, new):
    if old.type != new.type:
        raise TypeError("new instruction has a different type")
    pos = self.instructions.index(old)
    self.instructions.remove(old)
    self.instructions.insert(pos, new)
    for bb in self.parent.basic_blocks:
        for instr in bb.instructions:
            instr.replace_usage(old, new)
Replace an instruction
def get(self, obj, id, sub_object=None):
    self.url = '{}{}/{}'.format(self.base_url, obj, id)
    self.method = 'GET'
    if sub_object:
        self.url += '/' + sub_object
    self.resp = requests.get(url=self.url, auth=self.auth,
                             headers=self.headers, cert=self.ca_cert)
    if self.__process_resp__(obj):
        return self.res
    return False
Function get

Get an object by id.

@param obj: object name ('hosts', 'puppetclasses'...)
@param id: the id of the object (name or id)
@return RETURN: the targeted object
def concretize_write_addr(self, addr, strategies=None):
    if isinstance(addr, int):
        return [addr]
    elif not self.state.solver.symbolic(addr):
        return [self.state.solver.eval(addr)]

    strategies = self.write_strategies if strategies is None else strategies
    return self._apply_concretization_strategies(addr, strategies, 'store')
Concretizes an address meant for writing.

:param addr: An expression for the address.
:param strategies: A list of concretization strategies (to override the
    default).
:returns: A list of concrete addresses.
def run(self, cmd, fn=None, globals=None, locals=None):
    if globals is None:
        import __main__
        globals = __main__.__dict__
    if locals is None:
        locals = globals
    self.reset()
    if isinstance(cmd, str):
        str_cmd = cmd
        cmd = compile(str_cmd, fn or "<wdb>", "exec")
        self.compile_cache[id(cmd)] = str_cmd
    lno = None  # stays None when no file is given, so no breakpoint is set
    if fn:
        from linecache import getline
        lno = 1
        while True:
            line = getline(fn, lno, globals)
            if not line:  # getline returns '' past end of file, never None
                lno = None
                break
            if executable_line(line):
                break
            lno += 1
    self.start_trace()
    if lno is not None:
        self.breakpoints.add(LineBreakpoint(fn, lno, temporary=True))
    try:
        execute(cmd, globals, locals)
    finally:
        self.stop_trace()
Run the cmd `cmd` with trace
def add(self, filetype, **kwargs):
    location = self.location(filetype, **kwargs)
    source = self.url(filetype, sasdir='sas' if not self.public else '',
                      **kwargs)
    if 'full' not in kwargs:
        destination = self.full(filetype, **kwargs)
    else:
        destination = kwargs.get('full')
    if location and source and destination:
        self.initial_stream.append_task(location=location, source=source,
                                        destination=destination)
    else:
        print("There is no file with filetype=%r to access in the tree "
              "module loaded" % filetype)
Adds a filepath into the list of tasks to download
def confusion_matrix(expected: np.ndarray, predicted: np.ndarray,
                     num_classes: int) -> np.ndarray:
    assert np.issubclass_(expected.dtype.type, np.integer), \
        "Classes' indices must be integers"
    assert np.issubclass_(predicted.dtype.type, np.integer), \
        "Classes' indices must be integers"
    assert expected.shape == predicted.shape, \
        "Predicted and expected data must be the same length"
    assert num_classes > np.max([predicted, expected]), \
        "Number of classes must be at least the number of indices in " \
        "predicted/expected data"
    assert np.min([predicted, expected]) >= 0, \
        "Classes' indices must be positive integers"
    cm_abs = np.zeros((num_classes, num_classes), dtype=np.int32)
    for pred, exp in zip(predicted, expected):
        cm_abs[exp, pred] += 1
    return cm_abs
Calculate and return confusion matrix for the predicted and expected
labels.

:param expected: array of expected classes (integers) with shape
    `[num_of_data]`
:param predicted: array of predicted classes (integers) with shape
    `[num_of_data]`
:param num_classes: number of classification classes
:return: confusion matrix (cm) with absolute values
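A small usage example; rows index the expected class and columns the predicted class:

import numpy as np

expected = np.array([0, 1, 2, 1])
predicted = np.array([0, 2, 2, 1])
print(confusion_matrix(expected, predicted, num_classes=3))
# [[1 0 0]
#  [0 1 1]
#  [0 0 1]]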
def principal_inertia_transform(self):
    order = np.argsort(self.principal_inertia_components)[1:][::-1]
    vectors = self.principal_inertia_vectors[order]
    vectors = np.vstack((vectors, np.cross(*vectors)))

    transform = np.eye(4)
    transform[:3, :3] = vectors
    transform = transformations.transform_around(
        matrix=transform,
        point=self.centroid)
    transform[:3, 3] -= self.centroid

    return transform
A transform which moves the current mesh so the principal inertia
vectors are on the X, Y, and Z axes, and the centroid is at the origin.

Returns
----------
transform : (4, 4) float
    Homogeneous transformation matrix
def delete_field_value(self, name):
    name = self.get_real_name(name)
    if name and self._can_write_field(name):
        if name in self.__modified_data__:
            self.__modified_data__.pop(name)
        if name in self.__original_data__ and name not in self.__deleted_fields__:
            self.__deleted_fields__.append(name)
Mark this field to be deleted
def delete(self, identity_id, service):
    return self.request.delete(str(identity_id) + '/limit/' + service)
Delete the limit for the given identity and service.

:param identity_id: The ID of the identity to retrieve
:param service: The service that the token is linked to
:return: dict of REST API output with headers attached
:rtype: :class:`~datasift.request.DictResponse`
:raises: :class:`~datasift.exceptions.DataSiftApiException`,
    :class:`requests.exceptions.HTTPError`
def _convert_name(self, name):
    if re.search(r'^\d+$', name):
        if len(name) > 1 and name[0] == '0':
            return name
        return int(name)
    return name
Convert ``name`` to int if it looks like an int. Otherwise, return it as is.
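Behavior at a glance, restated as a standalone function for illustration (the method above additionally takes self):

import re

def convert_name(name):  # standalone restatement of _convert_name
    if re.search(r'^\d+$', name):
        if len(name) > 1 and name[0] == '0':
            return name  # keep leading-zero names like '007' as strings
        return int(name)
    return name

print(convert_name('42'), convert_name('007'), convert_name('key'))
# 42 007 key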
def update_view_state(self, view, state):
    if view.name not in self:
        self[view.name] = Bunch()
    self[view.name].update(state)
Update the state of a view.
def resample_dset(dset, template, prefix=None, resam='NN'):
    if prefix is None:
        prefix = nl.suffix(dset, '_resam')
    nl.run(['3dresample', '-master', template, '-rmode', resam,
            '-prefix', prefix, '-inset', dset])
Resamples ``dset`` to the grid of ``template`` using resampling mode
``resam``. Default prefix is to suffix ``_resam`` at the end of ``dset``.

Available resampling modes:

:NN: Nearest Neighbor
:Li: Linear
:Cu: Cubic
:Bk: Blocky
def _get_ui_content(self, cli, width, height):
    def get_content():
        return self.content.create_content(cli, width=width, height=height)

    key = (cli.render_counter, width, height)
    return self._ui_content_cache.get(key, get_content)
Create a `UIContent` instance.
def nb_to_python(nb_path):
    exporter = python.PythonExporter()
    output, resources = exporter.from_filename(nb_path)
    return output
convert notebook to python script
def check_recommended_files(data, vcs):
    main_files = os.listdir(data['workingdir'])
    if 'setup.py' not in main_files and 'setup.cfg' not in main_files:
        return True
    if 'MANIFEST.in' not in main_files and 'MANIFEST' not in main_files:
        q = ("This package is missing a MANIFEST.in file. This file is "
             "recommended. "
             "See http://docs.python.org/distutils/sourcedist.html"
             " for more info. Sample contents:\n"
             "\n"
             "recursive-include main_directory *\n"
             "recursive-include docs *\n"
             "include *\n"
             "global-exclude *.pyc\n"
             "\n"
             "You may want to quit and fix this.\n")
        if not vcs.is_setuptools_helper_package_installed():
            q += "Installing %s may help too.\n" % \
                vcs.setuptools_helper_package
        q += "Do you want to continue with the release?"
        if not ask(q, default=False):
            return False
        print(q)
    return True
Do check for recommended files. Returns True when all is fine.
def type(self, name: str):
    for f in self.body:
        if (hasattr(f, '_ctype')
                and f._ctype._storage == Storages.TYPEDEF
                and f._name == name):
            return f
Return the first complete definition of type ``name``.
def nCr(n, r):
    f = math.factorial
    # integer division: true division via floats loses precision once the
    # factorials exceed 2**53
    return f(n) // f(r) // f(n - r)
Calculates nCr.

Args:
    n (int): total number of items.
    r (int): items to choose

Returns:
    nCr.
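Quick examples; note that on Python 3.8+ the standard library's math.comb computes the same thing:

print(nCr(5, 2))   # 10
print(nCr(52, 5))  # 2598960 five-card poker hands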
def __log_overview_errors(self):
    if self.error_file_names:
        self._io.warning('Routines in the files below are not loaded:')
        self._io.listing(sorted(self.error_file_names))
Show info about source files of stored routines that were not loaded
successfully.
def discard(samples, chains):
    samples = np.asarray(samples)
    if samples.ndim != 2:
        raise ValueError("expected samples to be a numpy 2D array")

    num_samples, num_variables = samples.shape
    num_chains = len(chains)

    broken = broken_chains(samples, chains)
    unbroken_idxs, = np.where(~broken.any(axis=1))

    chain_variables = np.fromiter(
        (np.asarray(tuple(chain))[0] if isinstance(chain, set)
         else np.asarray(chain)[0]
         for chain in chains),
        count=num_chains, dtype=int)

    return samples[np.ix_(unbroken_idxs, chain_variables)], unbroken_idxs
Discard broken chains.

Args:
    samples (array_like):
        Samples as a nS x nV array_like object where nS is the number of
        samples and nV is the number of variables. The values should all
        be 0/1 or -1/+1.
    chains (list[array_like]):
        List of chains of length nC where nC is the number of chains.
        Each chain should be an array_like collection of column indices
        in samples.

Returns:
    tuple: A 2-tuple containing:

        :obj:`numpy.ndarray`: An array of unembedded samples. Broken
        chains are discarded. The array has dtype 'int8'.

        :obj:`numpy.ndarray`: The indices of the rows with unbroken
        chains.

Examples:
    This example unembeds two samples that chain nodes 0 and 1 to
    represent a single source node. The first sample has an unbroken
    chain, the second a broken chain.

    >>> import dimod
    >>> import numpy as np
    ...
    >>> chains = [(0, 1), (2,)]
    >>> samples = np.array([[1, 1, 0], [1, 0, 0]], dtype=np.int8)
    >>> unembedded, idx = dwave.embedding.discard(samples, chains)
    >>> unembedded
    array([[1, 0]], dtype=int8)
    >>> idx
    array([0])
def search(self, start_ts, end_ts):
    return self._stream_search(
        index=self.meta_index_name,
        body={"query": {"range": {"_ts": {"gte": start_ts,
                                          "lte": end_ts}}}},
    )
Query Elasticsearch for documents in a time range. This method is used to find documents that may be in conflict during a rollback event in MongoDB.
def list_records_for_build_configuration(id=None, name=None, page_size=200,
                                         page_index=0, sort="", q=""):
    data = list_records_for_build_configuration_raw(id, name, page_size,
                                                    page_index, sort, q)
    if data:
        return utils.format_json_list(data)
List all BuildRecords for a given BuildConfiguration
def get_default_config(self, jid, node=None):
    iq = aioxmpp.stanza.IQ(to=jid, type_=aioxmpp.structs.IQType.GET)
    iq.payload = pubsub_xso.Request(
        pubsub_xso.Default(node=node)
    )
    response = yield from self.client.send(iq)
    return response.payload.data
Request the default configuration of a node.

:param jid: Address of the PubSub service.
:type jid: :class:`aioxmpp.JID`
:param node: Name of the PubSub node to query.
:type node: :class:`str`
:raises aioxmpp.errors.XMPPError: as returned by the service
:return: The default configuration of subscriptions at the node.
:rtype: :class:`~.forms.Data`

On success, the :class:`~.forms.Data` form is returned.

If an error occurs, the corresponding :class:`~.errors.XMPPError` is
raised.
def i18n_javascript(self, request):
    if settings.USE_I18N:
        from django.views.i18n import javascript_catalog
    else:
        from django.views.i18n import null_javascript_catalog as javascript_catalog
    return javascript_catalog(request, packages=['media_tree'])
Displays the i18n JavaScript that the Django admin requires. This takes into account the USE_I18N setting. If it's set to False, the generated JavaScript will be leaner and faster.
def worker_failed():
    participant_id = request.args.get("participant_id")
    if not participant_id:
        return error_response(
            error_type="bad request",
            error_text="participantId parameter is required"
        )
    try:
        _worker_failed(participant_id)
    except KeyError:
        return error_response(
            error_type="ParticipantId not found: {}".format(participant_id)
        )
    return success_response(
        field="status", data="success", request_type="worker failed"
    )
Fail worker. Used by bots only for now.
def collect_hosts(api, wanted_hostnames):
    all_hosts = api.get_all_hosts(view='full')
    all_hostnames = set([h.hostname for h in all_hosts])
    wanted_hostnames = set(wanted_hostnames)

    unknown_hosts = wanted_hostnames.difference(all_hostnames)
    if len(unknown_hosts) != 0:
        msg = "The following hosts are not found in Cloudera Manager. " \
              "Please check for typos:\n%s" % ('\n'.join(unknown_hosts))
        LOG.error(msg)
        raise RuntimeError(msg)

    return [h for h in all_hosts if h.hostname in wanted_hostnames]
Return a list of ApiHost objects for the set of hosts that we want to change config for.
def AskYesNoCancel(message, title='FontParts', default=0, informativeText=""):
    return dispatcher["AskYesNoCancel"](message=message, title=title,
                                        default=default,
                                        informativeText=informativeText)
An ask yes, no or cancel dialog; a `message` is required. Optionally a
`title`, `default` and `informativeText` can be provided. The `default`
option indicates which button is the default button. ::

    from fontParts.ui import AskYesNoCancel
    print(AskYesNoCancel("who are you?"))