Dataset columns (observed value ranges):

  body                     string   length 26 to 98.2k
  body_hash                int64    -9,222,864,604,528,158,000 to 9,221,803,474B
  docstring                string   length 1 to 16.8k
  path                     string   length 5 to 230
  name                     string   length 1 to 96
  repository_name          string   length 7 to 89
  lang                     string   1 distinct value
  body_without_docstring   string   length 20 to 98.2k
@property @pulumi.getter(name='doNotRunExtensionsOnOverprovisionedVMs') def do_not_run_extensions_on_overprovisioned_vms(self) -> Optional[bool]: '\n When Overprovision is enabled, extensions are launched only on the requested number of VMs which are finally kept. This property will hence ensure that the extensions do not run on the extra overprovisioned VMs.\n ' return pulumi.get(self, 'do_not_run_extensions_on_overprovisioned_vms')
-8,349,281,024,196,889,000
When Overprovision is enabled, extensions are launched only on the requested number of VMs which are finally kept. This property will hence ensure that the extensions do not run on the extra overprovisioned VMs.
sdk/python/pulumi_azure_native/compute/get_virtual_machine_scale_set.py
do_not_run_extensions_on_overprovisioned_vms
polivbr/pulumi-azure-native
python
@property @pulumi.getter(name='doNotRunExtensionsOnOverprovisionedVMs') def do_not_run_extensions_on_overprovisioned_vms(self) -> Optional[bool]: '\n \n ' return pulumi.get(self, 'do_not_run_extensions_on_overprovisioned_vms')
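The Pulumi getter records in this file all share one shape: a snake_case Python property decorated with @pulumi.getter that reads a camelCase wire-format field. A minimal self-contained sketch of that pattern, with a plain dict standing in for the Pulumi machinery (the _Resource class and its storage are illustrative assumptions, not Pulumi's API):

```python
# Illustrative stand-in for the pulumi property pattern seen in these records.
# Assumption: values arrive keyed by the Python-side property name, as
# pulumi.get(self, name) would resolve them; here a plain dict plays that role.
class _Resource:
    def __init__(self, values: dict):
        self._values = values

    @property
    def do_not_run_extensions_on_overprovisioned_vms(self):
        # In the real SDK this line is: return pulumi.get(self, '...')
        return self._values.get('do_not_run_extensions_on_overprovisioned_vms')

r = _Resource({'do_not_run_extensions_on_overprovisioned_vms': True})
print(r.do_not_run_extensions_on_overprovisioned_vms)  # True
```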
@property @pulumi.getter(name='extendedLocation') def extended_location(self) -> Optional['outputs.ExtendedLocationResponse']: '\n The extended location of the Virtual Machine Scale Set.\n ' return pulumi.get(self, 'extended_location')
8,015,889,040,813,705,000
The extended location of the Virtual Machine Scale Set.
sdk/python/pulumi_azure_native/compute/get_virtual_machine_scale_set.py
extended_location
polivbr/pulumi-azure-native
python
@property @pulumi.getter(name='extendedLocation') def extended_location(self) -> Optional['outputs.ExtendedLocationResponse']: '\n \n ' return pulumi.get(self, 'extended_location')
@property @pulumi.getter(name='hostGroup') def host_group(self) -> Optional['outputs.SubResourceResponse']: '\n Specifies information about the dedicated host group that the virtual machine scale set resides in. <br><br>Minimum api-version: 2020-06-01.\n ' return pulumi.get(self, 'host_group')
-3,147,833,368,655,111,700
Specifies information about the dedicated host group that the virtual machine scale set resides in. <br><br>Minimum api-version: 2020-06-01.
sdk/python/pulumi_azure_native/compute/get_virtual_machine_scale_set.py
host_group
polivbr/pulumi-azure-native
python
@property @pulumi.getter(name='hostGroup') def host_group(self) -> Optional['outputs.SubResourceResponse']: '\n \n ' return pulumi.get(self, 'host_group')
@property @pulumi.getter def id(self) -> str: '\n Resource Id\n ' return pulumi.get(self, 'id')
-8,273,823,637,222,696,000
Resource Id
sdk/python/pulumi_azure_native/compute/get_virtual_machine_scale_set.py
id
polivbr/pulumi-azure-native
python
@property @pulumi.getter def id(self) -> str: '\n \n ' return pulumi.get(self, 'id')
@property @pulumi.getter def identity(self) -> Optional['outputs.VirtualMachineScaleSetIdentityResponse']: '\n The identity of the virtual machine scale set, if configured.\n ' return pulumi.get(self, 'identity')
-7,253,536,697,471,108,000
The identity of the virtual machine scale set, if configured.
sdk/python/pulumi_azure_native/compute/get_virtual_machine_scale_set.py
identity
polivbr/pulumi-azure-native
python
@property @pulumi.getter def identity(self) -> Optional['outputs.VirtualMachineScaleSetIdentityResponse']: '\n \n ' return pulumi.get(self, 'identity')
@property @pulumi.getter def location(self) -> str: '\n Resource location\n ' return pulumi.get(self, 'location')
-4,515,321,722,015,717,000
Resource location
sdk/python/pulumi_azure_native/compute/get_virtual_machine_scale_set.py
location
polivbr/pulumi-azure-native
python
@property @pulumi.getter def location(self) -> str: '\n \n ' return pulumi.get(self, 'location')
@property @pulumi.getter def name(self) -> str: '\n Resource name\n ' return pulumi.get(self, 'name')
-7,148,411,979,540,616,000
Resource name
sdk/python/pulumi_azure_native/compute/get_virtual_machine_scale_set.py
name
polivbr/pulumi-azure-native
python
@property @pulumi.getter def name(self) -> str: '\n \n ' return pulumi.get(self, 'name')
@property @pulumi.getter(name='orchestrationMode') def orchestration_mode(self) -> Optional[str]: '\n Specifies the orchestration mode for the virtual machine scale set.\n ' return pulumi.get(self, 'orchestration_mode')
-2,508,759,029,933,910,500
Specifies the orchestration mode for the virtual machine scale set.
sdk/python/pulumi_azure_native/compute/get_virtual_machine_scale_set.py
orchestration_mode
polivbr/pulumi-azure-native
python
@property @pulumi.getter(name='orchestrationMode') def orchestration_mode(self) -> Optional[str]: '\n \n ' return pulumi.get(self, 'orchestration_mode')
@property @pulumi.getter def overprovision(self) -> Optional[bool]: '\n Specifies whether the Virtual Machine Scale Set should be overprovisioned.\n ' return pulumi.get(self, 'overprovision')
2,195,097,157,299,520,800
Specifies whether the Virtual Machine Scale Set should be overprovisioned.
sdk/python/pulumi_azure_native/compute/get_virtual_machine_scale_set.py
overprovision
polivbr/pulumi-azure-native
python
@property @pulumi.getter def overprovision(self) -> Optional[bool]: '\n \n ' return pulumi.get(self, 'overprovision')
@property @pulumi.getter def plan(self) -> Optional['outputs.PlanResponse']: '\n Specifies information about the marketplace image used to create the virtual machine. This element is only used for marketplace images. Before you can use a marketplace image from an API, you must enable the image for programmatic use. In the Azure portal, find the marketplace image that you want to use and then click **Want to deploy programmatically, Get Started ->**. Enter any required information and then click **Save**.\n ' return pulumi.get(self, 'plan')
-479,134,546,514,780,160
Specifies information about the marketplace image used to create the virtual machine. This element is only used for marketplace images. Before you can use a marketplace image from an API, you must enable the image for programmatic use. In the Azure portal, find the marketplace image that you want to use and then click **Want to deploy programmatically, Get Started ->**. Enter any required information and then click **Save**.
sdk/python/pulumi_azure_native/compute/get_virtual_machine_scale_set.py
plan
polivbr/pulumi-azure-native
python
@property @pulumi.getter def plan(self) -> Optional['outputs.PlanResponse']: '\n \n ' return pulumi.get(self, 'plan')
@property @pulumi.getter(name='platformFaultDomainCount') def platform_fault_domain_count(self) -> Optional[int]: '\n Fault Domain count for each placement group.\n ' return pulumi.get(self, 'platform_fault_domain_count')
5,235,247,446,457,409,000
Fault Domain count for each placement group.
sdk/python/pulumi_azure_native/compute/get_virtual_machine_scale_set.py
platform_fault_domain_count
polivbr/pulumi-azure-native
python
@property @pulumi.getter(name='platformFaultDomainCount') def platform_fault_domain_count(self) -> Optional[int]: '\n \n ' return pulumi.get(self, 'platform_fault_domain_count')
@property @pulumi.getter(name='provisioningState') def provisioning_state(self) -> str: '\n The provisioning state, which only appears in the response.\n ' return pulumi.get(self, 'provisioning_state')
1,443,967,780,852,809,500
The provisioning state, which only appears in the response.
sdk/python/pulumi_azure_native/compute/get_virtual_machine_scale_set.py
provisioning_state
polivbr/pulumi-azure-native
python
@property @pulumi.getter(name='provisioningState') def provisioning_state(self) -> str: '\n \n ' return pulumi.get(self, 'provisioning_state')
@property @pulumi.getter(name='proximityPlacementGroup') def proximity_placement_group(self) -> Optional['outputs.SubResourceResponse']: '\n Specifies information about the proximity placement group that the virtual machine scale set should be assigned to. <br><br>Minimum api-version: 2018-04-01.\n ' return pulumi.get(self, 'proximity_placement_group')
7,432,910,305,572,225,000
Specifies information about the proximity placement group that the virtual machine scale set should be assigned to. <br><br>Minimum api-version: 2018-04-01.
sdk/python/pulumi_azure_native/compute/get_virtual_machine_scale_set.py
proximity_placement_group
polivbr/pulumi-azure-native
python
@property @pulumi.getter(name='proximityPlacementGroup') def proximity_placement_group(self) -> Optional['outputs.SubResourceResponse']: '\n \n ' return pulumi.get(self, 'proximity_placement_group')
@property @pulumi.getter(name='scaleInPolicy') def scale_in_policy(self) -> Optional['outputs.ScaleInPolicyResponse']: '\n Specifies the scale-in policy that decides which virtual machines are chosen for removal when a Virtual Machine Scale Set is scaled-in.\n ' return pulumi.get(self, 'scale_in_policy')
4,456,056,434,484,911,000
Specifies the scale-in policy that decides which virtual machines are chosen for removal when a Virtual Machine Scale Set is scaled-in.
sdk/python/pulumi_azure_native/compute/get_virtual_machine_scale_set.py
scale_in_policy
polivbr/pulumi-azure-native
python
@property @pulumi.getter(name='scaleInPolicy') def scale_in_policy(self) -> Optional['outputs.ScaleInPolicyResponse']: '\n \n ' return pulumi.get(self, 'scale_in_policy')
@property @pulumi.getter(name='singlePlacementGroup') def single_placement_group(self) -> Optional[bool]: '\n When true this limits the scale set to a single placement group, of max size 100 virtual machines. NOTE: If singlePlacementGroup is true, it may be modified to false. However, if singlePlacementGroup is false, it may not be modified to true.\n ' return pulumi.get(self, 'single_placement_group')
3,963,651,708,174,868,000
When true this limits the scale set to a single placement group, of max size 100 virtual machines. NOTE: If singlePlacementGroup is true, it may be modified to false. However, if singlePlacementGroup is false, it may not be modified to true.
sdk/python/pulumi_azure_native/compute/get_virtual_machine_scale_set.py
single_placement_group
polivbr/pulumi-azure-native
python
@property @pulumi.getter(name='singlePlacementGroup') def single_placement_group(self) -> Optional[bool]: '\n \n ' return pulumi.get(self, 'single_placement_group')
@property @pulumi.getter def sku(self) -> Optional['outputs.SkuResponse']: '\n The virtual machine scale set sku.\n ' return pulumi.get(self, 'sku')
2,548,485,711,490,900,500
The virtual machine scale set sku.
sdk/python/pulumi_azure_native/compute/get_virtual_machine_scale_set.py
sku
polivbr/pulumi-azure-native
python
@property @pulumi.getter def sku(self) -> Optional['outputs.SkuResponse']: '\n \n ' return pulumi.get(self, 'sku')
@property @pulumi.getter def tags(self) -> Optional[Mapping[(str, str)]]: '\n Resource tags\n ' return pulumi.get(self, 'tags')
8,393,960,893,387,821,000
Resource tags
sdk/python/pulumi_azure_native/compute/get_virtual_machine_scale_set.py
tags
polivbr/pulumi-azure-native
python
@property @pulumi.getter def tags(self) -> Optional[Mapping[(str, str)]]: '\n \n ' return pulumi.get(self, 'tags')
@property @pulumi.getter def type(self) -> str: '\n Resource type\n ' return pulumi.get(self, 'type')
-6,187,931,065,480,752,000
Resource type
sdk/python/pulumi_azure_native/compute/get_virtual_machine_scale_set.py
type
polivbr/pulumi-azure-native
python
@property @pulumi.getter def type(self) -> str: '\n \n ' return pulumi.get(self, 'type')
@property @pulumi.getter(name='uniqueId') def unique_id(self) -> str: '\n Specifies the ID which uniquely identifies a Virtual Machine Scale Set.\n ' return pulumi.get(self, 'unique_id')
-1,954,157,736,488,446,200
Specifies the ID which uniquely identifies a Virtual Machine Scale Set.
sdk/python/pulumi_azure_native/compute/get_virtual_machine_scale_set.py
unique_id
polivbr/pulumi-azure-native
python
@property @pulumi.getter(name='uniqueId') def unique_id(self) -> str: '\n \n ' return pulumi.get(self, 'unique_id')
@property @pulumi.getter(name='upgradePolicy') def upgrade_policy(self) -> Optional['outputs.UpgradePolicyResponse']: '\n The upgrade policy.\n ' return pulumi.get(self, 'upgrade_policy')
-6,645,987,763,808,729,000
The upgrade policy.
sdk/python/pulumi_azure_native/compute/get_virtual_machine_scale_set.py
upgrade_policy
polivbr/pulumi-azure-native
python
@property @pulumi.getter(name='upgradePolicy') def upgrade_policy(self) -> Optional['outputs.UpgradePolicyResponse']: '\n \n ' return pulumi.get(self, 'upgrade_policy')
@property @pulumi.getter(name='virtualMachineProfile') def virtual_machine_profile(self) -> Optional['outputs.VirtualMachineScaleSetVMProfileResponse']: '\n The virtual machine profile.\n ' return pulumi.get(self, 'virtual_machine_profile')
-8,157,252,936,889,690,000
The virtual machine profile.
sdk/python/pulumi_azure_native/compute/get_virtual_machine_scale_set.py
virtual_machine_profile
polivbr/pulumi-azure-native
python
@property @pulumi.getter(name='virtualMachineProfile') def virtual_machine_profile(self) -> Optional['outputs.VirtualMachineScaleSetVMProfileResponse']: '\n \n ' return pulumi.get(self, 'virtual_machine_profile')
@property @pulumi.getter(name='zoneBalance') def zone_balance(self) -> Optional[bool]: '\n Whether to force strictly even Virtual Machine distribution across zones in case there is a zone outage.\n ' return pulumi.get(self, 'zone_balance')
4,528,459,920,478,171,000
Whether to force strictly even Virtual Machine distribution across zones in case there is a zone outage.
sdk/python/pulumi_azure_native/compute/get_virtual_machine_scale_set.py
zone_balance
polivbr/pulumi-azure-native
python
@property @pulumi.getter(name='zoneBalance') def zone_balance(self) -> Optional[bool]: '\n \n ' return pulumi.get(self, 'zone_balance')
@property @pulumi.getter def zones(self) -> Optional[Sequence[str]]: '\n The virtual machine scale set zones. NOTE: Availability zones can only be set when you create the scale set\n ' return pulumi.get(self, 'zones')
6,225,819,024,639,591,000
The virtual machine scale set zones. NOTE: Availability zones can only be set when you create the scale set
sdk/python/pulumi_azure_native/compute/get_virtual_machine_scale_set.py
zones
polivbr/pulumi-azure-native
python
@property @pulumi.getter def zones(self) -> Optional[Sequence[str]]: '\n \n ' return pulumi.get(self, 'zones')
@staticmethod def _mocked_response(alpha_2, alpha_3, numeric, continent): 'Builds a mocked response for the patched country_iso_code function.' response = mock.Mock() response.alpha_2 = alpha_2 response.alpha_3 = alpha_3 response.numeric = numeric response.continent = continent return response
3,662,101,305,375,710,700
Builds a mocked response for the patched country_iso_code function.
nesta/packages/geo_utils/tests/test_geotools.py
_mocked_response
anniyanvr/nesta
python
@staticmethod def _mocked_response(alpha_2, alpha_3, numeric, continent): response = mock.Mock() response.alpha_2 = alpha_2 response.alpha_3 = alpha_3 response.numeric = numeric response.continent = continent return response
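_mocked_response builds a mock.Mock and pins four attributes on it. Since Mock stores assigned attributes verbatim, the helper amounts to the following standalone snippet (the ISO-code values here are made-up examples):

```python
from unittest import mock

response = mock.Mock()
response.alpha_2, response.alpha_3 = 'GB', 'GBR'    # example values only
response.numeric, response.continent = '826', 'Europe'
print(response.alpha_3)  # 'GBR' -- Mock returns plain attributes as assigned
```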
def test_collect_distribution(self): '\n Test that emails are collected properly.\n ' test_emails = self.disti.collect_email_addresses() self.assertEqual(len(test_emails), 2) self.assertSetEqual(self.all_emails, set(test_emails))
5,870,409,709,118,060,000
Test that emails are collected properly.
impression/tests/test_distribution.py
test_collect_distribution
gregschmit/django-impression
python
def test_collect_distribution(self): '\n \n ' test_emails = self.disti.collect_email_addresses() self.assertEqual(len(test_emails), 2) self.assertSetEqual(self.all_emails, set(test_emails))
def test_collect_distribution_with_duplicates(self): '\n Test a distribution with duplicates to ensure it only collects each email\n once.\n ' test_emails = self.dupe_disti.collect_email_addresses() self.assertEqual(len(test_emails), 2) self.assertSetEqual(self.all_emails, set(test_emails))
7,588,306,209,354,281,000
Test a distribution with duplicates to ensure it only collects each email once.
impression/tests/test_distribution.py
test_collect_distribution_with_duplicates
gregschmit/django-impression
python
def test_collect_distribution_with_duplicates(self): '\n Test a distribution with duplicates to ensure it only collects each email\n once.\n ' test_emails = self.dupe_disti.collect_email_addresses() self.assertEqual(len(test_emails), 2) self.assertSetEqual(self.all_emails, set(test_emails))
def test_collect_distribution_with_self_references(self): '\n Test a distribution with self references to ensure it only collects each\n email once, and without looping infinitely.\n ' test_emails = self.self_disti.collect_email_addresses() self.assertEqual(len(test_emails), 1) self.assertSetEqual(set([self.test1]), set(test_emails))
7,214,400,685,393,205,000
Test a distribution with self references to ensure it only collects each email once, and without looping infinitely.
impression/tests/test_distribution.py
test_collect_distribution_with_self_references
gregschmit/django-impression
python
def test_collect_distribution_with_self_references(self): '\n Test a distribution with self references to ensure it only collects each\n email once, and without looping infinitely.\n ' test_emails = self.self_disti.collect_email_addresses() self.assertEqual(len(test_emails), 1) self.assertSetEqual(set([self.test1]), set(test_emails))
def test_collect_distribution_with_cyclic_references(self): '\n Test that a distribution with cyclic references only collects each email once,\n and without looping infinitely.\n ' test_emails = self.cyclic_disti1.collect_email_addresses() self.assertEqual(len(test_emails), 2) self.assertSetEqual(self.all_emails, set(test_emails)) test_emails = self.cyclic_disti2.collect_email_addresses() self.assertEqual(len(test_emails), 2) self.assertSetEqual(self.all_emails, set(test_emails))
-2,505,068,576,209,851,000
Test that a distribution with cyclic references only collects each email once, and without looping infinitely.
impression/tests/test_distribution.py
test_collect_distribution_with_cyclic_references
gregschmit/django-impression
python
def test_collect_distribution_with_cyclic_references(self): '\n Test that a distribution with cyclic references only collects each email once,\n and without looping infinitely.\n ' test_emails = self.cyclic_disti1.collect_email_addresses() self.assertEqual(len(test_emails), 2) self.assertSetEqual(self.all_emails, set(test_emails)) test_emails = self.cyclic_disti2.collect_email_addresses() self.assertEqual(len(test_emails), 2) self.assertSetEqual(self.all_emails, set(test_emails))
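The three tests above exercise duplicate, self-referential, and cyclic distribution lists. A hypothetical sketch of the traversal they imply — not django-impression's actual implementation — keeps a visited set so each list is expanded once and each address is collected once:

```python
class Dist:
    # Minimal stand-in for a distribution list; `addresses` and `children`
    # are assumed attributes, not the library's real model fields.
    def __init__(self, addresses=(), children=()):
        self.addresses = list(addresses)
        self.children = list(children)

def collect_email_addresses(dist, _seen=None):
    seen = _seen if _seen is not None else set()
    emails = []
    if id(dist) in seen:          # already visited: break self/cyclic loops
        return emails
    seen.add(id(dist))
    for addr in dist.addresses:   # direct addresses, deduplicated
        if addr not in emails:
            emails.append(addr)
    for sub in dist.children:     # nested distributions, visited at most once
        for addr in collect_email_addresses(sub, _seen=seen):
            if addr not in emails:
                emails.append(addr)
    return emails

a = Dist(['test1@example.org'])
a.children.append(a)               # self reference, as in the test above
print(collect_email_addresses(a))  # ['test1@example.org']
```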
def get_success(self, obj): '\n Return ``None`` if the build is not finished.\n\n This is needed because ``default=True`` in the model field.\n ' if obj.finished: return obj.success return None
-4,690,954,269,013,599,000
Return ``None`` if the build is not finished. This is needed because ``default=True`` in the model field.
readthedocs/api/v3/serializers.py
get_success
Dithn/readthedocs.org
python
def get_success(self, obj): '\n Return ``None`` if the build is not finished.\n\n This is needed because ``default=True`` in the model field.\n ' if obj.finished: return obj.success return None
def plot_elpd(ax, models, pointwise_data, numvars, figsize, textsize, plot_kwargs, markersize, xlabels, coord_labels, xdata, threshold, backend_kwargs, show): 'Bokeh elpd plot.' if (backend_kwargs is None): backend_kwargs = {} backend_kwargs = {**backend_kwarg_defaults(('dpi', 'plot.bokeh.figure.dpi')), **backend_kwargs} dpi = backend_kwargs.pop('dpi') if (numvars == 2): (figsize, _, _, _, _, markersize) = _scale_fig_size(figsize, textsize, (numvars - 1), (numvars - 1)) plot_kwargs.setdefault('s', markersize) if (ax is None): backend_kwargs.setdefault('width', int((figsize[0] * dpi))) backend_kwargs.setdefault('height', int((figsize[1] * dpi))) ax = bkp.figure(**backend_kwargs) ydata = (pointwise_data[0] - pointwise_data[1]) _plot_atomic_elpd(ax, xdata, ydata, *models, threshold, coord_labels, xlabels, True, True, plot_kwargs) show_layout(ax, show) else: max_plots = ((numvars ** 2) if (rcParams['plot.max_subplots'] is None) else rcParams['plot.max_subplots']) vars_to_plot = np.sum((np.arange(numvars).cumsum() < max_plots)) if (vars_to_plot < numvars): warnings.warn("rcParams['plot.max_subplots'] ({max_plots}) is smaller than the number of resulting ELPD pairwise plots with these variables, generating only a {side}x{side} grid".format(max_plots=max_plots, side=vars_to_plot), UserWarning) numvars = vars_to_plot (figsize, _, _, _, _, markersize) = _scale_fig_size(figsize, textsize, (numvars - 2), (numvars - 2)) plot_kwargs.setdefault('s', markersize) if (ax is None): ax = [] for row in range((numvars - 1)): ax_row = [] for col in range((numvars - 1)): if ((row == 0) and (col == 0)): ax_first = bkp.figure(width=int(((figsize[0] / (numvars - 1)) * dpi)), height=int(((figsize[1] / (numvars - 1)) * dpi)), **backend_kwargs) ax_row.append(ax_first) elif (row < col): ax_row.append(None) else: ax_row.append(bkp.figure(width=int(((figsize[0] / (numvars - 1)) * dpi)), height=int(((figsize[1] / (numvars - 1)) * dpi)), x_range=ax_first.x_range, y_range=ax_first.y_range, **backend_kwargs)) ax.append(ax_row) ax = np.array(ax) for i in range(0, (numvars - 1)): var1 = pointwise_data[i] for j in range(0, (numvars - 1)): if (j < i): continue var2 = pointwise_data[(j + 1)] ydata = (var1 - var2) _plot_atomic_elpd(ax[(j, i)], xdata, ydata, models[i], models[(j + 1)], threshold, coord_labels, xlabels, (j == (numvars - 2)), (i == 0), plot_kwargs) show_layout(ax, show) return ax
5,456,801,688,987,929,000
Bokeh elpd plot.
arviz/plots/backends/bokeh/elpdplot.py
plot_elpd
Brahanyaa98/arviz
python
def plot_elpd(ax, models, pointwise_data, numvars, figsize, textsize, plot_kwargs, markersize, xlabels, coord_labels, xdata, threshold, backend_kwargs, show): if (backend_kwargs is None): backend_kwargs = {} backend_kwargs = {**backend_kwarg_defaults(('dpi', 'plot.bokeh.figure.dpi')), **backend_kwargs} dpi = backend_kwargs.pop('dpi') if (numvars == 2): (figsize, _, _, _, _, markersize) = _scale_fig_size(figsize, textsize, (numvars - 1), (numvars - 1)) plot_kwargs.setdefault('s', markersize) if (ax is None): backend_kwargs.setdefault('width', int((figsize[0] * dpi))) backend_kwargs.setdefault('height', int((figsize[1] * dpi))) ax = bkp.figure(**backend_kwargs) ydata = (pointwise_data[0] - pointwise_data[1]) _plot_atomic_elpd(ax, xdata, ydata, *models, threshold, coord_labels, xlabels, True, True, plot_kwargs) show_layout(ax, show) else: max_plots = ((numvars ** 2) if (rcParams['plot.max_subplots'] is None) else rcParams['plot.max_subplots']) vars_to_plot = np.sum((np.arange(numvars).cumsum() < max_plots)) if (vars_to_plot < numvars): warnings.warn("rcParams['plot.max_subplots'] ({max_plots}) is smaller than the number of resulting ELPD pairwise plots with these variables, generating only a {side}x{side} grid".format(max_plots=max_plots, side=vars_to_plot), UserWarning) numvars = vars_to_plot (figsize, _, _, _, _, markersize) = _scale_fig_size(figsize, textsize, (numvars - 2), (numvars - 2)) plot_kwargs.setdefault('s', markersize) if (ax is None): ax = [] for row in range((numvars - 1)): ax_row = [] for col in range((numvars - 1)): if ((row == 0) and (col == 0)): ax_first = bkp.figure(width=int(((figsize[0] / (numvars - 1)) * dpi)), height=int(((figsize[1] / (numvars - 1)) * dpi)), **backend_kwargs) ax_row.append(ax_first) elif (row < col): ax_row.append(None) else: ax_row.append(bkp.figure(width=int(((figsize[0] / (numvars - 1)) * dpi)), height=int(((figsize[1] / (numvars - 1)) * dpi)), x_range=ax_first.x_range, y_range=ax_first.y_range, **backend_kwargs)) ax.append(ax_row) ax = np.array(ax) for i in range(0, (numvars - 1)): var1 = pointwise_data[i] for j in range(0, (numvars - 1)): if (j < i): continue var2 = pointwise_data[(j + 1)] ydata = (var1 - var2) _plot_atomic_elpd(ax[(j, i)], xdata, ydata, models[i], models[(j + 1)], threshold, coord_labels, xlabels, (j == (numvars - 2)), (i == 0), plot_kwargs) show_layout(ax, show) return ax
def EVBMF(Y, sigma2=None, H=None): 'Implementation of the analytical solution to Empirical Variational\n Bayes Matrix Factorization.\n\n This function can be used to calculate the analytical solution to\n empirical VBMF.\n This is based on the paper and MatLab code by Nakajima et al.:\n "Global analytic solution of fully-observed variational Bayesian matrix\n factorization."\n\n Notes\n -----\n If sigma2 is unspecified, it is estimated by minimizing the free\n energy.\n If H is unspecified, it is set to the smallest of the sides of the\n input Y.\n\n Attributes\n ----------\n Y : numpy-array\n Input matrix that is to be factorized. Y has shape (L,M), where L<=M.\n\n sigma2 : int or None (default=None)\n Variance of the noise on Y.\n\n H : int or None (default = None)\n Maximum rank of the factorized matrices.\n\n Returns\n -------\n U : numpy-array\n Left-singular vectors.\n\n S : numpy-array\n Diagonal matrix of singular values.\n\n V : numpy-array\n Right-singular vectors.\n\n post : dictionary\n Dictionary containing the computed posterior values.\n\n\n References\n ----------\n .. [1] Nakajima, Shinichi, et al. "Global analytic solution of\n fully-observed variational Bayesian matrix factorization." Journal of\n Machine Learning Research 14.Jan (2013): 1-37.\n\n .. [2] Nakajima, Shinichi, et al. "Perfect dimensionality recovery by\n variational Bayesian PCA." Advances in Neural Information Processing\n Systems. 2012.\n ' (L, M) = Y.shape if (H is None): H = L alpha = (L / M) tauubar = (2.5129 * np.sqrt(alpha)) (U, s, V) = torch.svd(Y) U = U[:, :H] s = s[:H] V = V[:H].T residual = 0.0 if (H < L): residual = torch.sum((np.sum((Y ** 2)) - np.sum((s ** 2)))) if (sigma2 is None): xubar = ((1 + tauubar) * (1 + (alpha / tauubar))) eH_ub = (int(np.min([(np.ceil((L / (1 + alpha))) - 1), H])) - 1) upper_bound = ((torch.sum((s ** 2)) + residual) / (L * M)) lower_bound = torch.max(torch.stack([((s[(eH_ub + 1)] ** 2) / (M * xubar)), (torch.mean((s[(eH_ub + 1):] ** 2)) / M)], dim=0)) scale = 1.0 s = (s * np.sqrt(scale)) residual = (residual * scale) lower_bound = (lower_bound * scale) upper_bound = (upper_bound * scale) sigma2_opt = minimize_scalar(EVBsigma2, args=(L, M, s.cpu().numpy(), residual, xubar), bounds=[lower_bound.cpu().numpy(), upper_bound.cpu().numpy()], method='Bounded') sigma2 = sigma2_opt.x threshold = np.sqrt((((M * sigma2) * (1 + tauubar)) * (1 + (alpha / tauubar)))) pos = torch.sum((s > threshold)) d = ((s[:pos] / 2) * ((1 - (((L + M) * sigma2) / (s[:pos] ** 2))) + torch.sqrt((((1 - (((L + M) * sigma2) / (s[:pos] ** 2))) ** 2) - ((((4 * L) * M) * (sigma2 ** 2)) / (s[:pos] ** 4)))))) return (U[:, :pos], torch.diag(d), V[:, :pos])
4,002,221,810,944,570,400
Implementation of the analytical solution to Empirical Variational Bayes Matrix Factorization. This function can be used to calculate the analytical solution to empirical VBMF. This is based on the paper and MatLab code by Nakajima et al.: "Global analytic solution of fully-observed variational Bayesian matrix factorization." Notes ----- If sigma2 is unspecified, it is estimated by minimizing the free energy. If H is unspecified, it is set to the smallest of the sides of the input Y. Attributes ---------- Y : numpy-array Input matrix that is to be factorized. Y has shape (L,M), where L<=M. sigma2 : int or None (default=None) Variance of the noise on Y. H : int or None (default = None) Maximum rank of the factorized matrices. Returns ------- U : numpy-array Left-singular vectors. S : numpy-array Diagonal matrix of singular values. V : numpy-array Right-singular vectors. post : dictionary Dictionary containing the computed posterior values. References ---------- .. [1] Nakajima, Shinichi, et al. "Global analytic solution of fully-observed variational Bayesian matrix factorization." Journal of Machine Learning Research 14.Jan (2013): 1-37. .. [2] Nakajima, Shinichi, et al. "Perfect dimensionality recovery by variational Bayesian PCA." Advances in Neural Information Processing Systems. 2012.
src/transformers/adas.py
EVBMF
MathieuTuli/transformers
python
def EVBMF(Y, sigma2=None, H=None): 'Implementation of the analytical solution to Empirical Variational\n Bayes Matrix Factorization.\n\n This function can be used to calculate the analytical solution to\n empirical VBMF.\n This is based on the paper and MatLab code by Nakajima et al.:\n "Global analytic solution of fully-observed variational Bayesian matrix\n factorization."\n\n Notes\n -----\n If sigma2 is unspecified, it is estimated by minimizing the free\n energy.\n If H is unspecified, it is set to the smallest of the sides of the\n input Y.\n\n Attributes\n ----------\n Y : numpy-array\n Input matrix that is to be factorized. Y has shape (L,M), where L<=M.\n\n sigma2 : int or None (default=None)\n Variance of the noise on Y.\n\n H : int or None (default = None)\n Maximum rank of the factorized matrices.\n\n Returns\n -------\n U : numpy-array\n Left-singular vectors.\n\n S : numpy-array\n Diagonal matrix of singular values.\n\n V : numpy-array\n Right-singular vectors.\n\n post : dictionary\n Dictionary containing the computed posterior values.\n\n\n References\n ----------\n .. [1] Nakajima, Shinichi, et al. "Global analytic solution of\n fully-observed variational Bayesian matrix factorization." Journal of\n Machine Learning Research 14.Jan (2013): 1-37.\n\n .. [2] Nakajima, Shinichi, et al. "Perfect dimensionality recovery by\n variational Bayesian PCA." Advances in Neural Information Processing\n Systems. 2012.\n ' (L, M) = Y.shape if (H is None): H = L alpha = (L / M) tauubar = (2.5129 * np.sqrt(alpha)) (U, s, V) = torch.svd(Y) U = U[:, :H] s = s[:H] V = V[:H].T residual = 0.0 if (H < L): residual = torch.sum((np.sum((Y ** 2)) - np.sum((s ** 2)))) if (sigma2 is None): xubar = ((1 + tauubar) * (1 + (alpha / tauubar))) eH_ub = (int(np.min([(np.ceil((L / (1 + alpha))) - 1), H])) - 1) upper_bound = ((torch.sum((s ** 2)) + residual) / (L * M)) lower_bound = torch.max(torch.stack([((s[(eH_ub + 1)] ** 2) / (M * xubar)), (torch.mean((s[(eH_ub + 1):] ** 2)) / M)], dim=0)) scale = 1.0 s = (s * np.sqrt(scale)) residual = (residual * scale) lower_bound = (lower_bound * scale) upper_bound = (upper_bound * scale) sigma2_opt = minimize_scalar(EVBsigma2, args=(L, M, s.cpu().numpy(), residual, xubar), bounds=[lower_bound.cpu().numpy(), upper_bound.cpu().numpy()], method='Bounded') sigma2 = sigma2_opt.x threshold = np.sqrt((((M * sigma2) * (1 + tauubar)) * (1 + (alpha / tauubar)))) pos = torch.sum((s > threshold)) d = ((s[:pos] / 2) * ((1 - (((L + M) * sigma2) / (s[:pos] ** 2))) + torch.sqrt((((1 - (((L + M) * sigma2) / (s[:pos] ** 2))) ** 2) - ((((4 * L) * M) * (sigma2 ** 2)) / (s[:pos] ** 4)))))) return (U[:, :pos], torch.diag(d), V[:, :pos])
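A hedged usage sketch for EVBMF, assuming the module's own imports (numpy as np, torch, and scipy's minimize_scalar) are in place. Note the docstring lists a `post` dictionary among the returns, but the body actually returns only the three-tuple (U, S, V):

```python
import torch

Y = torch.randn(50, 80)           # L=50 <= M=80, as the docstring requires
U, S, V = EVBMF(Y)                # sigma2 and H are estimated automatically
print(U.shape, S.shape, V.shape)  # rank is set by the analytic threshold
```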
def __init__(self, params, linear: bool=False) -> None: '\n parameters: list of torch.nn.Module.parameters()\n ' self.params = params self.history = list() mask = list() for (param_idx, param) in enumerate(params): param_shape = param.shape if (not linear): if (len(param_shape) != 4): mask.append(param_idx) elif ((len(param_shape) != 4) and (len(param_shape) != 2)): mask.append(param_idx) self.mask = set(mask)
4,446,876,563,983,158,300
parameters: list of torch.nn.Module.parameters()
src/transformers/adas.py
__init__
MathieuTuli/transformers
python
def __init__(self, params, linear: bool=False) -> None: '\n \n ' self.params = params self.history = list() mask = list() for (param_idx, param) in enumerate(params): param_shape = param.shape if (not linear): if (len(param_shape) != 4): mask.append(param_idx) elif ((len(param_shape) != 4) and (len(param_shape) != 2)): mask.append(param_idx) self.mask = set(mask)
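The masking rule in __init__ reduces to: with linear=False, keep only 4-D (conv) parameters; with linear=True, also keep 2-D (linear) weights. A self-contained illustration of the linear=False case:

```python
import torch

params = [torch.nn.Conv2d(3, 8, 3).weight,  # shape (8, 3, 3, 3) -> kept
          torch.nn.Linear(8, 2).weight,     # shape (2, 8)       -> masked
          torch.nn.Linear(8, 2).bias]       # shape (2,)         -> masked
mask = {i for i, p in enumerate(params) if len(p.shape) != 4}
print(sorted(mask))  # [1, 2]: only the conv weight index stays unmasked
```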
def __call__(self) -> List[Tuple[(int, Union[(LayerMetrics, ConvLayerMetrics)])]]: '\n Computes the knowledge gain (S) and mapping condition (condition)\n ' metrics: List[Tuple[(int, Union[(LayerMetrics, ConvLayerMetrics)])]] = list() for (layer_index, layer) in enumerate(self.params): if (layer_index in self.mask): metrics.append((layer_index, None)) continue if (len(layer.shape) == 4): layer_tensor = layer.data tensor_size = layer_tensor.shape mode_3_unfold = layer_tensor.permute(1, 0, 2, 3) mode_3_unfold = torch.reshape(mode_3_unfold, [tensor_size[1], ((tensor_size[0] * tensor_size[2]) * tensor_size[3])]) mode_4_unfold = layer_tensor mode_4_unfold = torch.reshape(mode_4_unfold, [tensor_size[0], ((tensor_size[1] * tensor_size[2]) * tensor_size[3])]) (in_rank, in_KG, in_condition) = self.compute_low_rank(mode_3_unfold, tensor_size[1]) if ((in_rank is None) and (in_KG is None) and (in_condition is None)): if (len(self.history) > 0): in_rank = self.history[(- 1)][layer_index][1].input_channel.rank in_KG = self.history[(- 1)][layer_index][1].input_channel.KG in_condition = self.history[(- 1)][layer_index][1].input_channel.condition else: in_rank = in_KG = in_condition = 0.0 (out_rank, out_KG, out_condition) = self.compute_low_rank(mode_4_unfold, tensor_size[0]) if ((out_rank is None) and (out_KG is None) and (out_condition is None)): if (len(self.history) > 0): out_rank = self.history[(- 1)][layer_index][1].output_channel.rank out_KG = self.history[(- 1)][layer_index][1].output_channel.KG out_condition = self.history[(- 1)][layer_index][1].output_channel.condition else: out_rank = out_KG = out_condition = 0.0 metrics.append((layer_index, ConvLayerMetrics(input_channel=LayerMetrics(rank=in_rank, KG=in_KG, condition=in_condition), output_channel=LayerMetrics(rank=out_rank, KG=out_KG, condition=out_condition)))) elif (len(layer.shape) == 2): (rank, KG, condition) = self.compute_low_rank(layer, layer.shape[0]) if ((rank is None) and (KG is None) and (condition is None)): if (len(self.history) > 0): rank = self.history[(- 1)][layer_index][1].rank KG = self.history[(- 1)][layer_index][1].KG condition = self.history[(- 1)][layer_index][1].condition else: rank = KG = condition = 0.0 metrics.append((layer_index, LayerMetrics(rank=rank, KG=KG, condition=condition))) else: metrics.append((layer_index, None)) self.history.append(metrics) return metrics
6,542,438,939,856,511,000
Computes the knowledge gain (S) and mapping condition (condition)
src/transformers/adas.py
__call__
MathieuTuli/transformers
python
def __call__(self) -> List[Tuple[(int, Union[(LayerMetrics, ConvLayerMetrics)])]]: '\n \n ' metrics: List[Tuple[(int, Union[(LayerMetrics, ConvLayerMetrics)])]] = list() for (layer_index, layer) in enumerate(self.params): if (layer_index in self.mask): metrics.append((layer_index, None)) continue if (len(layer.shape) == 4): layer_tensor = layer.data tensor_size = layer_tensor.shape mode_3_unfold = layer_tensor.permute(1, 0, 2, 3) mode_3_unfold = torch.reshape(mode_3_unfold, [tensor_size[1], ((tensor_size[0] * tensor_size[2]) * tensor_size[3])]) mode_4_unfold = layer_tensor mode_4_unfold = torch.reshape(mode_4_unfold, [tensor_size[0], ((tensor_size[1] * tensor_size[2]) * tensor_size[3])]) (in_rank, in_KG, in_condition) = self.compute_low_rank(mode_3_unfold, tensor_size[1]) if ((in_rank is None) and (in_KG is None) and (in_condition is None)): if (len(self.history) > 0): in_rank = self.history[(- 1)][layer_index][1].input_channel.rank in_KG = self.history[(- 1)][layer_index][1].input_channel.KG in_condition = self.history[(- 1)][layer_index][1].input_channel.condition else: in_rank = in_KG = in_condition = 0.0 (out_rank, out_KG, out_condition) = self.compute_low_rank(mode_4_unfold, tensor_size[0]) if ((out_rank is None) and (out_KG is None) and (out_condition is None)): if (len(self.history) > 0): out_rank = self.history[(- 1)][layer_index][1].output_channel.rank out_KG = self.history[(- 1)][layer_index][1].output_channel.KG out_condition = self.history[(- 1)][layer_index][1].output_channel.condition else: out_rank = out_KG = out_condition = 0.0 metrics.append((layer_index, ConvLayerMetrics(input_channel=LayerMetrics(rank=in_rank, KG=in_KG, condition=in_condition), output_channel=LayerMetrics(rank=out_rank, KG=out_KG, condition=out_condition)))) elif (len(layer.shape) == 2): (rank, KG, condition) = self.compute_low_rank(layer, layer.shape[0]) if ((rank is None) and (KG is None) and (condition is None)): if (len(self.history) > 0): rank = self.history[(- 1)][layer_index][1].rank KG = self.history[(- 1)][layer_index][1].KG condition = self.history[(- 1)][layer_index][1].condition else: rank = KG = condition = 0.0 metrics.append((layer_index, LayerMetrics(rank=rank, KG=KG, condition=condition))) else: metrics.append((layer_index, None)) self.history.append(metrics) return metrics
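The two reshapes inside __call__ are standard matricizations of a conv weight (out, in, kH, kW): the mode-3 unfold flattens along input channels, the mode-4 unfold along output channels. A self-contained check of the shapes involved:

```python
import torch

w = torch.randn(16, 8, 3, 3)                    # (out, in, kH, kW)
mode_3 = torch.reshape(w.permute(1, 0, 2, 3), [8, 16 * 3 * 3])
mode_4 = torch.reshape(w, [16, 8 * 3 * 3])
print(mode_3.shape, mode_4.shape)               # (8, 144) (16, 144)
```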
def step(self, closure: callable=None): 'Performs a single optimization step.\n\n Arguments:\n closure (callable, optional): A closure that reevaluates the model\n and returns the loss.\n ' loss = None if (closure is not None): loss = closure() iteration_group = 0 for group in self.param_groups: iteration_group += 1 weight_decay = group['weight_decay'] momentum = group['momentum'] dampening = group['dampening'] nesterov = group['nesterov'] for (p_index, p) in enumerate(group['params']): if (p.grad is None): continue d_p = p.grad.data if (weight_decay != 0): d_p.add_(p.data, alpha=weight_decay) if (momentum != 0): param_state = self.state[p] if ('momentum_buffer' not in param_state): buf = param_state['momentum_buffer'] = torch.clone(d_p).detach() else: buf = param_state['momentum_buffer'] buf.mul_(momentum).add_(d_p, alpha=(1 - dampening)) if nesterov: d_p = d_p.add(momentum, buf) else: d_p = buf p.data.add_(d_p, alpha=(- self.lr_vector[p_index])) return loss
-5,650,415,255,564,906,000
Performs a single optimization step. Arguments: closure (callable, optional): A closure that reevaluates the model and returns the loss.
src/transformers/adas.py
step
MathieuTuli/transformers
python
def step(self, closure: callable=None): 'Performs a single optimization step.\n\n Arguments:\n closure (callable, optional): A closure that reevaluates the model\n and returns the loss.\n ' loss = None if (closure is not None): loss = closure() iteration_group = 0 for group in self.param_groups: iteration_group += 1 weight_decay = group['weight_decay'] momentum = group['momentum'] dampening = group['dampening'] nesterov = group['nesterov'] for (p_index, p) in enumerate(group['params']): if (p.grad is None): continue d_p = p.grad.data if (weight_decay != 0): d_p.add_(p.data, alpha=weight_decay) if (momentum != 0): param_state = self.state[p] if ('momentum_buffer' not in param_state): buf = param_state['momentum_buffer'] = torch.clone(d_p).detach() else: buf = param_state['momentum_buffer'] buf.mul_(momentum).add_(d_p, alpha=(1 - dampening)) if nesterov: d_p = d_p.add(momentum, buf) else: d_p = buf p.data.add_(d_p, alpha=(- self.lr_vector[p_index])) return loss
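step() follows the standard torch.optim contract, differing from stock SGD mainly in reading a per-layer rate from self.lr_vector. A minimal runnable loop against that contract, using torch's own SGD as a stand-in for the source's optimizer:

```python
import torch

model = torch.nn.Linear(4, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)  # stand-in optimizer
x, y = torch.randn(8, 4), torch.randn(8, 1)
for _ in range(3):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()  # the source's step() is driven the same way
```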
def clientServerUploadOptions(self, options, input=None, transmitname=None, server_kwargs=None): 'Fire up a client and a server and do an upload.' root = '/tmp' home = os.path.dirname(os.path.abspath(__file__)) filename = '640KBFILE' input_path = os.path.join(home, filename) if (not input): input = input_path if transmitname: filename = transmitname server_kwargs = (server_kwargs or {}) server = tftpy.TftpServer(root, **server_kwargs) client = tftpy.TftpClient('localhost', 20001, options) child_pid = os.fork() if child_pid: try: time.sleep(1) client.upload(filename, input) finally: os.kill(child_pid, 15) os.waitpid(child_pid, 0) else: server.listen('localhost', 20001)
2,230,971,969,529,344,300
Fire up a client and a server and do an upload.
t/test.py
clientServerUploadOptions
mapcollab/python-tftpy
python
def clientServerUploadOptions(self, options, input=None, transmitname=None, server_kwargs=None): root = '/tmp' home = os.path.dirname(os.path.abspath(__file__)) filename = '640KBFILE' input_path = os.path.join(home, filename) if (not input): input = input_path if transmitname: filename = transmitname server_kwargs = (server_kwargs or {}) server = tftpy.TftpServer(root, **server_kwargs) client = tftpy.TftpClient('localhost', 20001, options) child_pid = os.fork() if child_pid: try: time.sleep(1) client.upload(filename, input) finally: os.kill(child_pid, 15) os.waitpid(child_pid, 0) else: server.listen('localhost', 20001)
def clientServerDownloadOptions(self, options, output='/tmp/out'): 'Fire up a client and a server and do a download.' root = os.path.dirname(os.path.abspath(__file__)) server = tftpy.TftpServer(root) client = tftpy.TftpClient('localhost', 20001, options) child_pid = os.fork() if child_pid: try: time.sleep(1) client.download('640KBFILE', output) finally: os.kill(child_pid, 15) os.waitpid(child_pid, 0) else: server.listen('localhost', 20001)
1,382,451,886,181,627,400
Fire up a client and a server and do a download.
t/test.py
clientServerDownloadOptions
mapcollab/python-tftpy
python
def clientServerDownloadOptions(self, options, output='/tmp/out'): root = os.path.dirname(os.path.abspath(__file__)) server = tftpy.TftpServer(root) client = tftpy.TftpClient('localhost', 20001, options) child_pid = os.fork() if child_pid: try: time.sleep(1) client.download('640KBFILE', output) finally: os.kill(child_pid, 15) os.waitpid(child_pid, 0) else: server.listen('localhost', 20001)
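Both test helpers use the same fork pattern: the child becomes the tftpy server, the parent sleeps briefly so the server can bind, acts as the client, then SIGTERMs and reaps the child. A generic sketch of that harness (POSIX-only; the function names are illustrative):

```python
import os
import time

def run_with_forked_server(server_fn, client_fn):
    """Run server_fn in a forked child while client_fn drives it from the parent."""
    pid = os.fork()
    if pid:                      # parent: wait for the child to bind, then act
        try:
            time.sleep(1)
            client_fn()
        finally:
            os.kill(pid, 15)     # SIGTERM the server
            os.waitpid(pid, 0)   # reap the child
    else:                        # child: serve until killed
        server_fn()
```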
def __init__(self, *, host: str='googleads.googleapis.com', credentials: credentials.Credentials=None, credentials_file: str=None, scopes: Sequence[str]=None, channel: grpc.Channel=None, api_mtls_endpoint: str=None, client_cert_source: Callable[([], Tuple[(bytes, bytes)])]=None, ssl_channel_credentials: grpc.ChannelCredentials=None, quota_project_id: Optional[str]=None, client_info: gapic_v1.client_info.ClientInfo=DEFAULT_CLIENT_INFO) -> None: "Instantiate the transport.\n\n Args:\n host (Optional[str]): The hostname to connect to.\n credentials (Optional[google.auth.credentials.Credentials]): The\n authorization credentials to attach to requests. These\n credentials identify the application to the service; if none\n are specified, the client will attempt to ascertain the\n credentials from the environment.\n This argument is ignored if ``channel`` is provided.\n credentials_file (Optional[str]): A file with credentials that can\n be loaded with :func:`google.auth.load_credentials_from_file`.\n This argument is ignored if ``channel`` is provided.\n scopes (Optional[Sequence[str]]): A list of scopes. This argument is\n ignored if ``channel`` is provided.\n channel (Optional[grpc.Channel]): A ``Channel`` instance through\n which to make calls.\n api_mtls_endpoint (Optional[str]): Deprecated. The mutual TLS endpoint.\n If provided, it overrides the ``host`` argument and tries to create\n a mutual TLS channel with client SSL credentials from\n ``client_cert_source`` or application default SSL credentials.\n client_cert_source (Optional[Callable[[], Tuple[bytes, bytes]]]):\n Deprecated. A callback to provide client SSL certificate bytes and\n private key bytes, both in PEM format. It is ignored if\n ``api_mtls_endpoint`` is None.\n ssl_channel_credentials (grpc.ChannelCredentials): SSL credentials\n for grpc channel. It is ignored if ``channel`` is provided.\n quota_project_id (Optional[str]): An optional project to use for billing\n and quota.\n client_info (google.api_core.gapic_v1.client_info.ClientInfo):\n The client info used to send a user-agent string along with\n API requests. If ``None``, then default info will be used.\n Generally, you only need to set this if you're developing\n your own client library.\n\n Raises:\n google.auth.exceptions.MutualTLSChannelError: If mutual TLS transport\n creation failed for any reason.\n " self._ssl_channel_credentials = ssl_channel_credentials if channel: credentials = False self._grpc_channel = channel self._ssl_channel_credentials = None elif api_mtls_endpoint: warnings.warn('api_mtls_endpoint and client_cert_source are deprecated', DeprecationWarning) host = (api_mtls_endpoint if (':' in api_mtls_endpoint) else (api_mtls_endpoint + ':443')) if (credentials is None): (credentials, _) = auth.default(scopes=self.AUTH_SCOPES, quota_project_id=quota_project_id) if client_cert_source: (cert, key) = client_cert_source() ssl_credentials = grpc.ssl_channel_credentials(certificate_chain=cert, private_key=key) else: ssl_credentials = SslCredentials().ssl_credentials self._grpc_channel = type(self).create_channel(host, credentials=credentials, credentials_file=credentials_file, ssl_credentials=ssl_credentials, scopes=(scopes or self.AUTH_SCOPES), quota_project_id=quota_project_id, options=[('grpc.max_send_message_length', (- 1)), ('grpc.max_receive_message_length', (- 1))]) self._ssl_channel_credentials = ssl_credentials else: host = (host if (':' in host) else (host + ':443')) if (credentials is None): (credentials, _) = auth.default(scopes=self.AUTH_SCOPES) self._grpc_channel = type(self).create_channel(host, credentials=credentials, ssl_credentials=ssl_channel_credentials, scopes=self.AUTH_SCOPES, options=[('grpc.max_send_message_length', (- 1)), ('grpc.max_receive_message_length', (- 1))]) self._stubs = {} super().__init__(host=host, credentials=credentials, client_info=client_info)
5,966,026,779,169,955,000
Instantiate the transport. Args: host (Optional[str]): The hostname to connect to. credentials (Optional[google.auth.credentials.Credentials]): The authorization credentials to attach to requests. These credentials identify the application to the service; if none are specified, the client will attempt to ascertain the credentials from the environment. This argument is ignored if ``channel`` is provided. credentials_file (Optional[str]): A file with credentials that can be loaded with :func:`google.auth.load_credentials_from_file`. This argument is ignored if ``channel`` is provided. scopes (Optional[Sequence[str]]): A list of scopes. This argument is ignored if ``channel`` is provided. channel (Optional[grpc.Channel]): A ``Channel`` instance through which to make calls. api_mtls_endpoint (Optional[str]): Deprecated. The mutual TLS endpoint. If provided, it overrides the ``host`` argument and tries to create a mutual TLS channel with client SSL credentials from ``client_cert_source`` or application default SSL credentials. client_cert_source (Optional[Callable[[], Tuple[bytes, bytes]]]): Deprecated. A callback to provide client SSL certificate bytes and private key bytes, both in PEM format. It is ignored if ``api_mtls_endpoint`` is None. ssl_channel_credentials (grpc.ChannelCredentials): SSL credentials for grpc channel. It is ignored if ``channel`` is provided. quota_project_id (Optional[str]): An optional project to use for billing and quota. client_info (google.api_core.gapic_v1.client_info.ClientInfo): The client info used to send a user-agent string along with API requests. If ``None``, then default info will be used. Generally, you only need to set this if you're developing your own client library. Raises: google.auth.exceptions.MutualTLSChannelError: If mutual TLS transport creation failed for any reason.
google/ads/googleads/v4/services/services/ad_group_service/transports/grpc.py
__init__
batardo/google-ads-python
python
def __init__(self, *, host: str='googleads.googleapis.com', credentials: credentials.Credentials=None, credentials_file: str=None, scopes: Sequence[str]=None, channel: grpc.Channel=None, api_mtls_endpoint: str=None, client_cert_source: Callable[([], Tuple[(bytes, bytes)])]=None, ssl_channel_credentials: grpc.ChannelCredentials=None, quota_project_id: Optional[str]=None, client_info: gapic_v1.client_info.ClientInfo=DEFAULT_CLIENT_INFO) -> None: "Instantiate the transport.\n\n Args:\n host (Optional[str]): The hostname to connect to.\n credentials (Optional[google.auth.credentials.Credentials]): The\n authorization credentials to attach to requests. These\n credentials identify the application to the service; if none\n are specified, the client will attempt to ascertain the\n credentials from the environment.\n This argument is ignored if ``channel`` is provided.\n credentials_file (Optional[str]): A file with credentials that can\n be loaded with :func:`google.auth.load_credentials_from_file`.\n This argument is ignored if ``channel`` is provided.\n scopes (Optional[Sequence[str]]): A list of scopes. This argument is\n ignored if ``channel`` is provided.\n channel (Optional[grpc.Channel]): A ``Channel`` instance through\n which to make calls.\n api_mtls_endpoint (Optional[str]): Deprecated. The mutual TLS endpoint.\n If provided, it overrides the ``host`` argument and tries to create\n a mutual TLS channel with client SSL credentials from\n ``client_cert_source`` or application default SSL credentials.\n client_cert_source (Optional[Callable[[], Tuple[bytes, bytes]]]):\n Deprecated. A callback to provide client SSL certificate bytes and\n private key bytes, both in PEM format. It is ignored if\n ``api_mtls_endpoint`` is None.\n ssl_channel_credentials (grpc.ChannelCredentials): SSL credentials\n for grpc channel. It is ignored if ``channel`` is provided.\n quota_project_id (Optional[str]): An optional project to use for billing\n and quota.\n client_info (google.api_core.gapic_v1.client_info.ClientInfo):\n The client info used to send a user-agent string along with\n API requests. If ``None``, then default info will be used.\n Generally, you only need to set this if you're developing\n your own client library.\n\n Raises:\n google.auth.exceptions.MutualTLSChannelError: If mutual TLS transport\n creation failed for any reason.\n " self._ssl_channel_credentials = ssl_channel_credentials if channel: credentials = False self._grpc_channel = channel self._ssl_channel_credentials = None elif api_mtls_endpoint: warnings.warn('api_mtls_endpoint and client_cert_source are deprecated', DeprecationWarning) host = (api_mtls_endpoint if (':' in api_mtls_endpoint) else (api_mtls_endpoint + ':443')) if (credentials is None): (credentials, _) = auth.default(scopes=self.AUTH_SCOPES, quota_project_id=quota_project_id) if client_cert_source: (cert, key) = client_cert_source() ssl_credentials = grpc.ssl_channel_credentials(certificate_chain=cert, private_key=key) else: ssl_credentials = SslCredentials().ssl_credentials self._grpc_channel = type(self).create_channel(host, credentials=credentials, credentials_file=credentials_file, ssl_credentials=ssl_credentials, scopes=(scopes or self.AUTH_SCOPES), quota_project_id=quota_project_id, options=[('grpc.max_send_message_length', (- 1)), ('grpc.max_receive_message_length', (- 1))]) self._ssl_channel_credentials = ssl_credentials else: host = (host if (':' in host) else (host + ':443')) if (credentials is None): (credentials, _) = auth.default(scopes=self.AUTH_SCOPES) self._grpc_channel = type(self).create_channel(host, credentials=credentials, ssl_credentials=ssl_channel_credentials, scopes=self.AUTH_SCOPES, options=[('grpc.max_send_message_length', (- 1)), ('grpc.max_receive_message_length', (- 1))]) self._stubs = {} super().__init__(host=host, credentials=credentials, client_info=client_info)
@classmethod def create_channel(cls, host: str='googleads.googleapis.com', credentials: credentials.Credentials=None, scopes: Optional[Sequence[str]]=None, **kwargs) -> grpc.Channel: 'Create and return a gRPC channel object.\n Args:\n host (Optional[str]): The host for the channel to use.\n credentials (Optional[~.Credentials]): The\n authorization credentials to attach to requests. These\n credentials identify this application to the service. If\n none are specified, the client will attempt to ascertain\n the credentials from the environment.\n scopes (Optional[Sequence[str]]): An optional list of scopes needed for this\n service. These are only used when credentials are not specified and\n are passed to :func:`google.auth.default`.\n kwargs (Optional[dict]): Keyword arguments, which are passed to the\n channel creation.\n Returns:\n grpc.Channel: A gRPC channel object.\n ' return grpc_helpers.create_channel(host, credentials=credentials, scopes=(scopes or cls.AUTH_SCOPES), **kwargs)
-5,144,630,308,523,399,000
Create and return a gRPC channel object. Args: host (Optional[str]): The host for the channel to use. credentials (Optional[~.Credentials]): The authorization credentials to attach to requests. These credentials identify this application to the service. If none are specified, the client will attempt to ascertain the credentials from the environment. scopes (Optional[Sequence[str]]): An optional list of scopes needed for this service. These are only used when credentials are not specified and are passed to :func:`google.auth.default`. kwargs (Optional[dict]): Keyword arguments, which are passed to the channel creation. Returns: grpc.Channel: A gRPC channel object.
google/ads/googleads/v4/services/services/ad_group_service/transports/grpc.py
create_channel
batardo/google-ads-python
python
@classmethod def create_channel(cls, host: str='googleads.googleapis.com', credentials: credentials.Credentials=None, scopes: Optional[Sequence[str]]=None, **kwargs) -> grpc.Channel: 'Create and return a gRPC channel object.\n Args:\n host (Optional[str]): The host for the channel to use.\n credentials (Optional[~.Credentials]): The\n authorization credentials to attach to requests. These\n credentials identify this application to the service. If\n none are specified, the client will attempt to ascertain\n the credentials from the environment.\n scopes (Optional[Sequence[str]]): An optional list of scopes needed for this\n service. These are only used when credentials are not specified and\n are passed to :func:`google.auth.default`.\n kwargs (Optional[dict]): Keyword arguments, which are passed to the\n channel creation.\n Returns:\n grpc.Channel: A gRPC channel object.\n ' return grpc_helpers.create_channel(host, credentials=credentials, scopes=(scopes or cls.AUTH_SCOPES), **kwargs)
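A hedged usage sketch for create_channel. The enclosing class name (AdGroupServiceGrpcTransport, per the google-ads-python layout) and the adwords OAuth scope are assumptions about this module's context, not taken from the record itself:

```python
# Assumed class name and scope; credentials fall back to the environment
# via google.auth.default when none are passed explicitly.
channel = AdGroupServiceGrpcTransport.create_channel(
    'googleads.googleapis.com',
    scopes=['https://www.googleapis.com/auth/adwords'],
)
```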
@property def grpc_channel(self) -> grpc.Channel: 'Return the channel designed to connect to this service.\n ' return self._grpc_channel
-1,956,682,971,687,930,400
Return the channel designed to connect to this service.
google/ads/googleads/v4/services/services/ad_group_service/transports/grpc.py
grpc_channel
batardo/google-ads-python
python
@property def grpc_channel(self) -> grpc.Channel: '\n ' return self._grpc_channel
@property def get_ad_group(self) -> Callable[([ad_group_service.GetAdGroupRequest], ad_group.AdGroup)]: 'Return a callable for the get ad group method over gRPC.\n\n Returns the requested ad group in full detail.\n\n Returns:\n Callable[[~.GetAdGroupRequest],\n ~.AdGroup]:\n A function that, when called, will call the underlying RPC\n on the server.\n ' if ('get_ad_group' not in self._stubs): self._stubs['get_ad_group'] = self.grpc_channel.unary_unary('/google.ads.googleads.v4.services.AdGroupService/GetAdGroup', request_serializer=ad_group_service.GetAdGroupRequest.serialize, response_deserializer=ad_group.AdGroup.deserialize) return self._stubs['get_ad_group']
-2,812,506,310,219,055,600
Return a callable for the get ad group method over gRPC. Returns the requested ad group in full detail. Returns: Callable[[~.GetAdGroupRequest], ~.AdGroup]: A function that, when called, will call the underlying RPC on the server.
google/ads/googleads/v4/services/services/ad_group_service/transports/grpc.py
get_ad_group
batardo/google-ads-python
python
@property def get_ad_group(self) -> Callable[([ad_group_service.GetAdGroupRequest], ad_group.AdGroup)]: 'Return a callable for the get ad group method over gRPC.\n\n Returns the requested ad group in full detail.\n\n Returns:\n Callable[[~.GetAdGroupRequest],\n ~.AdGroup]:\n A function that, when called, will call the underlying RPC\n on the server.\n ' if ('get_ad_group' not in self._stubs): self._stubs['get_ad_group'] = self.grpc_channel.unary_unary('/google.ads.googleads.v4.services.AdGroupService/GetAdGroup', request_serializer=ad_group_service.GetAdGroupRequest.serialize, response_deserializer=ad_group.AdGroup.deserialize) return self._stubs['get_ad_group']
@property def mutate_ad_groups(self) -> Callable[([ad_group_service.MutateAdGroupsRequest], ad_group_service.MutateAdGroupsResponse)]: 'Return a callable for the mutate ad groups method over gRPC.\n\n Creates, updates, or removes ad groups. Operation\n statuses are returned.\n\n Returns:\n Callable[[~.MutateAdGroupsRequest],\n ~.MutateAdGroupsResponse]:\n A function that, when called, will call the underlying RPC\n on the server.\n ' if ('mutate_ad_groups' not in self._stubs): self._stubs['mutate_ad_groups'] = self.grpc_channel.unary_unary('/google.ads.googleads.v4.services.AdGroupService/MutateAdGroups', request_serializer=ad_group_service.MutateAdGroupsRequest.serialize, response_deserializer=ad_group_service.MutateAdGroupsResponse.deserialize) return self._stubs['mutate_ad_groups']
-8,380,430,057,967,699,000
Return a callable for the mutate ad groups method over gRPC. Creates, updates, or removes ad groups. Operation statuses are returned. Returns: Callable[[~.MutateAdGroupsRequest], ~.MutateAdGroupsResponse]: A function that, when called, will call the underlying RPC on the server.
google/ads/googleads/v4/services/services/ad_group_service/transports/grpc.py
mutate_ad_groups
batardo/google-ads-python
python
@property def mutate_ad_groups(self) -> Callable[([ad_group_service.MutateAdGroupsRequest], ad_group_service.MutateAdGroupsResponse)]: 'Return a callable for the mutate ad groups method over gRPC.\n\n Creates, updates, or removes ad groups. Operation\n statuses are returned.\n\n Returns:\n Callable[[~.MutateAdGroupsRequest],\n ~.MutateAdGroupsResponse]:\n A function that, when called, will call the underlying RPC\n on the server.\n ' if ('mutate_ad_groups' not in self._stubs): self._stubs['mutate_ad_groups'] = self.grpc_channel.unary_unary('/google.ads.googleads.v4.services.AdGroupService/MutateAdGroups', request_serializer=ad_group_service.MutateAdGroupsRequest.serialize, response_deserializer=ad_group_service.MutateAdGroupsResponse.deserialize) return self._stubs['mutate_ad_groups']
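Both properties above follow the same lazy stub-caching pattern. Below is a minimal sketch of that pattern using only the public grpc API; the method path and channel target are placeholders, and the real transport additionally wires the generated request/response (de)serializers.
import grpc

class _TransportSketch:
    def __init__(self, channel: grpc.Channel):
        self.grpc_channel = channel
        self._stubs = {}

    @property
    def example_rpc(self):
        # Build the unary-unary multicallable once, on first access, then reuse it.
        if 'example_rpc' not in self._stubs:
            self._stubs['example_rpc'] = self.grpc_channel.unary_unary(
                '/example.v1.ExampleService/ExampleRpc')
        return self._stubs['example_rpc']

transport = _TransportSketch(grpc.insecure_channel('localhost:50051'))
assert transport.example_rpc is transport.example_rpc  # cached after first access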
def build_batch_cells_update(spreadsheet_key, worksheet_id): 'Creates an empty cells feed for adding batch cell updates to.\n\n Call batch_set_cell on the resulting CellsFeed instance, then send the\n batch request. TODO: fill in\n\n Args:\n spreadsheet_key: The ID of the spreadsheet\n worksheet_id:\n ' feed_id_text = (BATCH_POST_ID_TEMPLATE % (spreadsheet_key, worksheet_id)) return CellsFeed(id=atom.data.Id(text=feed_id_text), link=[atom.data.Link(rel='edit', href=(BATCH_EDIT_LINK_TEMPLATE % (feed_id_text,)))])
810,300,953,523,937,000
Creates an empty cells feed for adding batch cell updates to. Call batch_set_cell on the resulting CellsFeed instance, then send the batch request. TODO: fill in Args: spreadsheet_key: The ID of the spreadsheet worksheet_id:
src/gdata/spreadsheets/data.py
build_batch_cells_update
BinaryMuse/gdata-python3
python
def build_batch_cells_update(spreadsheet_key, worksheet_id): 'Creates an empty cells feed for adding batch cell updates to.\n\n Call batch_set_cell on the resulting CellsFeed instance, then send the\n batch request. TODO: fill in\n\n Args:\n spreadsheet_key: The ID of the spreadsheet\n worksheet_id:\n ' feed_id_text = (BATCH_POST_ID_TEMPLATE % (spreadsheet_key, worksheet_id)) return CellsFeed(id=atom.data.Id(text=feed_id_text), link=[atom.data.Link(rel='edit', href=(BATCH_EDIT_LINK_TEMPLATE % (feed_id_text,)))])
def get_spreadsheet_key(self): 'Extracts the spreadsheet key unique to this spreadsheet.' return self.get_id().split('/')[(- 1)]
1,261,120,473,276,283,000
Extracts the spreadsheet key unique to this spreadsheet.
src/gdata/spreadsheets/data.py
get_spreadsheet_key
BinaryMuse/gdata-python3
python
def get_spreadsheet_key(self): return self.get_id().split('/')[(- 1)]
def get_worksheet_id(self): 'The worksheet ID identifies this worksheet in its spreadsheet.' return self.get_id().split('/')[(- 1)]
2,508,608,035,589,404,000
The worksheet ID identifies this worksheet in its spreadsheet.
src/gdata/spreadsheets/data.py
get_worksheet_id
BinaryMuse/gdata-python3
python
def get_worksheet_id(self): return self.get_id().split('/')[(- 1)]
def get_value(self, column_name): "Returns the displayed text for the desired column in this row.\n\n The formula or input which generated the displayed value is not accessible\n through the list feed, to see the user's input, use the cells feed.\n\n If a column is not present in this spreadsheet, or there is no value\n for a column in this row, this method will return None.\n " values = self.get_elements(column_name, GSX_NAMESPACE) if (len(values) == 0): return None return values[0].text
7,342,499,590,514,441,000
Returns the displayed text for the desired column in this row. The formula or input which generated the displayed value is not accessible through the list feed, to see the user's input, use the cells feed. If a column is not present in this spreadsheet, or there is no value for a column in this row, this method will return None.
src/gdata/spreadsheets/data.py
get_value
BinaryMuse/gdata-python3
python
def get_value(self, column_name): "Returns the displayed text for the desired column in this row.\n\n The formula or input which generated the displayed value is not accessible\n through the list feed, to see the user's input, use the cells feed.\n\n If a column is not present in this spreadsheet, or there is no value\n for a column in this row, this method will return None.\n " values = self.get_elements(column_name, GSX_NAMESPACE) if (len(values) == 0): return None return values[0].text
def set_value(self, column_name, value): 'Changes the value of cell in this row under the desired column name.\n\n Warning: if the cell contained a formula, it will be wiped out by setting\n the value using the list feed since the list feed only works with\n displayed values.\n\n No client side checking is performed on the column_name, you need to\n ensure that the column_name is the local tag name in the gsx tag for the\n column. For example, the column_name will not contain special characters,\n spaces, uppercase letters, etc.\n ' values = self.get_elements(column_name, GSX_NAMESPACE) if (len(values) > 0): values[0].text = value else: new_value = ListRow(text=value) new_value._qname = (new_value._qname % (column_name,)) self._other_elements.append(new_value)
-3,940,375,273,505,523,700
Changes the value of cell in this row under the desired column name. Warning: if the cell contained a formula, it will be wiped out by setting the value using the list feed since the list feed only works with displayed values. No client side checking is performed on the column_name, you need to ensure that the column_name is the local tag name in the gsx tag for the column. For example, the column_name will not contain special characters, spaces, uppercase letters, etc.
src/gdata/spreadsheets/data.py
set_value
BinaryMuse/gdata-python3
python
def set_value(self, column_name, value): 'Changes the value of cell in this row under the desired column name.\n\n Warning: if the cell contained a formula, it will be wiped out by setting\n the value using the list feed since the list feed only works with\n displayed values.\n\n No client side checking is performed on the column_name, you need to\n ensure that the column_name is the local tag name in the gsx tag for the\n column. For example, the column_name will not contain special characters,\n spaces, uppercase letters, etc.\n ' values = self.get_elements(column_name, GSX_NAMESPACE) if (len(values) > 0): values[0].text = value else: new_value = ListRow(text=value) new_value._qname = (new_value._qname % (column_name,)) self._other_elements.append(new_value)
def to_dict(self): 'Converts this row to a mapping of column names to their values.' result = {} values = self.get_elements(namespace=GSX_NAMESPACE) for item in values: result[item._get_tag()] = item.text return result
6,996,222,690,848,394,000
Converts this row to a mapping of column names to their values.
src/gdata/spreadsheets/data.py
to_dict
BinaryMuse/gdata-python3
python
def to_dict(self): result = {} values = self.get_elements(namespace=GSX_NAMESPACE) for item in values: result[item._get_tag()] = item.text return result
def from_dict(self, values): 'Sets values for this row from the dictionary.\n\n Old values which are already in the entry will not be removed unless\n they are overwritten with new values from the dict.\n ' for (column, value) in values.items(): self.set_value(column, value)
3,441,590,298,280,555,500
Sets values for this row from the dictionary. Old values which are already in the entry will not be removed unless they are overwritten with new values from the dict.
src/gdata/spreadsheets/data.py
from_dict
BinaryMuse/gdata-python3
python
def from_dict(self, values): 'Sets values for this row from the dictionary.\n\n Old values which are already in the entry will not be removed unless\n they are overwritten with new values from the dict.\n ' for (column, value) in values.items(): self.set_value(column, value)
def add_set_cell(self, row, col, input_value): 'Adds a request to change the contents of a cell to this batch request.\n\n Args:\n row: int, The row number for this cell. Numbering starts at 1.\n col: int, The column number for this cell. Starts at 1.\n input_value: str, The desired formula/content this cell should contain.\n ' self.add_update(CellEntry(id=atom.data.Id(text=(BATCH_ENTRY_ID_TEMPLATE % (self.id.text, row, col))), cell=Cell(col=str(col), row=str(row), input_value=input_value))) return self
8,668,246,841,363,812,000
Adds a request to change the contents of a cell to this batch request. Args: row: int, The row number for this cell. Numbering starts at 1. col: int, The column number for this cell. Starts at 1. input_value: str, The desired formula/content this cell should contain.
src/gdata/spreadsheets/data.py
add_set_cell
BinaryMuse/gdata-python3
python
def add_set_cell(self, row, col, input_value): 'Adds a request to change the contents of a cell to this batch request.\n\n Args:\n row: int, The row number for this cell. Numbering starts at 1.\n col: int, The column number for this cell. Starts at 1.\n input_value: str, The desired formula/content this cell should contain.\n ' self.add_update(CellEntry(id=atom.data.Id(text=(BATCH_ENTRY_ID_TEMPLATE % (self.id.text, row, col))), cell=Cell(col=str(col), row=str(row), input_value=input_value))) return self
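A hedged usage sketch combining build_batch_cells_update with add_set_cell; the spreadsheet key and the worksheet id 'od6' are placeholders, and actually sending the batch request is elided (see the TODO above). Since add_set_cell returns self, updates can be chained:
feed = build_batch_cells_update('your-spreadsheet-key', 'od6')
feed.add_set_cell(1, 1, 'Name').add_set_cell(1, 2, 'Score')
feed.add_set_cell(2, 2, '=SUM(B3:B10)')  # input_value may be a formula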
def print_array_to_excel(array, first_cell, ws, axis=2): '\n Print an np array to excel using openpyxl\n :param array: np array\n :param first_cell: first cell to start dumping values in\n :param ws: worksheet reference. From openpyxl, ws=wb[sheetname]\n :param axis: to determine if the array is a col vector (0), row vector (1), or 2d matrix (2)\n ' if isinstance(array, (list,)): array = np.array(array) shape = array.shape if (axis == 0): array.flatten() for i in range(shape[0]): j = 0 ws.cell((i + first_cell[0]), (j + first_cell[1])).value = array[i] elif (axis == 1): array.flatten() for j in range(shape[0]): i = 0 ws.cell((i + first_cell[0]), (j + first_cell[1])).value = array[j] elif (axis == 2): for i in range(shape[0]): for j in range(shape[1]): ws.cell((i + first_cell[0]), (j + first_cell[1])).value = array[(i, j)]
-598,473,824,827,451,600
Print an np array to excel using openpyxl :param array: np array :param first_cell: first cell to start dumping values in :param ws: worksheet reference. From openpyxl, ws=wb[sheetname] :param axis: to determine if the array is a col vector (0), row vector (1), or 2d matrix (2)
gold nanocluster synthesis/own_package/others.py
print_array_to_excel
acceleratedmaterials/NUS_AMDworkshop
python
def print_array_to_excel(array, first_cell, ws, axis=2): '\n Print an np array to excel using openpyxl\n :param array: np array\n :param first_cell: first cell to start dumping values in\n :param ws: worksheet reference. From openpyxl, ws=wb[sheetname]\n :param axis: to determine if the array is a col vector (0), row vector (1), or 2d matrix (2)\n ' if isinstance(array, (list,)): array = np.array(array) shape = array.shape if (axis == 0): array = array.flatten() for i in range(array.shape[0]): j = 0 ws.cell((i + first_cell[0]), (j + first_cell[1])).value = array[i] elif (axis == 1): array = array.flatten() for j in range(array.shape[0]): i = 0 ws.cell((i + first_cell[0]), (j + first_cell[1])).value = array[j] elif (axis == 2): for i in range(shape[0]): for j in range(shape[1]): ws.cell((i + first_cell[0]), (j + first_cell[1])).value = array[(i, j)]
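A minimal usage sketch for print_array_to_excel, assuming numpy and openpyxl are installed; the sheet and file names are illustrative. Note that first_cell is 1-indexed (row, column), matching openpyxl's ws.cell:
import numpy as np
from openpyxl import Workbook

wb = Workbook()
ws = wb.active
print_array_to_excel(np.arange(6).reshape(2, 3), (1, 1), ws, axis=2)  # 2x3 block starting at A1
print_array_to_excel([10, 20, 30], (4, 1), ws, axis=1)                # row vector starting at A4
wb.save('demo.xlsx')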
def findMedianSortedArrays(self, nums1, nums2): '\n :type nums1: List[int]\n :type nums2: List[int]\n :rtype: float\n ' m = len(nums1) n = len(nums2) def find_kth(nums1, nums2, index1, index2, k): if (index1 >= len(nums1)): return nums2[((index2 + k) - 1)] if (index2 >= len(nums2)): return nums1[((index1 + k) - 1)] if (k == 1): return (nums1[index1] if (nums1[index1] < nums2[index2]) else nums2[index2]) do_discard_nums1 = True mid = ((k // 2) - 1) if (((index1 + mid) >= len(nums1)) or (((index2 + mid) < len(nums2)) and (nums1[(index1 + mid)] > nums2[(index2 + mid)]))): do_discard_nums1 = False mid += 1 if do_discard_nums1: return find_kth(nums1, nums2, (index1 + mid), index2, (k - mid)) else: return find_kth(nums1, nums2, index1, (index2 + mid), (k - mid)) return ((find_kth(nums1, nums2, 0, 0, (((m + n) + 1) // 2)) + find_kth(nums1, nums2, 0, 0, (((m + n) + 2) // 2))) / 2.0)
7,292,337,121,661,577,000
:type nums1: List[int] :type nums2: List[int] :rtype: float
Codes/xiaohong2019/leetcode/4_median_of_two_sorted_arrays.py
findMedianSortedArrays
Buddy119/algorithm
python
def findMedianSortedArrays(self, nums1, nums2): '\n :type nums1: List[int]\n :type nums2: List[int]\n :rtype: float\n ' m = len(nums1) n = len(nums2) def find_kth(nums1, nums2, index1, index2, k): if (index1 >= len(nums1)): return nums2[((index2 + k) - 1)] if (index2 >= len(nums2)): return nums1[((index1 + k) - 1)] if (k == 1): return (nums1[index1] if (nums1[index1] < nums2[index2]) else nums2[index2]) do_discard_nums1 = True mid = ((k // 2) - 1) if (((index1 + mid) >= len(nums1)) or (((index2 + mid) < len(nums2)) and (nums1[(index1 + mid)] > nums2[(index2 + mid)]))): do_discard_nums1 = False mid += 1 if do_discard_nums1: return find_kth(nums1, nums2, (index1 + mid), index2, (k - mid)) else: return find_kth(nums1, nums2, index1, (index2 + mid), (k - mid)) return ((find_kth(nums1, nums2, 0, 0, (((m + n) + 1) // 2)) + find_kth(nums1, nums2, 0, 0, (((m + n) + 2) // 2))) / 2.0)
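A quick sanity check of the k-th element recursion above; the method never touches self, so None can stand in for it:
print(findMedianSortedArrays(None, [1, 3], [2]))     # -> 2.0 (merged: [1, 2, 3])
print(findMedianSortedArrays(None, [1, 2], [3, 4]))  # -> 2.5 (merged: [1, 2, 3, 4])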
@bot.command() @commands.is_owner() async def prepare(ctx: commands.Context): 'Starts a persistent view.' (await ctx.send("What's your favourite colour?", view=PersistentView()))
5,189,294,021,437,334,000
Starts a persistent view.
examples/views/persistent.py
prepare
Chromosomologist/disnake
python
@bot.command() @commands.is_owner() async def prepare(ctx: commands.Context): (await ctx.send("What's your favourite colour?", view=PersistentView()))
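A hedged sketch of the PersistentView the command above sends, assuming disnake's ui API: a persistent view sets timeout=None and gives every component a fixed custom_id, and the view is registered with bot.add_view on startup so it keeps working across restarts. The label, style, and custom_id below are illustrative:
import disnake

class PersistentView(disnake.ui.View):
    def __init__(self):
        super().__init__(timeout=None)  # required for persistence

    @disnake.ui.button(label='Green', style=disnake.ButtonStyle.green,
                       custom_id='persistent_view:green')
    async def green(self, button: disnake.ui.Button, inter: disnake.MessageInteraction):
        await inter.response.send_message('Green it is!')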
@property def categories(self): 'Category names' return self._meta['categories']
-957,717,788,498,336,500
Category names
seamseg/data/dataset.py
categories
030Solutions/seamseg
python
@property def categories(self): return self._meta['categories']
@property def num_categories(self): 'Number of categories' return len(self.categories)
-8,312,312,894,611,809,000
Number of categories
seamseg/data/dataset.py
num_categories
030Solutions/seamseg
python
@property def num_categories(self): return len(self.categories)
@property def num_stuff(self): 'Number of "stuff" categories' return self._meta['num_stuff']
8,253,697,926,380,450,000
Number of "stuff" categories
seamseg/data/dataset.py
num_stuff
030Solutions/seamseg
python
@property def num_stuff(self): return self._meta['num_stuff']
@property def num_thing(self): 'Number of "thing" categories' return (self.num_categories - self.num_stuff)
-8,476,085,295,398,408,000
Number of "thing" categories
seamseg/data/dataset.py
num_thing
030Solutions/seamseg
python
@property def num_thing(self): return (self.num_categories - self.num_stuff)
@property def original_ids(self): 'Original class id of each category' return self._meta['original_ids']
8,745,717,664,375,170,000
Original class id of each category
seamseg/data/dataset.py
original_ids
030Solutions/seamseg
python
@property def original_ids(self): return self._meta['original_ids']
@property def palette(self): 'Default palette to be used when color-coding semantic labels' return np.array(self._meta['palette'], dtype=np.uint8)
-7,619,175,149,919,133,000
Default palette to be used when color-coding semantic labels
seamseg/data/dataset.py
palette
030Solutions/seamseg
python
@property def palette(self): return np.array(self._meta['palette'], dtype=np.uint8)
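A hedged illustration of the _meta dictionary the properties above read from; the category names and values are invented for the sketch (in this layout, "stuff" categories come first):
_meta = {
    'categories': ['road', 'sky', 'car'],
    'num_stuff': 2,                       # 'road' and 'sky'
    'original_ids': [7, 23, 26],
    'palette': [[128, 64, 128], [70, 130, 180], [0, 0, 142]],
}
# num_thing = num_categories - num_stuff = 3 - 2 = 1  ('car')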
@property def img_sizes(self): 'Size of each image of the dataset' return [img_desc['size'] for img_desc in self._images]
3,391,114,829,995,243,500
Size of each image of the dataset
seamseg/data/dataset.py
img_sizes
030Solutions/seamseg
python
@property def img_sizes(self): return [img_desc['size'] for img_desc in self._images]
@property def img_categories(self): 'Categories present in each image of the dataset' return [img_desc['cat'] for img_desc in self._images]
-1,921,090,118,317,093,600
Categories present in each image of the dataset
seamseg/data/dataset.py
img_categories
030Solutions/seamseg
python
@property def img_categories(self): return [img_desc['cat'] for img_desc in self._images]
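Similarly, a hedged sketch of the per-image descriptor list that img_sizes and img_categories iterate over; the ids and values are illustrative:
_images = [
    {'id': 'frankfurt_000000', 'size': (1024, 2048), 'cat': [0, 1, 2]},
    {'id': 'munich_000001', 'size': (1024, 2048), 'cat': [0, 2]},
]
# img_sizes      -> [(1024, 2048), (1024, 2048)]
# img_categories -> [[0, 1, 2], [0, 2]]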
def get_raw_image(self, idx): 'Load a single, unmodified image with given id from the dataset' img_file = path.join(self._img_dir, idx) if path.exists((img_file + '.png')): img_file = (img_file + '.png') elif path.exists((img_file + '.jpg')): img_file = (img_file + '.jpg') else: raise IOError('Cannot find any image for id {} in {}'.format(idx, self._img_dir)) return Image.open(img_file)
4,054,153,896,457,395,700
Load a single, unmodified image with given id from the dataset
seamseg/data/dataset.py
get_raw_image
030Solutions/seamseg
python
def get_raw_image(self, idx): img_file = path.join(self._img_dir, idx) if path.exists((img_file + '.png')): img_file = (img_file + '.png') elif path.exists((img_file + '.jpg')): img_file = (img_file + '.jpg') else: raise IOError('Cannot find any image for id {} in {}'.format(idx, self._img_dir)) return Image.open(img_file)
def get_image_desc(self, idx): 'Look up an image descriptor given the id' matching = [img_desc for img_desc in self._images if (img_desc['id'] == idx)] if (len(matching) == 1): return matching[0] else: raise ValueError(('No image found with id %s' % idx))
3,199,198,709,214,985,700
Look up an image descriptor given the id
seamseg/data/dataset.py
get_image_desc
030Solutions/seamseg
python
def get_image_desc(self, idx): matching = [img_desc for img_desc in self._images if (img_desc['id'] == idx)] if (len(matching) == 1): return matching[0] else: raise ValueError(('No image found with id %s' % idx))
@property def img_sizes(self): 'Size of each image of the dataset' return [img_desc['size'] for img_desc in self._images]
3,391,114,829,995,243,500
Size of each image of the dataset
seamseg/data/dataset.py
img_sizes
030Solutions/seamseg
python
@property def img_sizes(self): return [img_desc['size'] for img_desc in self._images]
def onMessage(self, message): 'Messages sent to the bot will arrive here. Command handling routing\n is done in this function.' if (not isinstance(message.body, DomishElement)): return None text = unicode(message.body).encode('utf-8').strip() (from_addr, _, _) = message['from'].partition('/') self.message_callback(to_addr=self.jid.userhost(), from_addr=from_addr, content=text, transport_type='xmpp', transport_metadata={'xmpp_id': message.getAttribute('id')})
7,766,323,072,201,223,000
Messages sent to the bot will arrive here. Command handling and routing are done in this function.
vumi/transports/xmpp/xmpp.py
onMessage
rapidsms/vumi
python
def onMessage(self, message): 'Messages sent to the bot will arrive here. Command handling and\n routing are done in this function.' if (not isinstance(message.body, DomishElement)): return None text = unicode(message.body).encode('utf-8').strip() (from_addr, _, _) = message['from'].partition('/') self.message_callback(to_addr=self.jid.userhost(), from_addr=from_addr, content=text, transport_type='xmpp', transport_metadata={'xmpp_id': message.getAttribute('id')})
def __new__(metacls, name, bases, namespace, **kwargs): 'Remove directives from the class namespace.\n\n It does not make sense to have some directives available after the\n class was created or even at the instance level (e.g. doing\n ``self.parameter([1, 2, 3])`` does not make sense). So here, we\n intercept those directives out of the namespace before the class is\n constructed.\n ' directives = ['parameter', 'variable', 'bind', 'run_before', 'run_after', 'require_deps', 'required', 'deferrable', 'sanity_function', 'final', 'performance_function'] for b in directives: namespace.pop(b, None) for item in namespace.pop('_rfm_ext_bound'): namespace.reset(item) return super().__new__(metacls, name, bases, dict(namespace), **kwargs)
-2,115,828,187,254,994,700
Remove directives from the class namespace. It does not make sense to have some directives available after the class was created or even at the instance level (e.g. doing ``self.parameter([1, 2, 3])`` does not make sense). So here, we intercept those directives out of the namespace before the class is constructed.
reframe/core/meta.py
__new__
ChristopherBignamini/reframe
python
def __new__(metacls, name, bases, namespace, **kwargs): 'Remove directives from the class namespace.\n\n It does not make sense to have some directives available after the\n class was created or even at the instance level (e.g. doing\n ``self.parameter([1, 2, 3])`` does not make sense). So here, we\n intercept those directives out of the namespace before the class is\n constructed.\n ' directives = ['parameter', 'variable', 'bind', 'run_before', 'run_after', 'require_deps', 'required', 'deferrable', 'sanity_function', 'final', 'performance_function'] for b in directives: namespace.pop(b, None) for item in namespace.pop('_rfm_ext_bound'): namespace.reset(item) return super().__new__(metacls, name, bases, dict(namespace), **kwargs)
def __call__(cls, *args, **kwargs): 'Inject parameter and variable spaces during object construction.\n\n When a class is instantiated, this method intercepts the arguments\n associated with the parameter and variable spaces. This prevents both\n :func:`__new__` and :func:`__init__` methods from ever seeing these\n arguments.\n\n The parameter and variable spaces are injected into the object after\n construction and before initialization.\n ' _rfm_use_params = kwargs.pop('_rfm_use_params', False) obj = cls.__new__(cls, *args, **kwargs) cls._rfm_var_space.inject(obj, cls) cls._rfm_param_space.inject(obj, cls, _rfm_use_params) obj.__init__(*args, **kwargs) return obj
-5,211,590,653,261,494,000
Inject parameter and variable spaces during object construction. When a class is instantiated, this method intercepts the arguments associated with the parameter and variable spaces. This prevents both :func:`__new__` and :func:`__init__` methods from ever seeing these arguments. The parameter and variable spaces are injected into the object after construction and before initialization.
reframe/core/meta.py
__call__
ChristopherBignamini/reframe
python
def __call__(cls, *args, **kwargs): 'Inject parameter and variable spaces during object construction.\n\n When a class is instantiated, this method intercepts the arguments\n associated with the parameter and variable spaces. This prevents both\n :func:`__new__` and :func:`__init__` methods from ever seeing these\n arguments.\n\n The parameter and variable spaces are injected into the object after\n construction and before initialization.\n ' _rfm_use_params = kwargs.pop('_rfm_use_params', False) obj = cls.__new__(cls, *args, **kwargs) cls._rfm_var_space.inject(obj, cls) cls._rfm_param_space.inject(obj, cls, _rfm_use_params) obj.__init__(*args, **kwargs) return obj
def __getattribute__(cls, name): "Attribute lookup method for custom class attributes.\n\n ReFrame test variables are descriptors injected at the class level.\n If a variable descriptor has already been injected into the class,\n do not return the descriptor object and return the default value\n associated with that variable instead.\n\n .. warning::\n .. versionchanged:: 3.7.0\n Prior versions exposed the variable descriptor object if this\n was already present in the class, instead of returning the\n variable's default value.\n " try: var_space = super().__getattribute__('_rfm_var_space') except AttributeError: var_space = None if (var_space and (name in var_space.injected_vars)): raise AttributeError('delegate variable lookup to __getattr__') return super().__getattribute__(name)
-2,813,964,682,816,430,000
Attribute lookup method for custom class attributes. ReFrame test variables are descriptors injected at the class level. If a variable descriptor has already been injected into the class, do not return the descriptor object and return the default value associated with that variable instead. .. warning:: .. versionchanged:: 3.7.0 Prior versions exposed the variable descriptor object if this was already present in the class, instead of returning the variable's default value.
reframe/core/meta.py
__getattribute__
ChristopherBignamini/reframe
python
def __getattribute__(cls, name): "Attribute lookup method for custom class attributes.\n\n ReFrame test variables are descriptors injected at the class level.\n If a variable descriptor has already been injected into the class,\n do not return the descriptor object and return the default value\n associated with that variable instead.\n\n .. warning::\n .. versionchanged:: 3.7.0\n Prior versions exposed the variable descriptor object if this\n was already present in the class, instead of returning the\n variable's default value.\n " try: var_space = super().__getattribute__('_rfm_var_space') except AttributeError: var_space = None if (var_space and (name in var_space.injected_vars)): raise AttributeError('delegate variable lookup to __getattr__') return super().__getattribute__(name)
def __getattr__(cls, name): 'Backup attribute lookup method into custom namespaces.\n\n Some ReFrame built-in types are stored under their own sub-namespaces.\n This method will perform an attribute lookup on these sub-namespaces\n if a call to the default :func:`__getattribute__` method fails to\n retrieve the requested class attribute.\n ' try: var_space = super().__getattribute__('_rfm_var_space') return var_space.vars[name] except AttributeError: 'Catch early access attempt to the variable space.' except KeyError: 'Requested name not in variable space.' try: param_space = super().__getattribute__('_rfm_param_space') return param_space.params[name] except AttributeError: 'Catch early access attempt to the parameter space.' except KeyError: 'Requested name not in parameter space.' raise AttributeError(f'class {cls.__qualname__!r} has no attribute {name!r}') from None
-5,505,047,425,900,703,000
Backup attribute lookup method into custom namespaces. Some ReFrame built-in types are stored under their own sub-namespaces. This method will perform an attribute lookup on these sub-namespaces if a call to the default :func:`__getattribute__` method fails to retrieve the requested class attribute.
reframe/core/meta.py
__getattr__
ChristopherBignamini/reframe
python
def __getattr__(cls, name): 'Backup attribute lookup method into custom namespaces.\n\n Some ReFrame built-in types are stored under their own sub-namespaces.\n This method will perform an attribute lookup on these sub-namespaces\n if a call to the default :func:`__getattribute__` method fails to\n retrieve the requested class attribute.\n ' try: var_space = super().__getattribute__('_rfm_var_space') return var_space.vars[name] except AttributeError: 'Catch early access attempt to the variable space.' except KeyError: 'Requested name not in variable space.' try: param_space = super().__getattribute__('_rfm_param_space') return param_space.params[name] except AttributeError: 'Catch early access attempt to the parameter space.' except KeyError: 'Requested name not in parameter space.' raise AttributeError(f'class {cls.__qualname__!r} has no attribute {name!r}') from None
def setvar(cls, name, value): "Set the value of a variable.\n\n :param name: The name of the variable.\n :param value: The value of the variable.\n\n :returns: :class:`True` if the variable was set.\n A variable will *not* be set, if it does not exist or when an\n attempt is made to set it with its underlying descriptor.\n This happens during the variable injection time and it should be\n delegated to the class' :func:`__setattr__` method.\n\n :raises ReframeSyntaxError: If an attempt is made to override a\n variable with a descriptor other than its underlying one.\n\n " try: var_space = super().__getattribute__('_rfm_var_space') if (name in var_space): if (not hasattr(value, '__get__')): var_space[name].define(value) return True elif (var_space[name].field is not value): desc = '.'.join([cls.__qualname__, name]) raise ReframeSyntaxError(f'cannot override variable descriptor {desc!r}') else: return False except AttributeError: 'Catch early access attempt to the variable space.' return False
7,055,573,840,113,530,000
Set the value of a variable. :param name: The name of the variable. :param value: The value of the variable. :returns: :class:`True` if the variable was set. A variable will *not* be set if it does not exist or when an attempt is made to set it with its underlying descriptor. This happens during the variable injection time and it should be delegated to the class' :func:`__setattr__` method. :raises ReframeSyntaxError: If an attempt is made to override a variable with a descriptor other than its underlying one.
reframe/core/meta.py
setvar
ChristopherBignamini/reframe
python
def setvar(cls, name, value): "Set the value of a variable.\n\n :param name: The name of the variable.\n :param value: The value of the variable.\n\n :returns: :class:`True` if the variable was set.\n A variable will *not* be set if it does not exist or when an\n attempt is made to set it with its underlying descriptor.\n This happens during the variable injection time and it should be\n delegated to the class' :func:`__setattr__` method.\n\n :raises ReframeSyntaxError: If an attempt is made to override a\n variable with a descriptor other than its underlying one.\n\n " try: var_space = super().__getattribute__('_rfm_var_space') if (name in var_space): if (not hasattr(value, '__get__')): var_space[name].define(value) return True elif (var_space[name].field is not value): desc = '.'.join([cls.__qualname__, name]) raise ReframeSyntaxError(f'cannot override variable descriptor {desc!r}') else: return False except AttributeError: 'Catch early access attempt to the variable space.' return False
def __setattr__(cls, name, value): "Handle the special treatment required for variables and parameters.\n\n A variable's default value can be updated when accessed as a regular\n class attribute. This behavior does not apply when the assigned value\n is a descriptor object. In that case, the task of setting the value is\n delegated to the base :func:`__setattr__` (this is to comply with\n standard Python behavior). However, since the variables are already\n descriptors which are injected during class instantiation, we disallow\n any attempt to override this descriptor (since it would be silently\n re-overridden in any case).\n\n Altering the value of a parameter when accessed as a class attribute\n is not allowed. This would break the parameter space internals.\n " if cls.setvar(name, value): return try: param_space = super().__getattribute__('_rfm_param_space') if (name in param_space.params): raise ReframeSyntaxError(f'cannot override parameter {name!r}') except AttributeError: 'Catch early access attempt to the parameter space.' super().__setattr__(name, value)
-5,539,171,203,802,578,000
Handle the special treatment required for variables and parameters. A variable's default value can be updated when accessed as a regular class attribute. This behavior does not apply when the assigned value is a descriptor object. In that case, the task of setting the value is delegated to the base :func:`__setattr__` (this is to comply with standard Python behavior). However, since the variables are already descriptors which are injected during class instantiation, we disallow any attempt to override this descriptor (since it would be silently re-overridden in any case). Altering the value of a parameter when accessed as a class attribute is not allowed. This would break the parameter space internals.
reframe/core/meta.py
__setattr__
ChristopherBignamini/reframe
python
def __setattr__(cls, name, value): "Handle the special treatment required for variables and parameters.\n\n A variable's default value can be updated when accessed as a regular\n class attribute. This behavior does not apply when the assigned value\n is a descriptor object. In that case, the task of setting the value is\n delegated to the base :func:`__setattr__` (this is to comply with\n standard Python behavior). However, since the variables are already\n descriptors which are injected during class instantiation, we disallow\n any attempt to override this descriptor (since it would be silently\n re-overridden in any case).\n\n Altering the value of a parameter when accessed as a class attribute\n is not allowed. This would break the parameter space internals.\n " if cls.setvar(name, value): return try: param_space = super().__getattribute__('_rfm_param_space') if (name in param_space.params): raise ReframeSyntaxError(f'cannot override parameter {name!r}') except AttributeError: 'Catch early access attempt to the parameter space.' super().__setattr__(name, value)
@property def param_space(cls): ' Make the parameter space available as read-only.' return cls._rfm_param_space
942,503,294,481,041,700
Make the parameter space available as read-only.
reframe/core/meta.py
param_space
ChristopherBignamini/reframe
python
@property def param_space(cls): ' ' return cls._rfm_param_space
def is_abstract(cls): 'Check if the class is an abstract test.\n\n This is the case when some parameters are undefined, which results in\n the length of the parameter space being 0.\n\n :return: bool indicating whether the test has undefined parameters.\n\n :meta private:\n ' return (len(cls.param_space) == 0)
9,123,215,739,712,699,000
Check if the class is an abstract test. This is the case when some parameters are undefined, which results in the length of the parameter space being 0. :return: bool indicating whether the test has undefined parameters. :meta private:
reframe/core/meta.py
is_abstract
ChristopherBignamini/reframe
python
def is_abstract(cls): 'Check if the class is an abstract test.\n\n This is the case when some parameters are undefined, which results in\n the length of the parameter space being 0.\n\n :return: bool indicating whether the test has undefined parameters.\n\n :meta private:\n ' return (len(cls.param_space) == 0)
def __getitem__(self, key): 'Expose and control access to the local namespaces.\n\n Variables may only be retrieved if their value has been previously\n set. Accessing a parameter in the class body is disallowed (the\n actual test parameter is set during the class instantiation).\n ' try: return super().__getitem__(key) except KeyError as err: try: return self['_rfm_local_var_space'][key] except KeyError: if (key in self['_rfm_local_param_space']): raise ReframeSyntaxError('accessing a test parameter from the class body is disallowed') from None else: for b in self['_rfm_bases']: if (key in b._rfm_var_space): v = b._rfm_var_space[key].default_value self._namespace[key] = v return self._namespace[key] raise err from None
431,080,028,754,792,600
Expose and control access to the local namespaces. Variables may only be retrieved if their value has been previously set. Accessing a parameter in the class body is disallowed (the actual test parameter is set during the class instantiation).
reframe/core/meta.py
__getitem__
ChristopherBignamini/reframe
python
def __getitem__(self, key): 'Expose and control access to the local namespaces.\n\n Variables may only be retrieved if their value has been previously\n set. Accessing a parameter in the class body is disallowed (the\n actual test parameter is set during the class instantiation).\n ' try: return super().__getitem__(key) except KeyError as err: try: return self['_rfm_local_var_space'][key] except KeyError: if (key in self['_rfm_local_param_space']): raise ReframeSyntaxError('accessing a test parameter from the class body is disallowed') from None else: for b in self['_rfm_bases']: if (key in b._rfm_var_space): v = b._rfm_var_space[key].default_value self._namespace[key] = v return self._namespace[key] raise err from None
def reset(self, key): 'Reset an item to rerun it through the __setitem__ logic.' self[key] = self[key]
710,586,851,198,783,600
Reset an item to rerun it through the __setitem__ logic.
reframe/core/meta.py
reset
ChristopherBignamini/reframe
python
def reset(self, key): self[key] = self[key]
def bind(fn, name=None): 'Directive to bind a free function to a class.\n\n See online docs for more information.\n\n .. note::\n Functions bound using this directive must be re-inspected after\n the class body execution has completed. This directive attaches\n the external method into the class namespace and returns the\n associated instance of the :class:`WrappedFunction`. However,\n this instance may be further modified by other ReFrame builtins\n such as :func:`run_before`, :func:`run_after`, :func:`final` and\n so on after it was added to the namespace, which would bypass\n the logic implemented in the :func:`__setitem__` method from the\n :class:`MetaNamespace` class. Hence, we track the items set by\n this directive in the ``_rfm_ext_bound`` set, so they can be\n later re-inspected.\n ' inst = metacls.WrappedFunction(fn, name) namespace[inst.__name__] = inst namespace['_rfm_ext_bound'].add(inst.__name__) return inst
-3,430,508,932,960,869,000
Directive to bind a free function to a class. See online docs for more information. .. note:: Functions bound using this directive must be re-inspected after the class body execution has completed. This directive attaches the external method into the class namespace and returns the associated instance of the :class:`WrappedFunction`. However, this instance may be further modified by other ReFrame builtins such as :func:`run_before`, :func:`run_after`, :func:`final` and so on after it was added to the namespace, which would bypass the logic implemented in the :func:`__setitem__` method from the :class:`MetaNamespace` class. Hence, we track the items set by this directive in the ``_rfm_ext_bound`` set, so they can be later re-inspected.
reframe/core/meta.py
bind
ChristopherBignamini/reframe
python
def bind(fn, name=None): 'Directive to bind a free function to a class.\n\n See online docs for more information.\n\n .. note::\n Functions bound using this directive must be re-inspected after\n the class body execution has completed. This directive attaches\n the external method into the class namespace and returns the\n associated instance of the :class:`WrappedFunction`. However,\n this instance may be further modified by other ReFrame builtins\n such as :func:`run_before`, :func:`run_after`, :func:`final` and\n so on after it was added to the namespace, which would bypass\n the logic implemented in the :func:`__setitem__` method from the\n :class:`MetaNamespace` class. Hence, we track the items set by\n this directive in the ``_rfm_ext_bound`` set, so they can be\n later re-inspected.\n ' inst = metacls.WrappedFunction(fn, name) namespace[inst.__name__] = inst namespace['_rfm_ext_bound'].add(inst.__name__) return inst
def final(fn): 'Indicate that a function is final and cannot be overridden.' fn._rfm_final = True return fn
-7,045,401,230,001,315,000
Indicate that a function is final and cannot be overridden.
reframe/core/meta.py
final
ChristopherBignamini/reframe
python
def final(fn): fn._rfm_final = True return fn
def run_before(stage): 'Decorator for attaching a test method to a given stage.\n\n See online docs for more information.\n ' return hooks.attach_to(('pre_' + stage))
7,346,210,348,767,370,000
Decorator for attaching a test method to a given stage. See online docs for more information.
reframe/core/meta.py
run_before
ChristopherBignamini/reframe
python
def run_before(stage): 'Decorator for attaching a test method to a given stage.\n\n See online docs for more information.\n ' return hooks.attach_to(('pre_' + stage))
def run_after(stage): 'Decorator for attaching a test method to a given stage.\n\n See online docs for more information.\n ' return hooks.attach_to(('post_' + stage))
5,219,522,190,465,508,000
Decorator for attaching a test method to a given stage. See online docs for more information.
reframe/core/meta.py
run_after
ChristopherBignamini/reframe
python
def run_after(stage): 'Decorator for attaching a test method to a given stage.\n\n See online docs for more information.\n ' return hooks.attach_to(('post_' + stage))
def sanity_function(fn): "Mark a function as the test's sanity function.\n\n Decorated functions must be unary and they will be converted into\n deferred expressions.\n " _def_fn = deferrable(fn) setattr(_def_fn, '_rfm_sanity_fn', True) return _def_fn
800,883,873,856,133,400
Mark a function as the test's sanity function. Decorated functions must be unary and they will be converted into deferred expressions.
reframe/core/meta.py
sanity_function
ChristopherBignamini/reframe
python
def sanity_function(fn): "Mark a function as the test's sanity function.\n\n Decorated functions must be unary and they will be converted into\n deferred expressions.\n " _def_fn = deferrable(fn) setattr(_def_fn, '_rfm_sanity_fn', True) return _def_fn
def performance_function(units, *, perf_key=None): 'Decorate a function to extract a performance variable.\n\n The ``units`` argument indicates the units of the performance\n variable to be extracted.\n The ``perf_key`` optional arg will be used as the name of the\n performance variable. If not provided, the function name will\n be used as the performance variable name.\n ' if (not isinstance(units, str)): raise TypeError('performance units must be a string') if (perf_key and (not isinstance(perf_key, str))): raise TypeError("'perf_key' must be a string") def _deco_wrapper(func): if (not utils.is_trivially_callable(func, non_def_args=1)): raise TypeError(f'performance function {func.__name__!r} has more than one argument without a default value') @functools.wraps(func) def _perf_fn(*args, **kwargs): return _DeferredPerformanceExpression(func, units, *args, **kwargs) _perf_key = (perf_key if perf_key else func.__name__) setattr(_perf_fn, '_rfm_perf_key', _perf_key) return _perf_fn return _deco_wrapper
-1,649,085,680,087,831,600
Decorate a function to extract a performance variable. The ``units`` argument indicates the units of the performance variable to be extracted. The ``perf_key`` optional arg will be used as the name of the performance variable. If not provided, the function name will be used as the performance variable name.
reframe/core/meta.py
performance_function
ChristopherBignamini/reframe
python
def performance_function(units, *, perf_key=None): 'Decorate a function to extract a performance variable.\n\n The ``units`` argument indicates the units of the performance\n variable to be extracted.\n The ``perf_key`` optional arg will be used as the name of the\n performance variable. If not provided, the function name will\n be used as the performance variable name.\n ' if (not isinstance(units, str)): raise TypeError('performance units must be a string') if (perf_key and (not isinstance(perf_key, str))): raise TypeError("'perf_key' must be a string") def _deco_wrapper(func): if (not utils.is_trivially_callable(func, non_def_args=1)): raise TypeError(f'performance function {func.__name__!r} has more than one argument without a default value') @functools.wraps(func) def _perf_fn(*args, **kwargs): return _DeferredPerformanceExpression(func, units, *args, **kwargs) _perf_key = (perf_key if perf_key else func.__name__) setattr(_perf_fn, '_rfm_perf_key', _perf_key) return _perf_fn return _deco_wrapper
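A hedged usage sketch of the decorator above; the units string, perf_key, and constant return value are illustrative (a real test would extract the figure from its output). Note the decorated function takes exactly one argument without a default, as the trivially-callable check requires:
@performance_function('GB/s', perf_key='copy_bw')
def copy_bandwidth(obj):
    return 42.0  # placeholder; normally parsed from the test's stdout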
def plot_ei(self, LAXIS, bconv, tconv, xbl, xbr, ybu, ybd, ilg): 'Plot mean Favrian internal energy stratification in the model' if ((self.ig != 1) and (self.ig != 2)): print(('ERROR(InternalEnergyEquation.py):' + self.errorGeometry(self.ig))) sys.exit() grd1 = self.xzn0 plt1 = self.fht_ei plt.figure(figsize=(7, 6)) plt.gca().yaxis.get_major_formatter().set_powerlimits((0, 0)) to_plot = [plt1] self.set_plt_axis(LAXIS, xbl, xbr, ybu, ybd, to_plot) plt.title('internal energy') plt.plot(grd1, plt1, color='brown', label='$\\widetilde{\\varepsilon}_I$') plt.axvline(bconv, linestyle='--', linewidth=0.7, color='k') plt.axvline(tconv, linestyle='--', linewidth=0.7, color='k') if (self.ig == 1): setxlabel = 'x (cm)' setylabel = '$\\widetilde{\\varepsilon}_I$ (erg g$^{-1}$)' plt.xlabel(setxlabel) plt.ylabel(setylabel) elif (self.ig == 2): setxlabel = 'r (cm)' setylabel = '$\\widetilde{\\varepsilon}_I$ (erg g$^{-1}$)' plt.xlabel(setxlabel) plt.ylabel(setylabel) plt.legend(loc=ilg, prop={'size': 18}) plt.show(block=False) if (self.fext == 'png'): plt.savefig((('RESULTS/' + self.data_prefix) + 'mean_ei.png')) elif (self.fext == 'eps'): plt.savefig((('RESULTS/' + self.data_prefix) + 'mean_ei.eps'))
4,726,730,710,213,819,000
Plot mean Favrian internal energy stratification in the model
EQUATIONS/InternalEnergyEquation.py
plot_ei
mmicromegas/ransX
python
def plot_ei(self, LAXIS, bconv, tconv, xbl, xbr, ybu, ybd, ilg): if ((self.ig != 1) and (self.ig != 2)): print(('ERROR(InternalEnergyEquation.py):' + self.errorGeometry(self.ig))) sys.exit() grd1 = self.xzn0 plt1 = self.fht_ei plt.figure(figsize=(7, 6)) plt.gca().yaxis.get_major_formatter().set_powerlimits((0, 0)) to_plot = [plt1] self.set_plt_axis(LAXIS, xbl, xbr, ybu, ybd, to_plot) plt.title('internal energy') plt.plot(grd1, plt1, color='brown', label='$\\widetilde{\\varepsilon}_I$') plt.axvline(bconv, linestyle='--', linewidth=0.7, color='k') plt.axvline(tconv, linestyle='--', linewidth=0.7, color='k') if (self.ig == 1): setxlabel = 'x (cm)' setylabel = '$\\widetilde{\\varepsilon}_I$ (erg g$^{-1}$)' plt.xlabel(setxlabel) plt.ylabel(setylabel) elif (self.ig == 2): setxlabel = 'r (cm)' setylabel = '$\\widetilde{\\varepsilon}_I$ (erg g$^{-1}$)' plt.xlabel(setxlabel) plt.ylabel(setylabel) plt.legend(loc=ilg, prop={'size': 18}) plt.show(block=False) if (self.fext == 'png'): plt.savefig((('RESULTS/' + self.data_prefix) + 'mean_ei.png')) elif (self.fext == 'eps'): plt.savefig((('RESULTS/' + self.data_prefix) + 'mean_ei.eps'))
def plot_ei_equation(self, LAXIS, bconv, tconv, xbl, xbr, ybu, ybd, ilg): 'Plot internal energy equation in the model' if ((self.ig != 1) and (self.ig != 2)): print(('ERROR(InternalEnergyEquation.py):' + self.errorGeometry(self.ig))) sys.exit() grd1 = self.xzn0 lhs0 = self.minus_dt_dd_fht_ei lhs1 = self.minus_div_dd_fht_ux_fht_ei rhs0 = self.minus_div_fei rhs1 = self.minus_div_ftt rhs2 = self.minus_pp_div_ux rhs3 = self.minus_eht_ppf_df rhs4 = self.plus_dd_fht_enuc rhs5 = self.plus_disstke res = self.minus_resEiEquation plt.figure(figsize=(7, 6)) plt.gca().yaxis.get_major_formatter().set_powerlimits((0, 0)) to_plot = [lhs0, lhs1, rhs0, rhs1, rhs2, rhs3, rhs4, rhs5, res] self.set_plt_axis(LAXIS, xbl, xbr, ybu, ybd, to_plot) plt.title('internal energy equation') if (self.ig == 1): plt.plot(grd1, lhs0, color='#FF6EB4', label='$-\\partial_t (\\overline{\\rho} \\widetilde{\\epsilon}_I )$') plt.plot(grd1, lhs1, color='k', label='$-\\nabla_x (\\overline{\\rho}\\widetilde{u}_x \\widetilde{\\epsilon}_I$)') plt.plot(grd1, rhs0, color='#FF8C00', label='$-\\nabla_x f_I $') plt.plot(grd1, rhs1, color='c', label='$-\\nabla_x f_T$ (not incl.)') plt.plot(grd1, rhs2, color='#802A2A', label='$-\\bar{P} \\bar{d}$') plt.plot(grd1, rhs3, color='r', label='$-W_P$') plt.plot(grd1, rhs4, color='b', label='$+\\overline{\\rho}\\widetilde{\\epsilon}_{nuc}$') plt.plot(grd1, rhs5, color='m', label='$+\\varepsilon_k$') plt.plot(grd1, res, color='k', linestyle='--', label='res $\\sim N_\\epsilon$') elif (self.ig == 2): plt.plot(grd1, lhs0, color='#FF6EB4', label='$-\\partial_t (\\overline{\\rho} \\widetilde{\\epsilon}_I )$') plt.plot(grd1, lhs1, color='k', label='$-\\nabla_r (\\overline{\\rho}\\widetilde{u}_r \\widetilde{\\epsilon}_I$)') plt.plot(grd1, rhs0, color='#FF8C00', label='$-\\nabla_r f_I $') plt.plot(grd1, rhs1, color='c', label='$-\\nabla_r f_T$ (not incl.)') plt.plot(grd1, rhs2, color='#802A2A', label='$-\\bar{P} \\bar{d}$') plt.plot(grd1, rhs3, color='r', label='$-W_P$') plt.plot(grd1, rhs4, color='b', label='$+\\overline{\\rho}\\widetilde{\\epsilon}_{nuc}$') plt.plot(grd1, rhs5, color='m', label='$+\\varepsilon_k$') plt.plot(grd1, res, color='k', linestyle='--', label='res $\\sim N_\\epsilon$') plt.axvline(bconv, linestyle='--', linewidth=0.7, color='k') plt.axvline(tconv, linestyle='--', linewidth=0.7, color='k') if (self.ig == 1): setxlabel = 'x (cm)' setylabel = 'erg cm$^{-3}$ s$^{-1}$' plt.xlabel(setxlabel) plt.ylabel(setylabel) elif (self.ig == 2): setxlabel = 'r (cm)' setylabel = 'erg cm$^{-3}$ s$^{-1}$' plt.xlabel(setxlabel) plt.ylabel(setylabel) plt.legend(loc=ilg, prop={'size': 10}, ncol=2) plt.show(block=False) if (self.fext == 'png'): plt.savefig((('RESULTS/' + self.data_prefix) + 'ei_eq.png')) elif (self.fext == 'eps'): plt.savefig((('RESULTS/' + self.data_prefix) + 'ei_eq.eps'))
6,408,376,364,074,986,000
Plot internal energy equation in the model
EQUATIONS/InternalEnergyEquation.py
plot_ei_equation
mmicromegas/ransX
python
def plot_ei_equation(self, LAXIS, bconv, tconv, xbl, xbr, ybu, ybd, ilg): if ((self.ig != 1) and (self.ig != 2)): print(('ERROR(InternalEnergyEquation.py):' + self.errorGeometry(self.ig))) sys.exit() grd1 = self.xzn0 lhs0 = self.minus_dt_dd_fht_ei lhs1 = self.minus_div_dd_fht_ux_fht_ei rhs0 = self.minus_div_fei rhs1 = self.minus_div_ftt rhs2 = self.minus_pp_div_ux rhs3 = self.minus_eht_ppf_df rhs4 = self.plus_dd_fht_enuc rhs5 = self.plus_disstke res = self.minus_resEiEquation plt.figure(figsize=(7, 6)) plt.gca().yaxis.get_major_formatter().set_powerlimits((0, 0)) to_plot = [lhs0, lhs1, rhs0, rhs1, rhs2, rhs3, rhs4, rhs5, res] self.set_plt_axis(LAXIS, xbl, xbr, ybu, ybd, to_plot) plt.title('internal energy equation') if (self.ig == 1): plt.plot(grd1, lhs0, color='#FF6EB4', label='$-\\partial_t (\\overline{\\rho} \\widetilde{\\epsilon}_I )$') plt.plot(grd1, lhs1, color='k', label='$-\\nabla_x (\\overline{\\rho}\\widetilde{u}_x \\widetilde{\\epsilon}_I$)') plt.plot(grd1, rhs0, color='#FF8C00', label='$-\\nabla_x f_I $') plt.plot(grd1, rhs1, color='c', label='$-\\nabla_x f_T$ (not incl.)') plt.plot(grd1, rhs2, color='#802A2A', label='$-\\bar{P} \\bar{d}$') plt.plot(grd1, rhs3, color='r', label='$-W_P$') plt.plot(grd1, rhs4, color='b', label='$+\\overline{\\rho}\\widetilde{\\epsilon}_{nuc}$') plt.plot(grd1, rhs5, color='m', label='$+\\varepsilon_k$') plt.plot(grd1, res, color='k', linestyle='--', label='res $\\sim N_\\epsilon$') elif (self.ig == 2): plt.plot(grd1, lhs0, color='#FF6EB4', label='$-\\partial_t (\\overline{\\rho} \\widetilde{\\epsilon}_I )$') plt.plot(grd1, lhs1, color='k', label='$-\\nabla_r (\\overline{\\rho}\\widetilde{u}_r \\widetilde{\\epsilon}_I$)') plt.plot(grd1, rhs0, color='#FF8C00', label='$-\\nabla_r f_I $') plt.plot(grd1, rhs1, color='c', label='$-\\nabla_r f_T$ (not incl.)') plt.plot(grd1, rhs2, color='#802A2A', label='$-\\bar{P} \\bar{d}$') plt.plot(grd1, rhs3, color='r', label='$-W_P$') plt.plot(grd1, rhs4, color='b', label='$+\\overline{\\rho}\\widetilde{\\epsilon}_{nuc}$') plt.plot(grd1, rhs5, color='m', label='$+\\varepsilon_k$') plt.plot(grd1, res, color='k', linestyle='--', label='res $\\sim N_\\epsilon$') plt.axvline(bconv, linestyle='--', linewidth=0.7, color='k') plt.axvline(tconv, linestyle='--', linewidth=0.7, color='k') if (self.ig == 1): setxlabel = 'x (cm)' setylabel = 'erg cm$^{-3}$ s$^{-1}$' plt.xlabel(setxlabel) plt.ylabel(setylabel) elif (self.ig == 2): setxlabel = 'r (cm)' setylabel = 'erg cm$^{-3}$ s$^{-1}$' plt.xlabel(setxlabel) plt.ylabel(setylabel) plt.legend(loc=ilg, prop={'size': 10}, ncol=2) plt.show(block=False) if (self.fext == 'png'): plt.savefig((('RESULTS/' + self.data_prefix) + 'ei_eq.png')) elif (self.fext == 'eps'): plt.savefig((('RESULTS/' + self.data_prefix) + 'ei_eq.eps'))
@classmethod def is_requested_microversion_compatible(cls, max_version): "Check the compatibility of selected request microversion\n\n This method will check if selected request microversion\n (cls.request_microversion) for test is compatible with respect\n to 'max_version'. Compatible means if selected request microversion\n is in the range(<=) of 'max_version'.\n\n :param max_version: maximum microversion to compare for compatibility.\n Example: '2.30'\n :returns: True if selected request microversion is compatible with\n 'max_version'. False in other case.\n " try: req_version_obj = api_version_request.APIVersionRequest(cls.request_microversion) except AttributeError: request_microversion = api_version_utils.select_request_microversion(cls.min_microversion, CONF.compute.min_microversion) req_version_obj = api_version_request.APIVersionRequest(request_microversion) max_version_obj = api_version_request.APIVersionRequest(max_version) return (req_version_obj <= max_version_obj)
-9,064,361,423,180,570,000
Check the compatibility of the selected request microversion This method will check if the selected request microversion (cls.request_microversion) for the test is compatible with 'max_version'. Compatible means the selected request microversion is less than or equal to 'max_version'. :param max_version: maximum microversion to compare for compatibility. Example: '2.30' :returns: True if the selected request microversion is compatible with 'max_version', False otherwise.
tempest/api/compute/base.py
is_requested_microversion_compatible
AurelienLourot/tempest
python
@classmethod def is_requested_microversion_compatible(cls, max_version): "Check the compatibility of the selected request microversion\n\n This method will check if the selected request microversion\n (cls.request_microversion) for the test is compatible with\n 'max_version'. Compatible means the selected request microversion\n is less than or equal to 'max_version'.\n\n :param max_version: maximum microversion to compare for compatibility.\n Example: '2.30'\n :returns: True if the selected request microversion is compatible with\n 'max_version', False otherwise.\n " try: req_version_obj = api_version_request.APIVersionRequest(cls.request_microversion) except AttributeError: request_microversion = api_version_utils.select_request_microversion(cls.min_microversion, CONF.compute.min_microversion) req_version_obj = api_version_request.APIVersionRequest(request_microversion) max_version_obj = api_version_request.APIVersionRequest(max_version) return (req_version_obj <= max_version_obj)
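The compatibility check reduces to an ordered comparison of (major, minor) version pairs; below is a minimal stand-in for APIVersionRequest, for illustration only:
def _ver(s):
    major, minor = s.split('.')
    return (int(major), int(minor))

assert _ver('2.26') <= _ver('2.30')        # requested 2.26 fits under max 2.30
assert not (_ver('2.37') <= _ver('2.30'))  # requested 2.37 does not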
@classmethod def server_check_teardown(cls): "Checks whether the shared server is clean enough for a subsequent test.\n\n The method will delete the server when it's dirty.\n The setUp method is responsible for creating a new server.\n Exceptions raised in tearDown methods fail the test case.\n This method is supposed to be used only by tearDown methods, when\n the shared server_id is stored in the server_id of the class.\n " if (getattr(cls, 'server_id', None) is not None): try: waiters.wait_for_server_status(cls.servers_client, cls.server_id, 'ACTIVE') except Exception as exc: LOG.exception(exc) cls.servers_client.delete_server(cls.server_id) waiters.wait_for_server_termination(cls.servers_client, cls.server_id) cls.server_id = None raise
-674,806,159,774,933,900
Checks whether the shared server is clean enough for a subsequent test. The method will delete the server when it's dirty. The setUp method is responsible for creating a new server. Exceptions raised in tearDown methods fail the test case. This method is supposed to be used only by tearDown methods, when the shared server_id is stored in the server_id of the class.
tempest/api/compute/base.py
server_check_teardown
AurelienLourot/tempest
python
@classmethod def server_check_teardown(cls): "Checks whether the shared server is clean enough for a subsequent test.\n\n The method will delete the server when it's dirty.\n The setUp method is responsible for creating a new server.\n Exceptions raised in tearDown methods fail the test case.\n This method is supposed to be used only by tearDown methods, when\n the shared server_id is stored in the server_id of the class.\n " if (getattr(cls, 'server_id', None) is not None): try: waiters.wait_for_server_status(cls.servers_client, cls.server_id, 'ACTIVE') except Exception as exc: LOG.exception(exc) cls.servers_client.delete_server(cls.server_id) waiters.wait_for_server_termination(cls.servers_client, cls.server_id) cls.server_id = None raise
@classmethod def create_test_server(cls, validatable=False, volume_backed=False, validation_resources=None, clients=None, **kwargs): 'Wrapper utility that returns a test server.\n\n This wrapper utility calls the common create test server and\n returns a test server. The purpose of this wrapper is to minimize\n the impact on the code of the tests already using this\n function.\n\n :param validatable: Whether the server will be pingable or sshable.\n :param volume_backed: Whether the instance is volume backed or not.\n :param validation_resources: Dictionary of validation resources as\n returned by `get_class_validation_resources`.\n :param clients: Client manager, defaults to os_primary.\n :param kwargs: Extra arguments are passed down to the\n `compute.create_test_server` call.\n ' if ('name' not in kwargs): kwargs['name'] = data_utils.rand_name((cls.__name__ + '-server')) request_version = api_version_request.APIVersionRequest(cls.request_microversion) v2_37_version = api_version_request.APIVersionRequest('2.37') tenant_network = cls.get_tenant_network() if ((request_version >= v2_37_version) and ('networks' not in kwargs) and (not tenant_network)): kwargs['networks'] = 'none' if (clients is None): clients = cls.os_primary (body, servers) = compute.create_test_server(clients, validatable, validation_resources=validation_resources, tenant_network=tenant_network, volume_backed=volume_backed, **kwargs) for server in servers: cls.addClassResourceCleanup(waiters.wait_for_server_termination, clients.servers_client, server['id']) for server in servers: cls.addClassResourceCleanup(test_utils.call_and_ignore_notfound_exc, clients.servers_client.delete_server, server['id']) return body
2,757,388,231,244,504,000
Wrapper utility that returns a test server. This wrapper utility calls the common create test server and returns a test server. The purpose of this wrapper is to minimize the impact on the code of the tests already using this function. :param validatable: Whether the server will be pingable or sshable. :param volume_backed: Whether the instance is volume backed or not. :param validation_resources: Dictionary of validation resources as returned by `get_class_validation_resources`. :param clients: Client manager, defaults to os_primary. :param kwargs: Extra arguments are passed down to the `compute.create_test_server` call.
tempest/api/compute/base.py
create_test_server
AurelienLourot/tempest
python
@classmethod def create_test_server(cls, validatable=False, volume_backed=False, validation_resources=None, clients=None, **kwargs): if ('name' not in kwargs): kwargs['name'] = data_utils.rand_name((cls.__name__ + '-server')) request_version = api_version_request.APIVersionRequest(cls.request_microversion) v2_37_version = api_version_request.APIVersionRequest('2.37') tenant_network = cls.get_tenant_network() if ((request_version >= v2_37_version) and ('networks' not in kwargs) and (not tenant_network)): kwargs['networks'] = 'none' if (clients is None): clients = cls.os_primary (body, servers) = compute.create_test_server(clients, validatable, validation_resources=validation_resources, tenant_network=tenant_network, volume_backed=volume_backed, **kwargs) for server in servers: cls.addClassResourceCleanup(waiters.wait_for_server_termination, clients.servers_client, server['id']) for server in servers: cls.addClassResourceCleanup(test_utils.call_and_ignore_notfound_exc, clients.servers_client.delete_server, server['id']) return body
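A minimal usage sketch for the wrapper above, assuming a tempest deployment and a test class derived from the compute base class in this file (the test class and method names are hypothetical; `wait_until` is one of the kwargs passed through to `compute.create_test_server`):

# Hypothetical smoke test built on the create_test_server wrapper above.
from tempest.api.compute import base


class ServersSmokeTest(base.BaseV2ComputeTest):

    def test_server_becomes_active(self):
        # A name is generated from the class name when omitted.
        server = self.create_test_server(wait_until='ACTIVE')
        body = self.servers_client.show_server(server['id'])['server']
        self.assertEqual('ACTIVE', body['status'])

Cleanup is registered by the wrapper itself via addClassResourceCleanup, so the test does not need to delete the server.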
def wait_for(self, condition): 'Repeatedly calls condition() until a timeout.' start_time = int(time.time()) while True: try: condition() except Exception: pass else: return if ((int(time.time()) - start_time) >= self.build_timeout): condition() return time.sleep(self.build_interval)
231,719,249,102,957,250
Repeatedly calls condition() until a timeout.
tempest/api/compute/base.py
wait_for
AurelienLourot/tempest
python
def wait_for(self, condition): start_time = int(time.time()) while True: try: condition() except Exception: pass else: return if ((int(time.time()) - start_time) >= self.build_timeout): condition() return time.sleep(self.build_interval)
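The polling loop above retries a raising callable until it stops raising, with one final attempt at the timeout so the last exception propagates. A standalone sketch of the same pattern (the attribute values are hypothetical stand-ins for the test's build_timeout/build_interval configuration):

import time


class _Poller:
    build_timeout = 5   # seconds until the final attempt (made-up value)
    build_interval = 1  # seconds between attempts (made-up value)

    def wait_for(self, condition):
        # Same structure as the base-class method above.
        start_time = int(time.time())
        while True:
            try:
                condition()
            except Exception:
                pass
            else:
                return
            if int(time.time()) - start_time >= self.build_timeout:
                condition()  # let the final failure propagate
                return
            time.sleep(self.build_interval)


deadline = time.time() + 2


def _ready():
    # Raises AssertionError until the deadline passes.
    assert time.time() >= deadline


_Poller().wait_for(_ready)
print('condition satisfied')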
@classmethod def create_image_from_server(cls, server_id, **kwargs): 'Wrapper utility that returns an image created from the server.\n\n If compute microversion >= 2.36, the returned image response will\n be from the image service API rather than the compute image proxy API.\n ' name = kwargs.pop('name', data_utils.rand_name((cls.__name__ + '-image'))) wait_until = kwargs.pop('wait_until', None) wait_for_server = kwargs.pop('wait_for_server', True) image = cls.compute_images_client.create_image(server_id, name=name, **kwargs) if api_version_utils.compare_version_header_to_response('OpenStack-API-Version', 'compute 2.45', image.response, 'lt'): image_id = image['image_id'] else: image_id = data_utils.parse_image_id(image.response['location']) if (not cls.is_requested_microversion_compatible('2.35')): client = cls.images_client else: client = cls.compute_images_client cls.addClassResourceCleanup(test_utils.call_and_ignore_notfound_exc, client.delete_image, image_id) if (wait_until is not None): try: wait_until = wait_until.upper() if (not cls.is_requested_microversion_compatible('2.35')): wait_until = wait_until.lower() waiters.wait_for_image_status(client, image_id, wait_until) except lib_exc.NotFound: if (wait_until.upper() == 'ACTIVE'): server = cls.servers_client.show_server(server_id)['server'] if ('fault' in server): raise exceptions.SnapshotNotFoundException(server['fault'], image_id=image_id) else: raise exceptions.SnapshotNotFoundException(image_id=image_id) else: raise image = client.show_image(image_id) if ('image' in image): image = image['image'] if (wait_until.upper() == 'ACTIVE'): if wait_for_server: waiters.wait_for_server_status(cls.servers_client, server_id, 'ACTIVE') return image
-1,321,252,166,117,362,000
Wrapper utility that returns an image created from the server. If compute microversion >= 2.36, the returned image response will be from the image service API rather than the compute image proxy API.
tempest/api/compute/base.py
create_image_from_server
AurelienLourot/tempest
python
@classmethod def create_image_from_server(cls, server_id, **kwargs): name = kwargs.pop('name', data_utils.rand_name((cls.__name__ + '-image'))) wait_until = kwargs.pop('wait_until', None) wait_for_server = kwargs.pop('wait_for_server', True) image = cls.compute_images_client.create_image(server_id, name=name, **kwargs) if api_version_utils.compare_version_header_to_response('OpenStack-API-Version', 'compute 2.45', image.response, 'lt'): image_id = image['image_id'] else: image_id = data_utils.parse_image_id(image.response['location']) if (not cls.is_requested_microversion_compatible('2.35')): client = cls.images_client else: client = cls.compute_images_client cls.addClassResourceCleanup(test_utils.call_and_ignore_notfound_exc, client.delete_image, image_id) if (wait_until is not None): try: wait_until = wait_until.upper() if (not cls.is_requested_microversion_compatible('2.35')): wait_until = wait_until.lower() waiters.wait_for_image_status(client, image_id, wait_until) except lib_exc.NotFound: if (wait_until.upper() == 'ACTIVE'): server = cls.servers_client.show_server(server_id)['server'] if ('fault' in server): raise exceptions.SnapshotNotFoundException(server['fault'], image_id=image_id) else: raise exceptions.SnapshotNotFoundException(image_id=image_id) else: raise image = client.show_image(image_id) if ('image' in image): image = image['image'] if (wait_until.upper() == 'ACTIVE'): if wait_for_server: waiters.wait_for_server_status(cls.servers_client, server_id, 'ACTIVE') return image
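A hedged sketch of the snapshot helper in use, again assuming a tempest environment (class and test names are hypothetical; `image_id` is forwarded to the underlying server-create call):

# Hypothetical snapshot-and-boot test using the helper above.
from tempest.api.compute import base


class SnapshotSmokeTest(base.BaseV2ComputeTest):

    def test_snapshot_then_boot(self):
        server = self.create_test_server(wait_until='ACTIVE')
        # wait_until='ACTIVE' blocks until the image is usable; image
        # cleanup is registered by the helper itself.
        image = self.create_image_from_server(server['id'],
                                              wait_until='ACTIVE')
        self.create_test_server(image_id=image['id'], wait_until='ACTIVE')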
@classmethod def recreate_server(cls, server_id, validatable=False, **kwargs): 'Destroys an existing class-level server and creates a new one.\n\n Some test classes use a test server that can be used by multiple\n tests. This is done to optimise runtime and test load.\n If something goes wrong with the test server, it can be rebuilt\n using this helper.\n\n This helper can also be used for the initial provisioning if no\n server_id is specified.\n\n :param server_id: UUID of the server to be rebuilt. If None is\n specified, a new server is provisioned.\n :param validatable: whether the server needs to be\n validatable. When True, validation resources are acquired via\n the `get_class_validation_resources` helper.\n :param kwargs: extra parameters are passed through to the\n `create_test_server` call.\n :return: the UUID of the created server.\n ' if server_id: cls.delete_server(server_id) cls.password = data_utils.rand_password() server = cls.create_test_server(validatable, validation_resources=cls.get_class_validation_resources(cls.os_primary), wait_until='ACTIVE', adminPass=cls.password, **kwargs) return server['id']
-3,965,829,209,142,081,000
Destroys an existing class-level server and creates a new one. Some test classes use a test server that can be used by multiple tests. This is done to optimise runtime and test load. If something goes wrong with the test server, it can be rebuilt using this helper. This helper can also be used for the initial provisioning if no server_id is specified. :param server_id: UUID of the server to be rebuilt. If None is specified, a new server is provisioned. :param validatable: whether the server needs to be validatable. When True, validation resources are acquired via the `get_class_validation_resources` helper. :param kwargs: extra parameters are passed through to the `create_test_server` call. :return: the UUID of the created server.
tempest/api/compute/base.py
recreate_server
AurelienLourot/tempest
python
@classmethod def recreate_server(cls, server_id, validatable=False, **kwargs): if server_id: cls.delete_server(server_id) cls.password = data_utils.rand_password() server = cls.create_test_server(validatable, validation_resources=cls.get_class_validation_resources(cls.os_primary), wait_until='ACTIVE', adminPass=cls.password, **kwargs) return server['id']
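The two class-level helpers above are designed to work together; a hedged sketch of the usual wiring, with a shared server provisioned once, repaired in setUp when needed, and checked in tearDown (the class name is hypothetical):

# Hypothetical test class sharing one server across its tests.
from tempest.api.compute import base
from tempest.common import waiters


class SharedServerTest(base.BaseV2ComputeTest):

    @classmethod
    def resource_setup(cls):
        super(SharedServerTest, cls).resource_setup()
        # Initial provisioning: server_id=None creates a fresh server.
        cls.server_id = cls.recreate_server(None)

    def setUp(self):
        super(SharedServerTest, self).setUp()
        # Rebuild the shared server if a previous test broke it.
        try:
            waiters.wait_for_server_status(self.servers_client,
                                           self.server_id, 'ACTIVE')
        except Exception:
            self.__class__.server_id = self.recreate_server(self.server_id)

    def tearDown(self):
        # Deletes the server when it is not ACTIVE, so setUp can recreate it.
        self.server_check_teardown()
        super(SharedServerTest, self).tearDown()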
@classmethod def delete_server(cls, server_id): 'Deletes an existing server and waits for it to be gone.' try: cls.servers_client.delete_server(server_id) waiters.wait_for_server_termination(cls.servers_client, server_id) except Exception: LOG.exception('Failed to delete server %s', server_id)
1,090,790,289,301,993,200
Deletes an existing server and waits for it to be gone.
tempest/api/compute/base.py
delete_server
AurelienLourot/tempest
python
@classmethod def delete_server(cls, server_id): try: cls.servers_client.delete_server(server_id) waiters.wait_for_server_termination(cls.servers_client, server_id) except Exception: LOG.exception('Failed to delete server %s', server_id)
def resize_server(self, server_id, new_flavor_id, **kwargs): 'Resizes a server, confirms the resize, and waits for it to be ACTIVE.' self.servers_client.resize_server(server_id, new_flavor_id, **kwargs) waiters.wait_for_server_status(self.servers_client, server_id, 'VERIFY_RESIZE') self.servers_client.confirm_resize_server(server_id) waiters.wait_for_server_status(self.servers_client, server_id, 'ACTIVE') server = self.servers_client.show_server(server_id)['server'] self.assert_flavor_equal(new_flavor_id, server['flavor'])
8,273,950,907,388,383,000
Resizes a server, confirms the resize, and waits for it to be ACTIVE.
tempest/api/compute/base.py
resize_server
AurelienLourot/tempest
python
def resize_server(self, server_id, new_flavor_id, **kwargs): self.servers_client.resize_server(server_id, new_flavor_id, **kwargs) waiters.wait_for_server_status(self.servers_client, server_id, 'VERIFY_RESIZE') self.servers_client.confirm_resize_server(server_id) waiters.wait_for_server_status(self.servers_client, server_id, 'ACTIVE') server = self.servers_client.show_server(server_id)['server'] self.assert_flavor_equal(new_flavor_id, server['flavor'])
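A hedged usage sketch for the resize helper; `flavor_ref_alt` is assumed to be populated from tempest configuration by the base class:

# Hypothetical resize test; resize_server also asserts the new flavor.
from tempest.api.compute import base


class ResizeSmokeTest(base.BaseV2ComputeTest):

    def test_resize_confirm(self):
        server = self.create_test_server(wait_until='ACTIVE')
        self.resize_server(server['id'], self.flavor_ref_alt)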
@classmethod def delete_volume(cls, volume_id): 'Deletes the given volume and waits for it to be gone.' try: cls.volumes_client.delete_volume(volume_id) cls.volumes_client.wait_for_resource_deletion(volume_id) except lib_exc.NotFound: LOG.warning("Unable to delete volume '%s' since it was not found. Maybe it was already deleted?", volume_id)
-2,243,379,255,210,685,200
Deletes the given volume and waits for it to be gone.
tempest/api/compute/base.py
delete_volume
AurelienLourot/tempest
python
@classmethod def delete_volume(cls, volume_id): try: cls.volumes_client.delete_volume(volume_id) cls.volumes_client.wait_for_resource_deletion(volume_id) except lib_exc.NotFound: LOG.warning("Unable to delete volume '%s' since it was not found. Maybe it was already deleted?", volume_id)
@classmethod def get_server_ip(cls, server, validation_resources=None): "Get the server fixed or floating IP.\n\n Based on the configuration we're in, return a correct ip\n address for validating that a guest is up.\n\n :param server: The server dict as returned by the API\n :param validation_resources: The dict of validation resources\n provisioned for the server.\n " if (CONF.validation.connect_method == 'floating'): if validation_resources: return validation_resources['floating_ip']['ip'] else: msg = 'When validation.connect_method equals floating, validation_resources cannot be None' raise lib_exc.InvalidParam(invalid_param=msg) elif (CONF.validation.connect_method == 'fixed'): addresses = server['addresses'][CONF.validation.network_for_ssh] for address in addresses: if (address['version'] == CONF.validation.ip_version_for_ssh): return address['addr'] raise exceptions.ServerUnreachable(server_id=server['id']) else: raise lib_exc.InvalidConfiguration()
-8,295,158,105,538,785,000
Get the server fixed or floating IP. Based on the configuration we're in, return a correct ip address for validating that a guest is up. :param server: The server dict as returned by the API :param validation_resources: The dict of validation resources provisioned for the server.
tempest/api/compute/base.py
get_server_ip
AurelienLourot/tempest
python
@classmethod def get_server_ip(cls, server, validation_resources=None): if (CONF.validation.connect_method == 'floating'): if validation_resources: return validation_resources['floating_ip']['ip'] else: msg = 'When validation.connect_method equals floating, validation_resources cannot be None' raise lib_exc.InvalidParam(invalid_param=msg) elif (CONF.validation.connect_method == 'fixed'): addresses = server['addresses'][CONF.validation.network_for_ssh] for address in addresses: if (address['version'] == CONF.validation.ip_version_for_ssh): return address['addr'] raise exceptions.ServerUnreachable(server_id=server['id']) else: raise lib_exc.InvalidConfiguration()
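The fixed-address branch above can be illustrated standalone with a sample of the `addresses` structure the compute API returns (network name and addresses are invented):

# Sample show_server payload fragment (values invented).
server = {
    'id': 'abc123',
    'addresses': {
        'private': [
            {'version': 4, 'addr': '10.0.0.5'},
            {'version': 6, 'addr': 'fd00::5'},
        ],
    },
}

network_for_ssh = 'private'  # stands in for CONF.validation.network_for_ssh
ip_version_for_ssh = 4       # stands in for CONF.validation.ip_version_for_ssh

addr = next(a['addr'] for a in server['addresses'][network_for_ssh]
            if a['version'] == ip_version_for_ssh)
print(addr)  # 10.0.0.5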
@classmethod def create_volume(cls, image_ref=None, **kwargs): "Create a volume and wait for it to become 'available'.\n\n :param image_ref: Specify an image id to create a bootable volume.\n :param kwargs: other parameters to create volume.\n :returns: The available volume.\n " if ('size' not in kwargs): kwargs['size'] = CONF.volume.volume_size if ('display_name' not in kwargs): vol_name = data_utils.rand_name((cls.__name__ + '-volume')) kwargs['display_name'] = vol_name if (image_ref is not None): kwargs['imageRef'] = image_ref if CONF.compute.compute_volume_common_az: kwargs.setdefault('availability_zone', CONF.compute.compute_volume_common_az) volume = cls.volumes_client.create_volume(**kwargs)['volume'] cls.addClassResourceCleanup(cls.volumes_client.wait_for_resource_deletion, volume['id']) cls.addClassResourceCleanup(test_utils.call_and_ignore_notfound_exc, cls.volumes_client.delete_volume, volume['id']) waiters.wait_for_volume_resource_status(cls.volumes_client, volume['id'], 'available') return volume
2,551,400,951,215,064,000
Create a volume and wait for it to become 'available'. :param image_ref: Specify an image id to create a bootable volume. :param kwargs: other parameters to create volume. :returns: The available volume.
tempest/api/compute/base.py
create_volume
AurelienLourot/tempest
python
@classmethod def create_volume(cls, image_ref=None, **kwargs): if ('size' not in kwargs): kwargs['size'] = CONF.volume.volume_size if ('display_name' not in kwargs): vol_name = data_utils.rand_name((cls.__name__ + '-volume')) kwargs['display_name'] = vol_name if (image_ref is not None): kwargs['imageRef'] = image_ref if CONF.compute.compute_volume_common_az: kwargs.setdefault('availability_zone', CONF.compute.compute_volume_common_az) volume = cls.volumes_client.create_volume(**kwargs)['volume'] cls.addClassResourceCleanup(cls.volumes_client.wait_for_resource_deletion, volume['id']) cls.addClassResourceCleanup(test_utils.call_and_ignore_notfound_exc, cls.volumes_client.delete_volume, volume['id']) waiters.wait_for_volume_resource_status(cls.volumes_client, volume['id'], 'available') return volume
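A hedged sketch of creating a bootable volume with the helper above; `image_ref` is assumed to be set from tempest configuration by the base class:

# Hypothetical boot-volume test; deletion is registered by create_volume.
from tempest.api.compute import base


class BootVolumeTest(base.BaseV2ComputeTest):

    def test_create_bootable_volume(self):
        volume = self.create_volume(image_ref=self.image_ref)
        # Re-fetch: the creation response may still show status 'creating'.
        vol = self.volumes_client.show_volume(volume['id'])['volume']
        self.assertEqual('available', vol['status'])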
def _detach_volume(self, server, volume): 'Helper method to detach a volume.\n\n Ignores 404 responses if the volume or server do not exist, or the\n volume is already detached from the server.\n ' try: volume = self.volumes_client.show_volume(volume['id'])['volume'] if (volume['status'] == 'in-use'): self.servers_client.detach_volume(server['id'], volume['id']) except lib_exc.NotFound: pass
1,405,029,417,197,140,700
Helper method to detach a volume. Ignores 404 responses if the volume or server do not exist, or the volume is already detached from the server.
tempest/api/compute/base.py
_detach_volume
AurelienLourot/tempest
python
def _detach_volume(self, server, volume): try: volume = self.volumes_client.show_volume(volume['id'])['volume'] if (volume['status'] == 'in-use'): self.servers_client.detach_volume(server['id'], volume['id']) except lib_exc.NotFound: pass
def attach_volume(self, server, volume, device=None, tag=None): "Attaches volume to server and waits for 'in-use' volume status.\n\n The volume will be detached when the test tears down.\n\n :param server: The server to which the volume will be attached.\n :param volume: The volume to attach.\n :param device: Optional mountpoint for the attached volume. Note that\n this is not guaranteed for all hypervisors and is not recommended.\n :param tag: Optional device role tag to apply to the volume.\n " attach_kwargs = dict(volumeId=volume['id']) if device: attach_kwargs['device'] = device if tag: attach_kwargs['tag'] = tag attachment = self.servers_client.attach_volume(server['id'], **attach_kwargs)['volumeAttachment'] if volume['multiattach']: att = waiters.wait_for_volume_attachment_create(self.volumes_client, volume['id'], server['id']) self.addCleanup(waiters.wait_for_volume_attachment_remove, self.volumes_client, volume['id'], att['attachment_id']) else: self.addCleanup(waiters.wait_for_volume_resource_status, self.volumes_client, volume['id'], 'available') waiters.wait_for_volume_resource_status(self.volumes_client, volume['id'], 'in-use') self.addCleanup(self._detach_volume, server, volume) return attachment
-6,207,551,653,731,612,000
Attaches volume to server and waits for 'in-use' volume status. The volume will be detached when the test tears down. :param server: The server to which the volume will be attached. :param volume: The volume to attach. :param device: Optional mountpoint for the attached volume. Note that this is not guaranteed for all hypervisors and is not recommended. :param tag: Optional device role tag to apply to the volume.
tempest/api/compute/base.py
attach_volume
AurelienLourot/tempest
python
def attach_volume(self, server, volume, device=None, tag=None): attach_kwargs = dict(volumeId=volume['id']) if device: attach_kwargs['device'] = device if tag: attach_kwargs['tag'] = tag attachment = self.servers_client.attach_volume(server['id'], **attach_kwargs)['volumeAttachment'] if volume['multiattach']: att = waiters.wait_for_volume_attachment_create(self.volumes_client, volume['id'], server['id']) self.addCleanup(waiters.wait_for_volume_attachment_remove, self.volumes_client, volume['id'], att['attachment_id']) else: self.addCleanup(waiters.wait_for_volume_resource_status, self.volumes_client, volume['id'], 'available') waiters.wait_for_volume_resource_status(self.volumes_client, volume['id'], 'in-use') self.addCleanup(self._detach_volume, server, volume) return attachment
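An end-to-end hedged sketch combining the volume helpers: attach_volume waits for 'in-use' and registers detach and cleanup itself, so the test body stays short (the class name is hypothetical):

# Hypothetical attach test using create_test_server, create_volume and
# attach_volume together.
from tempest.api.compute import base


class AttachVolumeSmokeTest(base.BaseV2ComputeTest):

    def test_attach(self):
        server = self.create_test_server(wait_until='ACTIVE')
        volume = self.create_volume()
        attachment = self.attach_volume(server, volume)
        self.assertEqual(volume['id'], attachment['volumeId'])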
def assert_flavor_equal(self, flavor_id, server_flavor): 'Check whether server_flavor equals the given flavor.\n\n :param flavor_id: flavor id\n :param server_flavor: flavor info returned by show_server.\n ' if server_flavor.get('id'): msg = 'server flavor is not same as flavor!' self.assertEqual(flavor_id, server_flavor['id'], msg) else: flavor = self.flavors_client.show_flavor(flavor_id)['flavor'] self.assertEqual(flavor['name'], server_flavor['original_name'], 'original_name in server flavor is not same as flavor name!') for key in ['ram', 'vcpus', 'disk']: msg = ('attribute %s in server flavor is not same as flavor!' % key) self.assertEqual(flavor[key], server_flavor[key], msg)
-40,787,179,334,714,180
Check whether server_flavor equals the given flavor. :param flavor_id: flavor id :param server_flavor: flavor info returned by show_server.
tempest/api/compute/base.py
assert_flavor_equal
AurelienLourot/tempest
python
def assert_flavor_equal(self, flavor_id, server_flavor): if server_flavor.get('id'): msg = 'server flavor is not same as flavor!' self.assertEqual(flavor_id, server_flavor['id'], msg) else: flavor = self.flavors_client.show_flavor(flavor_id)['flavor'] self.assertEqual(flavor['name'], server_flavor['original_name'], 'original_name in server flavor is not same as flavor name!') for key in ['ram', 'vcpus', 'disk']: msg = ('attribute %s in server flavor is not same as flavor!' % key) self.assertEqual(flavor[key], server_flavor[key], msg)
@api.route('/webhooks') async def webhooks(req, resp): '\n Handle incoming GitHub webhooks\n ' data = (await req.media()) eventid = req.headers.get('X-GitHub-Delivery') event = req.headers.get('X-GitHub-Event') if (not Subscriptions.is_listening_for(event)): resp.text = f'Accepted, but not listening for {event} events.' return if env.webhook_secret: signature = req.headers.get('X-Hub-Signature') assert signature, 'X-Hub-Signature not found in the header.' (sha_name, signature) = signature.split('=') assert (sha_name == 'sha1') mac = hmac.new(env.webhook_secret, msg=data, digestmod='sha1') assert (str(mac.hexdigest()) == str(signature)) Subscriptions.publish(eventid, event, {'event': event, 'payload': data}) resp.text = 'Accepted'
-3,591,437,164,178,230,300
Handle incoming GitHub webhooks
app/webhooks.py
webhooks
adnrs96/github
python
@api.route('/webhooks') async def webhooks(req, resp): '\n \n ' data = (await req.media()) eventid = req.headers.get('X-GitHub-Delivery') event = req.headers.get('X-GitHub-Event') if (not Subscriptions.is_listening_for(event)): resp.text = f'Accepted, but not listening for {event} events.' return if env.webhook_secret: signature = req.headers.get('X-Hub-Signature') assert signature, 'X-Hub-Signature not found in the header.' (sha_name, signature) = signature.split('=') assert (sha_name == 'sha1') mac = hmac.new(env.webhook_secret, msg=data, digestmod='sha1') assert (str(mac.hexdigest()) == str(signature)) Subscriptions.publish(eventid, event, {'event': event, 'payload': data}) resp.text = 'Accepted'
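The signature check in the handler can be reproduced standalone. Note that GitHub computes X-Hub-Signature over the raw request bytes, so verification must hash the body exactly as received, before any parsing; a minimal sketch (secret and payload are made up):

import hmac

secret = b'my-webhook-secret'        # made-up shared secret
raw_body = b'{"action": "opened"}'   # raw request bytes, before parsing

# Sender side: GitHub signs the raw payload with HMAC-SHA1.
signature_header = 'sha1=' + hmac.new(secret, raw_body, 'sha1').hexdigest()

# Receiver side: recompute and compare in constant time.
sha_name, signature = signature_header.split('=')
assert sha_name == 'sha1'
expected = hmac.new(secret, raw_body, 'sha1').hexdigest()
assert hmac.compare_digest(expected, signature)
print('signature verified')

Using hmac.compare_digest rather than == avoids leaking timing information; the handler above compares with ==, which works but is weaker.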
@classmethod def list(cls, session, paginated=False, **params): 'This method is a generator which yields queue objects.\n\n This is almost a copy of the list method of resource.Resource.\n The only difference is the request header now includes `Client-ID`\n and `X-PROJECT-ID` fields which are required by the Zaqar v2 API.\n ' more_data = True query_params = cls._query_mapping._transpose(params) uri = (cls.base_path % params) headers = {'Client-ID': (params.get('client_id', None) or str(uuid.uuid4())), 'X-PROJECT-ID': (params.get('project_id', None) or session.get_project_id())} while more_data: resp = session.get(uri, headers=headers, params=query_params) resp = resp.json() resp = resp[cls.resources_key] if (not resp): more_data = False yielded = 0 new_marker = None for data in resp: value = cls.existing(**data) new_marker = value.id yielded += 1 (yield value) if (not paginated): return if (('limit' in query_params) and (yielded < query_params['limit'])): return query_params['limit'] = yielded query_params['marker'] = new_marker
3,059,643,027,235,729,000
This method is a generator which yields queue objects. This is almost a copy of the list method of resource.Resource. The only difference is the request header now includes `Client-ID` and `X-PROJECT-ID` fields which are required by the Zaqar v2 API.
openstack/message/v2/queue.py
list
TeutoNet/openstacksdk
python
@classmethod def list(cls, session, paginated=False, **params): more_data = True query_params = cls._query_mapping._transpose(params) uri = (cls.base_path % params) headers = {'Client-ID': (params.get('client_id', None) or str(uuid.uuid4())), 'X-PROJECT-ID': (params.get('project_id', None) or session.get_project_id())} while more_data: resp = session.get(uri, headers=headers, params=query_params) resp = resp.json() resp = resp[cls.resources_key] if (not resp): more_data = False yielded = 0 new_marker = None for data in resp: value = cls.existing(**data) new_marker = value.id yielded += 1 (yield value) if (not paginated): return if (('limit' in query_params) and (yielded < query_params['limit'])): return query_params['limit'] = yielded query_params['marker'] = new_marker
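A hedged consumption sketch for the generator above; `session` is assumed to be an authenticated session object as used elsewhere in the SDK, and the attribute printed is an assumption:

# Hypothetical: page through queues two at a time; the generator re-issues
# the request with marker=<last id> until a short or empty page comes back.
from openstack.message.v2 import queue

for q in queue.Queue.list(session, paginated=True, limit=2):
    print(q.id)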
def ComputeConvOutputShape(in_shape, t_stride, f_stride, outc=None, padding='SAME'): "Computes output shape for convolution and pooling layers.\n\n If `in_shape` is a dynamic shape, the output will be Tensors, while if\n `in_shape` is a list of ints then the output will also be a list of ints.\n\n Args:\n in_shape: A length 4 Tensor or list representing the input shape.\n t_stride: The stride along the time dimension.\n f_stride: The stride along the frequency dimension.\n outc: The expected output channel. If None, will use the input channel.\n padding: 'SAME' or 'VALID'.\n\n Returns:\n The expected output shape.\n " n = in_shape[0] t = in_shape[1] f = in_shape[2] c = in_shape[3] assert ((f is not None) and (c is not None)) if (padding == 'VALID'): if t: t -= (t_stride - 1) f -= (f_stride - 1) ot = t if (ot is not None): ot = (((ot + t_stride) - 1) // t_stride) of = (((f + f_stride) - 1) // f_stride) if (outc is None): outc = c return [n, ot, of, outc]
6,174,591,343,225,735,000
Computes output shape for convolution and pooling layers. If `in_shape` is a dynamic shape, the output will be Tensors, while if `in_shape` is a list of ints then the output will also be a list of ints. Args: in_shape: A length 4 Tensor or list representing the input shape. t_stride: The stride along the time dimension. f_stride: The stride along the frequency dimension. outc: The expected output channel. If None, will use the input channel. padding: 'SAME' or 'VALID'. Returns: The expected output shape.
lingvo/core/conv_layers_with_time_padding.py
ComputeConvOutputShape
zhoudoufu/lingvo
python
def ComputeConvOutputShape(in_shape, t_stride, f_stride, outc=None, padding='SAME'): n = in_shape[0] t = in_shape[1] f = in_shape[2] c = in_shape[3] assert ((f is not None) and (c is not None)) if (padding == 'VALID'): if t: t -= (t_stride - 1) f -= (f_stride - 1) ot = t if (ot is not None): ot = (((ot + t_stride) - 1) // t_stride) of = (((f + f_stride) - 1) // f_stride) if (outc is None): outc = c return [n, ot, of, outc]
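The per-dimension arithmetic above is ceil-division after an optional VALID shrink; a standalone check with concrete numbers (pure Python, no TensorFlow required):

def out_len(length, stride, padding='SAME'):
    # Mirrors ComputeConvOutputShape's handling of one dimension.
    if padding == 'VALID':
        length -= stride - 1
    return (length + stride - 1) // stride  # ceil(length / stride)

# in_shape [n, t=10, f=8, c=3] with t_stride=2, f_stride=2:
assert out_len(10, 2) == 5           # SAME: ceil(10 / 2)
assert out_len(8, 2) == 4            # SAME: ceil(8 / 2)
assert out_len(10, 2, 'VALID') == 5  # VALID: ceil((10 - 1) / 2)
print(['n', 5, 4, 3])                # expected [n, ot, of, outc]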
def ComputeConvOutputPadding(paddings, window, stride, padding_algorithm='SAME'): "Computes paddings for convolution and pooling output.\n\n out_padding[i] == 1 iff any in_padding corresponding to that output is 1.\n\n Args:\n paddings: The paddings tensor. It is expected to be of shape [batch, time].\n window: The size of the windows.\n stride: The time-stride between adjacent windows.\n padding_algorithm: 'SAME' or 'VALID'.\n\n Returns:\n out_padding, The new padding tensor of size [batch, ceil(time / stride)].\n " if (stride == 1): return paddings input_length = py_utils.GetShape(paddings)[1] pad_len = (((((input_length + stride) - 1) // stride) * stride) - input_length) paddings = tf.pad(paddings, [[0, 0], [0, pad_len]], constant_values=1.0) out_padding = tf.nn.pool(tf.expand_dims(paddings, (- 1)), [window], 'MAX', padding_algorithm, strides=[stride]) return tf.squeeze(out_padding, (- 1))
-5,047,944,237,518,243,000
Computes paddings for convolution and pooling output. out_padding[i] == 1 iff any in_padding corresponding to that output is 1. Args: paddings: The paddings tensor. It is expected to be of shape [batch, time]. window: The size of the windows. stride: The time-stride between adjacent windows. padding_algorithm: 'SAME' or 'VALID'. Returns: out_padding, The new padding tensor of size [batch, ceil(time / stride)].
lingvo/core/conv_layers_with_time_padding.py
ComputeConvOutputPadding
zhoudoufu/lingvo
python
def ComputeConvOutputPadding(paddings, window, stride, padding_algorithm='SAME'): if (stride == 1): return paddings input_length = py_utils.GetShape(paddings)[1] pad_len = (((((input_length + stride) - 1) // stride) * stride) - input_length) paddings = tf.pad(paddings, [[0, 0], [0, pad_len]], constant_values=1.0) out_padding = tf.nn.pool(tf.expand_dims(paddings, (- 1)), [window], 'MAX', padding_algorithm, strides=[stride]) return tf.squeeze(out_padding, (- 1))
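A standalone numeric walk-through of the padding pooling above for one sequence; for this window/stride pair, SAME max-pooling places each window at a multiple of the stride, so plain slicing reproduces it (pure Python, no TensorFlow):

# One example: 5 valid frames then 2 padded frames; window=3, stride=2.
paddings = [0., 0., 0., 0., 0., 1., 1.]
window, stride = 3, 2

# Pad time up to a multiple of the stride with 1.0, as the function does.
pad_len = (len(paddings) + stride - 1) // stride * stride - len(paddings)
padded = paddings + [1.0] * pad_len

# out[i] = max over the window starting at i * stride.
out = [max(padded[i:i + window]) for i in range(0, len(padded), stride)]
print(out)  # [0.0, 0.0, 1.0, 1.0] -> last two outputs touch padded input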