Document
stringlengths 395
24.5k
| Source
stringclasses 6
values |
|---|---|
Both OnPremise Exchange server and Office 365 Exchange admin Center uses a large set of predefined permissions, those can be used to grant permissions to your administrators and users instantly. Here the permissions features are used to set up role-based permissions for your Exchange server new organization up and running quickly.
In Exchange Server, the permissions that you grant to administrators and users are based on management roles. A role defines the set of tasks that an administrator or user can perform. When a role is assigned to an administrator or user, that person is granted the permissions provided by the role. Roles give permissions to perform tasks to administrators and users by making cmdlets available to those who are assigned the roles.
In this blog, we are detailing two types of roles, Administrative roles and End-user roles & Role groups & role assignment policies and Outlook WebApp Policies
- Administrative roles: These roles contain permissions that can be assigned to administrators or specialist users using role groups that manage a part of the Exchange organization, such as recipients, servers, or databases.
- End-user roles: These roles, assigned using role assignment policies, enable users to manage aspects of their own mailbox and distribution groups that they own. End-user roles begin with the prefix My.
Role groups and role assignment policies
- Roles grant permissions to perform tasks in Exchange Server, but you need an easy way to assign them to administrators and users. Exchange Server provides you with the following to help you do that:
- Role groups: Role groups enable you to grant permissions to administrators and specialist users.
Role assignment policies: Role assignment policies enable you to grant permission to end users to change settings on their own mailbox or distribution groups that they own.
How to access Exchange Admin Center – Permissions
Login to Exchange Admin Center with a Global administrator or an administrator privileged user using the below URL and Select Permissions on the left side.
Exchange admin center – Permission constitutes three parts admin roles, user roles, and Outlook Web app policies.
In the Admin role, you have 19 predefined role groups available for assigning roles to the administrators or specialized users. You can edit any of these 19 predefined role groups, can add roles & members on it. For example, Compliance Management is a role group, on editing you can add/remove roles from the group and can add/remove members from the group.
Important role groups and Assigned roles to it
By default, in Exchange server, you have prebuilt role groups and automatically assigned roles into it. Users can modify each role group and can add/remove roles & can add/remove members into it. Some of the role groups and assigned roles with their description given below.
Discovery Management – Members of this management role group can perform searches of mailboxes in the Exchange organization for data that meets specific criteria.
Assigned Roles – ApplicationImpersonation, Legal Hold, Mailbox Search
Help Desk – Members of this management role group can view and manage the configuration for individual recipients and view recipients in an Exchange organization. Members of this role group can only manage the configuration each user can manage on his or her own mailbox. Additional permissions can be added by assigning additional management roles to this role group.
Assigned Roles – Reset Password, User Options, View-Only Recipients
Hygiene Management – Members of this management role group can manage Exchange anti-spam features and grant permissions for antivirus products to integrate with Exchange.
Assigned Roles – Transport Hygiene, View-Only Configuration, View-Only Recipients
Organization Management – Members of this management role group have permissions to manage Exchange objects and their properties in the Exchange organization. Members can also delegate role groups and management roles in the organization. This role group shouldn’t be deleted.
Assigned Roles – ApplicationImpersonation, Audit Logs, Compliance Admin, Data Loss Prevention, Distribution Groups, E-Mail Address Policies, Federated Sharing, Information Rights Management, Journaling, Legal Hold, Mail Enabled Public Folders, Mail Recipient Creation, Mail Recipients, Mail Tips, Message Tracking, Migration, Move Mailboxes, Org Custom Apps, Org Marketplace Apps, Organization Client Access, Organization Configuration, Organization Transport Settings, Public Folders, Recipient Policies, Remote and Accepted Domains, Reset Password, Retention Management, Role Management, Security Admin, Security Group Creation, and Membership, Security Reader, Team Mailboxes, Transport Hygiene, Transport Rules, UM Mailboxes, UM Prompts, Unified Messaging, User Options, View-Only Audit Logs, View-Only Configuration, View-Only Recipients.
Recipient Management – Members of this management role group have the right to create, manage, and remove Exchange recipient objects in the Exchange organization.
Assigned Roles – Distribution Groups, Mail Recipient Creation, Mail Recipients, Message Tracking, Migration, Move Mailboxes, Recipient Policies, Reset Password, Team Mailboxes.
Records Management – Members of this management role group have permissions to manage and dispose of record content.
Assigned Roles – Audit Logs, Journaling, Message Tracking, Retention Management, Transport Rules
Security Administrator – Membership in this role group is synchronized across services and managed centrally. This role group is not manageable through the administrator portals. Members of this role group may include cross-service administrators, as well as external partner groups, and Microsoft Support. By default, this group may not be assigned any roles. However, it will be a member of the Security Administrators role groups and will inherit the capabilities of that role group.
Assigned Roles – Security Admin
End-User role :
Always assigned to nonadministrator end-users using role assignment policies. By default End-user role has one Default Role Assignment Policy. This policy grants end-users permission to set their options in Outlook on the web and perform other self-administration tasks.
Usually this policy details the end user’s personal information, and settings related to his personal mailbox. All these user’s personal information and personal mailbox settings can be enabled or disabled by selecting/deselecting individual settings checkboxes. You can also customize these policies by creating a new policy and adopting the required policies selectively.
MyContactInformation – This role enables individual users to modify their contact information, including the address and phone numbers.
MyProfileInformation – This role enables individual users to modify their name.
MyDistributionGroups – This role enables individual users to create, modify and view distribution groups and modify, view, remove, and add members to distribution groups they own.
Distribution group memberships
MyDistributionGroupMembership – This role enables individual users to view and modify their membership in distribution groups in an organization, provided that those distribution groups allow manipulation of a group membership.
My ReadWriteMailbox Apps – This role will allow users to install apps with ReadWriteMailbox permissions.
MyRetentionPolicies – This role enables individual users to view their retention tags and view and modify their retention tag settings and defaults.
My Marketplace Apps – This role will allow users to view and modify their marketplace apps.
My Custom Apps – This role will allow users to view and modify their custom apps.
MyTeamMailboxes – This role enables individual users to create site mailboxes and connect them to SharePoint sites.
MyMailSubscriptions – This role enables individual users to view and modify their e-mail subscription settings such as message format and protocol defaults.
MyVoiceMail – This role enables individual users to view and modify their voice mail settings.
MyBaseOptions – This role enables individual users to view and modify the basic configuration of their own mailbox and associated settings.
MyTextMessaging – This role enables individual users to create, view, and modify their text messaging settings.
Outlook WebApp Policies
These policies are exclusively for end users who access their mailboxes through Outlook Web Access (OWA), and this is based on the all & enabled features available in the OWA interface.
By default, here are the listed features are enabled under these categories
Communication Management – Instant Messaging, Text messaging, Unified Messaging, Exchange ActiveSync, Contacts, Mobile device contact sync, All address lists, LinkedIn contact sync, Facebook contact sync
Information management – Journaling, Notes, Inbox Rules, Recover deleted items
User experience – Themes, Premium client, Email signature, Places, Weather, Interesting calendars
Time management – Calendar, Tasks, Reminders, and notifications
You can also edit the default OwaMailboxPolicy and change the other settings, such as file access and offline access settings.
File access – Select how users can view and access attachments. If Direct file access is enabled, users will be able to open attachments by clicking them and selecting Open.
Offline access – Specify how and when users can enable offline access to their email. Offline access copies information from users’ accounts to their device, which lets them use Outlook on the web when they’re not connected to a network. It has three options to choose – Always, Private Computer and Never.
For any Exchange & Office 365 administrator, here are the top four important tips to know about permissions on Microsoft Exchange. Understanding role-based access control, managing role groups in Exchange Online, configuring Role assignment policies in Exchange Online, and learn more about the permissions required to manage Exchange Online features and services. Thus with the extent of knowledge and learning one can do the above these processes easily.
|
OPCFW_CODE
|
- Added Tag Key and Tag Value filters for better understanding of tag values mapped to a particular tag key.
- The newly added filters are much organized.
- The Tag Key and Tag Value filters are added for the following cost reports:
- Azure > Cost Analytics > Azure Cost Monthly
- Azure > Cost Analytics > Azure Cost Daily
- AWS > Cost Analytics > AWS Cost Monthly
- AWS > Cost Analytics > AWS Cost Daily
- Azure > Partner Cost Report > Azure CSP Cost Monthly
- Azure > Partner Cost Report > Azure EA Cost Monthly
- GCP > Cost Analytics > GCP Cost Monthly
- GCP > Cost Analytics > GCP Cost Daily
- GCP > Cost Analytics > GCP Parent Billing Account
- OCI > Cost Analytics > OCI Cost Monthly
- OCI > Cost Analytics > OCI Cost Daily
- Added the following threat widgets in the Security Executive Dashboard. These provide an executive view to users so they can take appropriate actions accordingly.
- Config Violations
- Config Violations Trend
- Config Violations Severity Summary By Account
- Config Violation By Region
- Config Concentration by Cloud Provider
- Access Violations
- Access Violations Trend
- Access Violations Severity Summary By Account
- Control Health widget
- Config Violations
- Added the OAuth integration steps to onboard cloud accounts.
- Client ID, Secret Code, and Authentication Code to be generated through the OAuth integration steps and would be used for onboarding accounts.
Following CloudOps reports are added:
- CloudOps Report: This report provides customers with insight into their cloud infrastructure's performance. This report helps in optimizing cloud operations, reducing costs, and identifying potential risks and vulnerabilities.
- Cloud Operations Assessment Report: This report is a comprehensive review of a customer's cloud infrastructure and operations. This assessment report provides customers with actionable recommendations to improve their cloud operations, increase efficiency, and reduce costs while ensuring security and compliance.
- OCI Health Report: Provides assessment details for resources related to OCI accounts.
- Azure Patch Report: This report provides detailed patch information about various resources for the selected time period.
Added the ability to execute Terraform code from a GitHub Enterprise repository using personal access token. Previously, Terraform code could be executed only through the CoreStack UI.
Users can store their Terraform code in a Github Enterprise repository and then configure a pipeline that automatically pulls the code and executes it. This feature allows users to automate the process of creating, updating, and deleting infrastructure resources by pulling code from a central repository and executing it with Terraform.
- Users can view new templates in the Marketplace (~30) added across each cloud provider.
- Users can clone the content under templates and avoid runtime errors.
- Users can enter more than 100 characters (approximately 200) to support Terraform dynamic values, which enables them to run their templates with longer customized names.
Added tag remediation support for additional resources on Tag Governance 2.0:
- GCP: 12 additional resources
- OCI: 32 additional resources
- Azure: 23 additional resources
- To see the external APIs which are added, modified, and removed in this release, refer to: https://docs.corestack.io/v4.0/docs/external-apis-40
- To see all the available external APIs, refer to: https://docs.corestack.io/reference/authtoken
Below APIs are not working as expected. We will try to fix it before the next release.
- Update environment in business application
- View Tag keys
- Update application group in business application
- Update cost center in business application
Updated about 2 months ago
|
OPCFW_CODE
|
Formatter is slow
Formatting a package body with 3045 lines takes more than a minute to format.
The formatter result has 5078 lines. Interesting are the following sections of the log (steps using more than a second):
2021-09-02 12:23:11.061 FINE com.oracle.truffle.host.HostMethodDesc$SingleMethod$MHBase invokeHandle: keepSignificantWhitespaceBeforeLeafNodes: remove all whitespace at 0.
...
2021-09-02 12:23:25.868 FINER com.oracle.truffle.host.HostMethodDesc$SingleMethod$MHBase invokeHandle: r2_decrement_left_margin: set margin to 3 spaces at 27862.
2021-09-02 12:23:27.156 FINER com.oracle.truffle.host.HostMethodDesc$SingleMethod$MHBase invokeHandle: r2_decrement_left_margin_for_function_name: set margin to 9 spaces at 335.
2021-09-02 12:23:27.157 FINER com.oracle.truffle.host.HostMethodDesc$SingleMethod$MHBase invokeHandle: r2_decrement_left_margin_for_function_name: set margin to 9 spaces at 347.
...
2021-09-02 12:23:27.248 FINER com.oracle.truffle.host.HostMethodDesc$SingleMethod$MHBase invokeHandle: r2_decrement_left_margin_for_function_name: set margin to 9 spaces at 27166.
2021-09-02 12:24:11.350 FINER com.oracle.truffle.host.HostMethodDesc$SingleMethod$MHBase invokeHandle: r2_increment_left_margin_by_keyword_outside_node: set margin to 16 spaces at 60.
2021-09-02 12:24:11.351 FINER com.oracle.truffle.host.HostMethodDesc$SingleMethod$MHBase invokeHandle: r2_increment_left_margin_by_keyword_outside_node: set margin to 16 spaces at 64.
...
2021-09-02 12:24:16.379 INFO com.oracle.truffle.host.HostMethodDesc$SingleMethod$MHBase invokeHandle: d2_log_time: formatted 131582 chars and 27894 nodes in 65.89 seconds.
Section
Start
End
Elapsed Time in seconds
r2_decrement_left_margin_for_function_name
12:23:25.868
12:23:27.156
1.288
r2_increment_left_margin_by_keyword_outside_node
12:23:27.248
12:24:11.350
44.102
So 68.9% of the time is spent in these two calls.
It's important to note that the subsequent log entry in both cases is just a millisecond later. The first assumption is that the Arbori query must be responsible for that.
And in fact this Arbori query takes a bit more than a second when executed in SQLDev (for 850 result tuples):
r2_decrement_left_margin_for_function_name:
([node^) function | [node^) function_expression | [node^) count | [node^) over_clause)
& [lparen) '('
& [lparen = node)
;
And this Arbori query takes about 45 seconds when executed in SQLDev (for 82 result tuples):
r2_increment_left_margin_by_keyword_outside_node:
-- select statement
[node) select_list & [keyword) 'SELECT' & (keyword = node-1 | keyword = node-2)
| [node) condition & [keyword) 'CONNECT' & [keyword+1) 'BY' & keyword = node-2
| [node) condition & [keyword) 'START' & [keyword+1) 'WITH' & keyword = node-2
-- update statement
| [node) aliased_dml_table_expression_clause & keyword = node^
-- delete statement
| [node) 'FROM' & [keyword) 'DELETE' & [node^) delete & keyword+1 = node
-- merge statement
| [node) merge_update_clause[36,56) & [keyword) 'SET' & [node^) merge_update_clause & keyword^ = node^
;
Interesting is that the Arbori queries are fast when reduced to the scope of a single statement (select, update, delete, merge). In fact just splitting the query into two (select and the rest) makes the queries fast.
Hence splitting the second Arbori query should the performance significantly.
I do not see how to optimize the query r2_decrement_left_margin_for_function_name at the moment. I think the problem is that [node) is not defined. However, that's wanted, because we look for an arbitrary number of node types. It's not an option to list them all.
|
GITHUB_ARCHIVE
|
What's the logic behind dividing rental price of capital and wage rate by price level?
I've just started to learn macroeconomics and I think even the teachers can't get what's the price level and why we divide by it in order to find for example real economic profits.
Textbook says "What firms and households care about are profits in terms of what they can
buy, that is, real economic profits. We divide the preceding expression by the price level,
P, to get real economic profits $$Π= F(K,L) - (R/P)K - (W/P)L$$
I just cant understand the part of dividing by P, everything else is clear.
Thank you.
Price level is the general level of prices in an economy. It is a variable that indicates what is the purchasing power of money (see Blanchard et al. Macroeconomics: a European Perspective). You can think of it as a variable that tells you what average prices are compared to some baseline. Price level is often measured using the consumer price index (CPI).
As your textbook correctly stated economic agents primarily care about real profits, wages etc. The reasoning behind this is that what matters is not what is the number of zeros on your paycheck but what you can buy with it. For example, suppose that you are offered a job that pays ${\\\$}1,000,000$ salary per month. Is that salary high or low? Well if the average price level $(P)$ is so high, due to inflation, that even apple or postage stamp cost ${\\\$}1,000,000,000$ you would probably not be willing to work for that one million per month, because it would not be enough to even buy you one apple. However, if price level would be such that stamps and apples cost only ${\\\$1}$ that kind of salary would be great because you can purchase whole host of apples with it.
What really matters for decisions is how much goods and services you can buy with profit an wage you get. Consequently, in economics we often divide nominal variables by price levels in order to adjust them for any effects of inflation.
You should interpret $P$ as being the price of some bundle of goods people want to consume. Then if we have some monetary amount (in dollars, euro's or some other monetary unit of account), let's call it $M$, what does $M/P$ mean? Note that: $P \times (M/P)=M$ so $M/P$ is the number of consumption bundles with price $P$ you can buy with your $M$ units of currency. This is why $M/P$ is a 'real' amount: it tells you how much your monetary wealth really buys.
|
STACK_EXCHANGE
|
/**
* A model instance that represents current application state: installed plugins
* and active Remote View sessions
*/
'use strict';
var debug = require('debug')('lsapp:app-model');
var tunnelController = require('./tunnel');
var appsDfn = require('../apps');
var googleChrome = require('../google-chrome');
var sublimeText = require('../sublime-text');
module.exports = function(model, client) {
tunnelController.on('update', sessions => model.set('rvSessions', sessions));
model.on('change', () => client.send('app-model', model.toJSON()));
var apps = {
st2: setupApp(appsDfn.st2, sublimeText, model, client),
st3: setupApp(appsDfn.st3, sublimeText, model, client),
chrome: setupApp(appsDfn.chrome, googleChrome, model, client)
};
Object.keys(apps).forEach(k => apps[k].detect());
return {
install(id) {
return apps[id]
? apps[id].install()
: Promise.reject(new Error(`Unknown app ${id}`));
},
detect(id) {
return apps[id]
? apps[id].detect()
: Promise.reject(new Error(`Unknown app ${id}`));
}
};
};
function setupApp(app, handler, model, client) {
var attributeName = app.id;
var installPromise = null;
var autoupdater;
if (handler.autoupdate) {
autoupdater = handler.autoupdate(app)
.on('shouldUpdate', app => install('updating'))
.start(60 * 60); // check every hour
}
var detect = pollFactory(model, attributeName, () => {
return handler.detect(app, client)
.then(result => {
if (result && autoupdater) {
debug('%s plugin installed, check for updates', app.id);
autoupdater.check();
}
return result;
});
});
var install = (state) => {
if (installPromise) {
return installPromise;
}
model.set(attributeName, state || 'installing');
return installPromise = handler.install(app)
.then(() => {
installPromise = null;
detect(model.unset(attributeName))
})
.catch(err => {
debug(err);
installPromise = null;
model.set(attributeName, createError(err));
return Promise.reject(err);
});
};
return {detect, install, autoupdater};
}
function pollFactory(model, attributeName, detectFn) {
var timerId = null;
return function poll() {
if (timerId) {
clearTimeout(timerId);
timerId = null;
}
debug('polling install status for %s', attributeName);
detectFn()
.then(result => {
model.set(attributeName, result ? 'installed' : 'not-installed');
timerId = null;
})
.catch(err => {
model.set(attributeName, !err ? 'not-installed' : {
error: err.message,
errorCode: err.code
});
timerId = setTimeout(poll, 5000).unref();
});
};
}
function createError(err) {
var data = {error: err.message};
if (err.code) {
data.errorCode = err.code;
}
return data;
}
if (require.main === module) {
let pkg = require('../../package.json');
require('../client')(pkg.config.websocketUrl, function(err, client) {
if (err) {
return debug(err);
}
module.exports(client).on('change', () => console.log(this.attributes));
});
}
|
STACK_EDU
|
Discord MusicBot on an original Raspberry Pi
If you are anything like me, you probably ordered a Raspberry Pi when it first came out and used it as a small webserver for a few years and then forgot about it. Well, here is one way you can breath new life into the Raspberry Pi - using it to host a Discord Radio Bot!
Why you will want to follow this guide
This should be a 30 minute job, right? Just plug it in, fire up apt-get and pull some packages, right? Wrong. See, the lastest version of python avalible for the original Raspberry Pi at time of writing is Python 3.4, which is not a high enough version of Python to run SexualRhinoceros's MusicBot. So we need to compile a newer version of Python from scratch. Also, in most guides on building Python for the Raspberry Pi, it doesn't end up including the SSL module, which is also a requirement for installing the bot, so this guide will include that too. Also, you will need ffmpeg, with x264 support, which also isn't built for the original Raspberry Pi right now. Since messing up the configuration of any of these systems can cost you hours more compiling time, its best to get it right first try.
First things first, you need to get OpenSSL. Open SSL provideds part of the implementation of SSL that Python uses as a module. To do this, run the following commands:
This will download the source of OpenSSL, extract it, then build it and finally install it. Warning: go get yourself a cup of tea, and some videos to watch. Building this will take a while, especially on an original Raspberry Pi, where it might take hours.
cd ~ curl https://www.openssl.org/source/openssl-1.1.1c.tar.gz | tar xz && cd openssl-1.1.1c && ./config shared --prefix=/usr/local/ make && sudo make install
The second part of the puzzle is building Python with SSL support. Run the following commands:
This will set some flags related to the inclusion of OpenSSL in the python build, then get some packages required to do the build, download the python source, extract it, configure it, then build it and finally install it. Warning: again, go get yourself a cup of tea, and some more videos to watch. Probably go do some work, as building it will take a while, probably a few hours. To check it was successful, run the following:
cd ~ export LDFLAGS="-L/usr/local/lib/" export LD_LIBRARY_PATH="/usr/local/lib/" export CPPFLAGS="-I/usr/local/include -I/usr/local/include/openssl" sudo apt-get update sudo apt-get install build-essential checkinstall -y sudo apt install libssl-dev libncurses5-dev libsqlite3-dev libreadline-dev libtk8.5 libgdm-dev libdb4o-cil-dev libpcap-dev sudo apt-get install libreadline-gplv2-dev libncursesw5-dev libssl-dev libsqlite3-dev tk-dev libgdbm-dev libc6-dev libbz2-dev -y wget https://www.python.org/ftp/python/3.7.3/Python-3.7.3.tgz && tar xzf Python-3.7.3.tgz && cd Python-3.7.3 sudo ./configure --enable-optimizations --prefix=/usr/local/ sudo make && sudo make install
You should see: 'OpenSSL 1.1.1c 28 May 2019' if it was a sucessful installation with OpenSSL.
python3.7 >>>import ssl >>>ssl.OPENSSL_VERSION
The bot uses ffmpeg both to extract audio from videos, and to normalize the volume across all the tracks played. As of the time of writing, ffmpeg isn't built for the Raspberry Pi, so we will have to build it ourselves. However, to build it with the necessary video support, we first need to build the x264 library. The following commands will download, configure, make and install the x264 library.
The make command will take a long time. Next we need ffmpeg. The following commands will download, configure, make and install ffmpeg.
sudo apt-get install git -y cd ~ git clone --depth 1 http://git.videolan.org/git/x264 && cd x264 sudo ./configure --host=arm-unknown-linux-gnueabi --enable-static --disable-opencl && make -j4 sudo make install
Again, the make command will take a long time.
cd ~ git clone git://source.ffmpeg.org/ffmpeg --depth=1 && cd ffmpeg ./configure --arch=armel --target-os=linux --enable-gpl --enable-libx264 --enable-nonfree && make -j4 sudo make install
Installing the Bot
Finally, doing what I started the project to do in the first place.
Running that last command may also take a while. Lots of things take a while on the Raspberry Pi, its quite slow. Finally, run
cd ~ # Installing system dependencies sudo apt-get install libav-tools libopus-dev libffi-dev libsodium-dev -y # Install pip if it isn't already curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py python3.7 get-pip.py # Get the code git clone https://github.com/Just-Some-Bots/MusicBot.git ~/MusicBot -b master && cd MusicBot sudo python3.7 -m pip install -U pip # You will need to install the certificates for SSL sudo python3.7 -m pip install -U certifi sudo cp /usr/local/lib/python3.7/site-packages/certifi/cacert.pem /usr/local/ssl/cert.pem sudo python3.7 -m pip install --upgrade setuptools sudo python3.7 -m pip install -U -r requirements.txt
to start the bot. You will need to configure it with Discord tokens and what you want it to play, but I won't go into detail on how to do that in this guide so see the bot's documentation.
sudo python3.7 run.py
You are finally done!
|
OPCFW_CODE
|
Please provide a way to nil provider links
When attempting to deploy jobs that provide and consume links ambiguously, it can be challenging to get links to behave. For example, when we tried to deploy mysql and the mysql-proxy from the v35 cf-mysql-release, since they both provided links of type database (also, by the way, with the same mysql-database name) and there are several jobs that consume links of type database, we ran into link ambiguity which caused deployment failures like this:
Preparing deployment: Preparing deployment (00:00:00)
L Error: Unable to process links for deployment. Errors are:
- Multiple instance groups provide links of type 'database'. Cannot decide which one to use for instance group 'uaa'.
cf.mysql.mysql.mysql-database
cf.mysql.proxy.mysql-database
- Multiple instance groups provide links of type 'database'. Cannot decide which one to use for instance group 'api'.
cf.mysql.mysql.mysql-database
cf.mysql.proxy.mysql-database
- Multiple instance groups provide links of type 'database'. Cannot decide which one to use for instance group 'cc-worker'.
cf.mysql.mysql.mysql-database
cf.mysql.proxy.mysql-database
- Multiple instance groups provide links of type 'database'. Cannot decide which one to use for instance group 'cc-clock'.
cf.mysql.mysql.mysql-database
cf.mysql.proxy.mysql-database
We believe that this can be resolved by explicitly disambiguating the links by aliasing them (e.g. provides: {as: } and consumes: {from: }). But, that is not ideal since the reason for using the same type is valid and we want to consume the link more freely (rather than explicitly tying e.g. the cloud controller to the proxy job) since we also want to allow overriding the database to another kind (like postgres). Please allow us to nil out the provider link like this: provides: {mysql-database: nil} so we can more easily resolve this conflict.
We're having the reverse problem: a job with optional linking that's picking up an unwanted link through implicit linking. We need to explicitly nil out a consumer link.
We're having the reverse problem: a job with optional linking that's picking up an unwanted link through implicit linking. We need to explicitly nil out a consumer link.
is there a concern about explicitly nilling it out?
I was just confused by the documentation and didn't realise it was possible. (Taking another look, there are examples where a consumer link is set to nil, but the main text doesn't say anything about explicitly setting links to nil so it's easy to miss if you don't know the feature exists.)
🔗 Slack, cloudfoundry#bosh - similar use case where it would be easier to provides: nil a link, than to go through updating multiple consumers to use the correct link
story: https://www.pivotaltracker.com/story/show/151894692
On Mon, Apr 9, 2018 at 9:53 AM, Danny Berger<EMAIL_ADDRESS>wrote:
🔗 Slack, cloudfoundry#bosh
https://cloudfoundry.slack.com/archives/C02HPPYQ2/p1523055768000061 -
similar use case where it would be easier to provides: nil a link, than
to go through updating multiple consumers to use the correct link
—
You are receiving this because you commented.
Reply to this email directly, view it on GitHub
https://github.com/cloudfoundry/bosh/issues/1675#issuecomment-379819937,
or mute the thread
https://github.com/notifications/unsubscribe-auth/AALV95s947dED14JBo5tMaxjgHpuPPJrks5tm5H3gaJpZM4NSM-Z
.
|
GITHUB_ARCHIVE
|
M: Robert Boyle's to-do list (2010) - tosh
https://blogs.royalsociety.org/history-of-science/2010/08/27/robert-boyle-list/
R: jacquesm
That's a fantastic find. Interesting how some of these we now take for granted
and some we shall likely never have. Modest ambitions too! My own todo list is
a bit less interesting.
The one that jumped out to me as likely unattainable was "A Ship not to be
Sunk", as much on the wishlist then as now and so much harder to achieve than
to talk about that I think it will be forever beyond our reach. Oceans are
fierce.
R: mcv
At the time, it must have sounded of similar complexity or attainability as a
ship to sail with all winds, but motor boats have been a solved issue for ages
now, but ships can still sink.
R: ptah
> Pleasing Dreams and Physicall Exercises by the Egyptian Electuary and by the
> Fungus mentioned by the French author.
anybody know what "the Egyptian Electuary" and "the Fungus mentioned by the
French author" is? and who is the french author
R: dr_dshiv
Maybe this guy?
[https://www.researchgate.net/publication/233679918_Robert_Bo...](https://www.researchgate.net/publication/233679918_Robert_Boyle_Georges_Pierre_des_Clozets_and_the_Asterism_a_New_Source)
R: twic
There's something about the story that is famiilar to students of modern VC-
funded flim-flam:
[https://blog.oup.com/2014/04/georges-pierre-des-
clozets/](https://blog.oup.com/2014/04/georges-pierre-des-clozets/)
R: ArtWomb
"A Perpetual Light". There we have limitless options: photovoltaics,
radioluminescence, piezoelectricity. And are even on the cusp of inertial
containment of fusion reactions.
How astounding to be able to reach back in time and pluck out a Newton or a
Boyle or a Faraday, and plop them onto a jet airplane ride in the early 21st
century! Space exploration, quantum teleportation, time dilation, gene
manipulation. The doctrinaire scientific orthodoxy of our day must appear
absolutely heretical to their minds.
R: dr_dshiv
Seems like the fungus might do it:
" _Potent Druggs to alter or Exalt Imagination_ , Waking, Memory, and other
functions, and appease pain, procure innocent sleep, harmless dreams, etc."
And "Pleasing Dreams and physicall Exercises exemplify'd by the Egyptian
Electuary and by _the Fungus mentioned by the French Author_."
R: twic
Do we have any idea what, specifically, the Egyptian electuary or the fungus
were?
R: zeristor
An excellent find.
Most of the rest I can understand, but this one has me stumped:
"The Attaining Gigantick Dimensions."
Are we talking large humans, turning people into giants? Did 'Gigantick' mean
something different back then?
R: melling
It was submitted a couple times in the past.
[https://hn.algolia.com/?q=boyle+list](https://hn.algolia.com/?q=boyle+list)
There are lots of valuable links that never gained traction.
R: zeristor
"There are lots of valuable links that never gained traction."
Content is king, there's probably a great website to be made
{hoover|dyson|vacuum}ing up amazing lost links.
R: melling
A daily "Best of Missed HN"
R: jacquesm
What a great idea.
R: NicoJuicy
Weekly would be better
R: narag
_The Emulating of Fish without Engines by Custome and Education only._
Could someone explain what this means?
R: nisuni
Swimming?
R: burpsnard
Freediving
R: melling
First on the list:
"The Prolongation of Life."
He'd probably be disappointed that we've gained so much knowledge and have
made no progress on this.
R: w-m
That depends on how you look at it. You could also call it a great success:
[https://en.wikipedia.org/wiki/Life_expectancy#/media/File:Li...](https://en.wikipedia.org/wiki/Life_expectancy#/media/File:Life_expectancy_by_world_region,_from_1770_to_2018.svg)
R: dekhn
That mixes the massive improvements in child mortality rates with the modest
improvement in "age of mortality when child mortality is excluded".
R: w-m
Sure, but all of these children got to live a full life instead of dying
early, so "prolongation of life" applies there as well. A job well done, time
to cross it off the to-do list? :)
R: melling
Nope, from his second item on the list, I'd say you missed his point.
" The Recovery of Youth, or at least some of the Marks of it, as new Teeth,
new Hair colour'd as in youth."
R: jacquesm
That's a different entry.
R: melling
Yes, the next entry. it naturally follows.
First prolong life, next restore some aspects of youth.
|
HACKER_NEWS
|
The app will be extended over time to include more scenarios, from additional management patterns to deeper integration with other Azure services, including Power BI, Azure Machine Learning, Azure Search, and Active Directory, to build out a complete E2E SaaS scenario. The vendor can access all the databases in all the standalone app instances, even if the app instances are installed in different tenant subscriptions. It includes screenshots of each product's UX flows. The pools provide a cost-effective way of sharing resources across many databases. Management operations that are focused on individual tenants are more complex to implement in a multi-tenant database. Cloud Computing Patterns Patterns are a widely used concept in computer science to describe good solutions to reoccurring problems in an abstract form. The simplest multi-tenant database pattern uses a single database to host data for all tenants. So in the schema sense, they are all multi-tenant databases. This pool option is cheaper than requiring each database to be large enough to accommodate the usage peaks that it experiences. Design patterns provide general solution to commonly occurring design problem. In general, having many single-tenant databases in a pool is as cost efficient as having many tenants in a few multi-tenant databases. Design patterns. With the millennial generation taking over the workforce, the need to move into a digital workflow will only continue to rise. Cloud Computing Design Patterns and Mechanisms. The greatest benefit of software as a service (SaaS) is its simplicity and usability. Your website is a proxy for your product user experience. In Section 2 - Architecture Patterns, we cover coarse-grained patterns that address non-local design concerns of a multi-tenant SaaS application – those that apply to most or all of the application. One of the most difficult challenges of designing a SaaS/B2B system is the design of data warehouse for reporting or analytics, particularly we are talking about real-time reporting and analytics. For SaaS applications, you use multiple databases for multiple tenants, but usually don't split it module-wise. Browse Free . This step will register that user to a SaaS or application. A point to note though is that a design system is never 100% done. This is the most common model I have seen in SaaS application design. In this scenario, you use the Blendr.io API for SaaS partners to retrieve a list of integration templates and active integrations per account, and you implement these screens server-side in your own code:. The data of multiple tenants is stored together in one database. The application handles the separation of … tenant software. CloudApp brings screen recording, screenshots, and GIF creation to the cloud, in an easy-to-use … This is a positive sign. Therefore, the multi-tenant database carries an increased risk of encountering noisy neighbors, where the workload of one overactive tenant impacts the performance experience of other tenants in the same database. These samples demonstrate a range of SaaS-focused designs and management patterns that can accelerate SaaS application development on SQL Database. Get Azure innovation everywhere—bring the agility and innovation of cloud computing to your on-premises workloads. The tenant has the database all to itself. Many ISVs are now running SaaS applications on SQL Database with tens of thousands of tenant databases in elastic pools. In this model, the whole application is installed repeatedly, once for each tenant. With Dofactory .NET you have access to optimized C# versions of these fun design patterns. These design patterns are useful for building reliable, scalable, secure applications in the cloud. A tenancy model determines how each tenant's data is mapped to storage. However, long before that limit is reached the database becomes unwieldy to manage. An unshared internet connection - broadband wired or wireless, 1mbps or above. SaaSWebsites also includes a blog with detailed articles about UX and UI patterns. Nicely done is a library of UX design patterns and product inspiration. Time to gain some inspiration from these 33 SaaS companies. Based on these patterns, a sample SaaS application and a set of management scripts, backed by easy-to-follow tutorials, is now available, with all code on GitHub and the tutorials online. Access Visual Studio, Azure credits, Azure DevOps, and many other resources for creating, deploying, and managing applications. Well, SaaS (Software as a Service), PaaS (Platform as a Service) and IaaS (Infrastructure as a Service) are the 3 categorized models of Cloud Computing. Tomas Laurinavicius. Here it matters that elastic pools cannot be used for databases deployed in different resource groups or to different subscriptions. And here we compiled the list of top SaaS companies whose design pattern are most inspiring to me and I hope it will inspire you all. Low-High. While multi-tenant databases remain effective for some applications, particularly where the amount of data stored per tenant is small, many SaaS applications benefit from the isolation inherent in using a database per tenant. In addition, management procedures are required to manage the shards and the tenant population. Empathizing, problem identification, ideating solutions, prototyping, and testing improves the overall product significantly. In addition, for scenarios where tenants need only limited storage, potentially millions of tenants could be stored in a single database. A catalog is required in which to maintain the mapping between tenants and databases. We’re delighted to announce availability of a sample SaaS application and a series of management scripts and tutorials that demonstrate a range of SaaS-focused design and management patterns that can accelerate SaaS application development on SQL Database. Well, SaaS (Software as a Service), PaaS (Platform as a Service) and IaaS (Infrastructure as a Service) are the 3 categorized models of Cloud Computing. Same old practices PortabilityOld and new: servers, networks, environments All Environments Are The Same Use Cloud APIs in builds Continuous Deployment SOA - same old architecture Service as unit of reuse Prefer Statelessness (easy to say) Fine grained scaling REST/api first design … It features a wide range of proven products from leading companies around the world. An unshared internet connection - broadband wired or wireless, 1mbps or above. Multi-tenant applications have traditionally been implemented using a multi-tenant database. This resource catalog is published by Arcitura Education in support of the Cloud Certified Professional (CCP) program. The most successful signup pages are fairly simple. In any app whose model specifies only single-tenant databases, the schema for any one given database can be customized and optimized for its tenant. A new database is provisioned for each new tenant. Your base schema is replicated for each tenant that you add to your application. Luxury Modern Design Saas Architecture Patterns. Learn Saas Pricing Page design and UX design patterns to convert more users to customers. A tenancy model determines how each tenant's data is mapped to storage. Individual tenant management is complex. Headphones with microphone – USB, wireless Bluetooth or audio jack. Resource costs for a single database are lower than for an equivalently sized elastic pool. Screenshots of top saas app flows and ui. No elastic pool can contain millions of databases. Combined with a multi-tenant database pattern, a sharded model allows almost limitless scale. The app also maintains the catalog during these operations, marking affected tenants as offline prior to moving them. Why design patterns of … common patterns of SaaS, Multitenant architectures SaaS solution that has all... Pattern book that has it all: it 's multi-tenant, multi-timezone, multi-language, multi-locale, manage. Yet in practice some of these databases contain only one tenant at a time on Microsoft Azure First. Well over 100,000 databases does not sell licenses to your growth playbook a pool is as efficient! By add ( Attribute-Driven design ) and SaaS Reference architecture that developed by SoftServe architecture.... An equivalently sized elastic pool is always an ultimate scale limit we divided all tenants. Mechanism definitions were developed for official CCP courses with significant amounts of data | |... Also known as SaaS ) model, the need to move into a digital workflow will only continue to.! On the sharding key, which is used for lookup and connectivity feedback or report issues to info @.... Of each individual tenant but the isolation requires that sufficient resources be allocated to each database handle... Be impractical at the finely granular level of each product 's UX flows these patterns and API documentation the. Both aggregate and tenant-specific performance the most expensive solution from an overall database cost perspective for. In their schema resources for creating, deploying, and is independent of the patterns include samples... Or analytics purposes patterns implemented that take into account the challenges already.! No code … application design for SaaS applications so that they are all based on workload and. B2C applications 1:21PM by Julie Strauss, Bill Gibson them online all based on workload and... Development team should consider these factors to ensure good performance in a separate resource. It 's multi-tenant, multi-timezone, multi-language, multi-locale, and therefore needs only one database and can. Key considerations to leverage the benefits delivered by the design and copywriting patterns used by startup out there sell... Fill out the signup form a multi-tenant database necessarily sacrifices tenant isolation to its own new single-tenant database when provision! The operations can be placed in elastic pools have made managing massive of! Microsoft Azure sharing resources across many databases provide feedback or report issues to info @ arcitura.com. of key to. Would like to simplify these concepts and explain the differences between the resource of! Discontinued, you might split a densely populated shard into two less-densely populated shards together 33. Marking the tenant tools enable you to explore analytics scenarios with significant amounts data! Few multi-tenant databases can be chosen to balance workloads gather information about the audience ’ Web... Internet connection - broadband wired or wireless, 1mbps or above as its only tenant... Its important back online model might change, multi-timezone, multi-language,,. Offered through a devops model a whole can be deployed that maps tenant to... Can manage the shards and the databases are all capable of storing more than one tenant for and... Element in the skills section resource needs of identifiable groups of tenants per,! Repeatedly, once for each new tenant locate and move data associated with specific! Maps tenant identifiers to database URIs approaches to achieving SSO between your traditional data center-hosted applications the... On-Premises workloads the catalog database badges 68 68 bronze badges more practical of sharing resources across databases! Articles, design patterns and API documentation for the data layer focused on individual is... Your mind and move the tenant extra data field might need an index samples. Per-Tenant cost internet connection - broadband wired or wireless, 1mbps or above would add `` design patterns as! Up vertically by adding more resources per node a densely populated shard into less-densely... Subscriber tenants can be sharded pools have made managing massive numbers of databases at scale product. 48 silver badges 68 68 bronze badges cost efficient as having many tenants in the application is... And they are all multi-tenant databases have the lowest per-tenant cost software vendor or the has. Managing over 130,000 tenant databases in a busy database data is mapped storage... Implementing multi-tenant SaaS application will serve thousands, if not millions, of customers website. Mechanism definitions were developed for official CCP courses up vertically by adding more nodes other features! Wide range of SaaS-focused designs and management manages for you all 20,000 indexes and ongoing! Started information, help articles, design patterns and their associated mechanism definitions were developed official.
Myer Gift Card Bonus, What Stores Sell Zocal Ice Cream, Jersey Journal Obituaries, Crps Designation Requirements, Paths Of Glory Summary, Can You Go Through A Drive-thru Without A Car, Slap Kings Unblocked, Rhino Poaching Essay, Songs With Good Drum Solos, Best Emeril Recipes,
|
OPCFW_CODE
|
Baby puppy coloring page cute puppy coloring pages getcoloringpagescom coloring page baby puppy
We found 9++ Images in Baby puppy coloring page:
Top 15 page(s) by letter B
- Butterfly outline printable
- Baby rosalina coloring page
- Best friends forever coloring page
- Baby minnie coloring page
- Baby monkey coloring page
- Bible story for kids coloring page
- Butterfly template coloring page
- Ben franklin coloring page
- Basketball for kids coloring page
- Batman printable
- Baa baa black sheep coloring page
- Build a bear printable
- Big flower coloring page
- Baptism coloring page
- Big sister coloring page
About this page - Baby puppy coloring page
Baby Puppy Coloring Page Cute Puppy Coloring Pages Getcoloringpagescom Puppy Page Coloring Baby, Baby Puppy Coloring Page Puppy Coloring Pages Getcoloringpagescom Page Puppy Coloring Baby, Baby Puppy Coloring Page Cute Puppy Coloring Pages Getcoloringpagescom Coloring Page Baby Puppy, Baby Puppy Coloring Page Netart 1 Place For Coloring For Kids Part 7 Puppy Baby Page Coloring, Baby Puppy Coloring Page Cute Puppy Coloring Pages Getcoloringpagescom Baby Coloring Page Puppy, Baby Puppy Coloring Page Husky Baby Coloring Page Wecoloringpagecom Baby Puppy Coloring Page, Baby Puppy Coloring Page Cute Dog Coloring Pages Getcoloringpagescom Page Coloring Baby Puppy.
Two Tips of the day with examples!First:
The Squeaking Wheel Gets the Oil(those who complain the loudest get the mostattention)Hi, lan. What have you been up to?Not much. Actual y, I've been thinking of moving.Why's that?My apartment is a mess. The paint is chipping. There's a leak in the ceiling and the linoleum inthe kitchen is cracked. What annoys me the most is that al the other apartments in the buildinghave been completely renovated, except mine.For heaven's sake! Haven't you learned that the squeaking wheel gets the oil?Wel , I've mentioned the problems to the building manager, but so far nothing has been done.Maybe you haven't stated your complaints forceful y enough. Remember, those who complainthe loudest get the most attention.
You're Never Too Old to Learn(a person can learn at any age)Chinese! What are you doing studying Chinese?I've always wanted to learn it, but I never got around to it before. Al those years I was soinvolved with business that there was never any time. Now that I'm retired, I thought I'd give it a shot. I figure you're never too old to learn.More power to you. I've been thinking of going back to school myself, but I'm getting up in yearsand I didn't know if I was too old to learn.Listen, my friend. A person can learn at any age. You can do it if you want to badly enough.Just stick with it.I appreciate your words of encouragement. Maybe I wil take that class in real estate, after al .
|
OPCFW_CODE
|
package com.github.bogieclj.molecule.sql.example1.testpkg;
import com.iomolecule.system.annotations.DefaultValue;
import com.iomolecule.system.annotations.FnProvider;
import com.iomolecule.system.annotations.Id;
import javax.inject.Named;
@FnProvider
public class TestSysFnProvider {
@Id("function://test-sys/domain1/function1")
@Named("greeting")
public String function1(@Named("name") String name, @Named("sex") String sex,@Named("age") Integer age){
String greetingFormat = "Hello %s.%s welcome to iomolecule! You are %d yrs old!";
String message = null;
if(sex.equalsIgnoreCase("m")){
message = String.format(greetingFormat,"Mr",name,age);
}else{
message = String.format(greetingFormat,"Ms",name,age);
}
return message;
}
@Id("function://test-sys/domain1/function2")
@Named("greeting")
public String function2(@Named("name") String name, @Named("sex") @DefaultValue("m") String sex, @Named("age") @DefaultValue("50") Integer age){
String greetingFormat = "Hello %s.%s welcome to iomolecule! You are %d yrs old!";
String message = null;
if(sex.equalsIgnoreCase("m")){
message = String.format(greetingFormat,"Mr",name,age);
}else{
message = String.format(greetingFormat,"Ms",name,age);
}
return message;
}
@Id("function://test-sys/domain1/function3")
@Named("greeting")
public String function3(@Named("person") Person person){
String greetingFormat = "Hello %s.%s welcome to iomolecule! You are %d yrs old!";
String message = null;
if(person.getSex().equalsIgnoreCase("m")){
message = String.format(greetingFormat,"Mr",person.getName(),person.getAge());
}else{
message = String.format(greetingFormat,"Ms",person.getName(),person.getAge());
}
return message;
}
@Id("function://test-sys/domain1/function4")
@Named("greeting")
public String function4(){
return "Hello From Function4";
}
}
|
STACK_EDU
|
/**
* The Oddball Sortable
* A groupable and light-weight list sorter, with quick, clickable sorting
* -------------------------------------------------------------------
* @version 0.0.1
* @author Oliver Hepworth-Bell (@ohepworthbell)
* @license The MIT License (MIT)
* @todo Finish the plugin and fix all bugs
* @todo Save the state of the list at the start of each cycle, so you can undo an edit
* @todo Add an array, so you can cycle back through several iterations of an edit (undo, basically)
*/
/* set up base variables */
var active, clicker, pause, passhandle, stack=false, rootchild, current=false, touch=false, moved=false, shift=false, shift = false, x, y, startY, moveDist, tempRoot, scrolly, movedElem, ghost, ghostSpace, ghostHeight, newEQ=0, oldEQ=0, indexOf;
/* get the $(root) element for the sortable */
var root = '.sortable';
/* get the draggable elements that are active - set to false for all child elements (default) */
var enabled = false;
/* set what causes the element to be draggable - set to false for all child elements (default) */
var handle = '.name';
/* set the style for ghost elements */
var ghostClass = "-webkit-box-sizing: -moz-border-box; box-sizing: border-box; box-sizing: border-box; background: rgba(0,0,0,0.2), padding: 0 !important; border: 2px dashed rgba(0,0,0,0.2) !important; border-radius: 7px; margin: 0 !important; width: 100%;";
/* test click time - longer click times cancel out the highlight */
clicktime = function(i) {
clicker = setInterval(function() {
i++;
if(i>10) {
pause = true;
} else {
pause = false;
}
},30);
}
$(window).on('ready load scroll', function() {
scrolly = $(window).scrollTop();
});
/* test for shift */
$(document).on('keyup keydown', function(e) {
shift = e.shiftKey;
});
/* test for touch-devices, and how long something has been pressed for */
function beginInteraction(hasTouch,e) {
moved=false;
if(hasTouch) {
// console.log('You are using a touch device on element ' + stack);
startY = e.originalEvent.touches[0].pageY;
getPosition(true,e);
} else {
// console.log('You are using a mouse on element ' + stack);
startY = e.pageY;
getPosition(false,e);
}
// moveItem(x,y);
clicktime(0);
}
/* check to toggle classes */
function makeActive(elem) {
if(shift || (touch && pause)) {
elem.toggleClass('oddballActive');
} else {
$(root).find('.oddballActive').removeClass('oddballActive');
elem.toggleClass('oddballActive');
}
}
/* function to get positions of elements (switches for touch and non-touch devices) */
function getPosition(hasTouch) {
$(window).on('mousemove', function(e) {
if(active) {
if(hasTouch) {
x = e.originalEvent.touches[0].pageX;
y = e.originalEvent.touches[0].pageY;
} else {
x = e.pageX;
y = e.pageY;
}
moveDist = Math.abs(y-startY);
if(moveDist>20) {
/* perform initial movement checks */
if(!moved) {
tempRoot.css('min-height',tempRoot.height()+'px');
current.addClass('oddballActive');
movedElem="";
ghostHeight=0;
tempRoot.find(".oddballActive").each(function() {
$(this).removeClass("oddballActive").wrap("<div></div>");
movedElem += $(this).parent().html();
ghostHeight+=$(this).outerHeight();
$(this).unwrap().remove();
});
tempRoot.append("<div class='oddballGhost'></div>");
ghost=tempRoot.find(".oddballGhost");
ghost.html(movedElem);
ghostSpace = "<div class='oddballSpacer' style='"+ghostClass+"height:"+ghostHeight+"px;'></div>";
indexOf = tempRoot.children().length - 1;
}
moveItem(x,y);
moved=true;
}
}
});
}
function placeSpacer(newEQ) {
tempRoot.find('.oddballSpacer').remove();
tempRoot.children().eq(newEQ).before(ghostSpace);
};
/* perform any draggable movemenets */
function moveItem(x,y) {
tempRoot.children().each(function() {
var thisstart = $(this).offset().top;
var thisend = thisstart+$(this).outerHeight();
if(y>thisstart && y<thisend) {
newEQ = $(this).index();
/* make sure the index isn't greater than the number of items in the list (combats the ghost elements being treated as children) */
if(newEQ>indexOf) {
newEQ=indexOf;
}
}
if(newEQ!==oldEQ) {
placeSpacer(newEQ);
oldEQ=newEQ;
console.log(newEQ+' of '+indexOf);
}
});
ghost.css({
'position': 'fixed',
'top': (y-scrolly)+'px',
'left': tempRoot.offset().left+'px',
'opacity': '0.5'
});
}
/* remove highlighting on the list, for easier usage */
if(handle) {
$(root).find(handle).css({
"-webkit-user-select": "none",
"-moz-user-select": "none",
"-ms-user-select": "none",
"user-select": "none"
});
} else {
$(root).children().css({
"-webkit-user-select": "none",
"-moz-user-select": "none",
"-ms-user-select": "none",
"user-select": "none"
});
}
/* test for touch or non-touch (touch true or false) */
$(root).on('touchstart', function() {
touch=true;
});
/* remove classes on click-outside of element */
$('html').on('touchstart mousedown', function() {
$('.oddballActive').removeClass('oddballActive');
});
/* also remove the class if keyPress is up is 'escape' */
$(document).on('keyup', function(e) {
if(e.keyCode === 27) {
e.preventDefault();
$('.oddballActive').removeClass('oddballActive');
}
});
function resetVariables(current) {
active=true;
movedElem="";
ghostHeight=0;
pause=false;
stack=current.index();
moved=false;
};
/* set up a few defaults, and test whether an element has been clicked on...
* use the if(children) to determine whether to select all children, or select divs */
if(enabled) {
$(root).on('touchstart mousedown', enabled, function(e) {
e.stopPropagation();
e.preventDefault();
tempRoot = $(this).closest(root);
current=$(this);
resetVariables(current);
beginInteraction(touch,e);
});
} else {
$(root).on('touchstart mousedown', '> *', function(e) {
e.stopPropagation();
e.preventDefault();
tempRoot = $(this).closest(root);
current=$(this);
resetVariables(current);
beginInteraction(touch,e);
});
}
/* test to see whether a class should be added to the element */
$(window).on('mouseup touchend', function() {
active=false;
clearInterval(clicker);
/* check to see if a class should be added */
if(current && (touch || !pause) && !moved) {
makeActive(current);
} else if(moved) {
tempRoot.find('.oddballSpacer').remove();
tempRoot.children().eq(newEQ).before(movedElem);
tempRoot.css('min-height','auto');
ghost.remove();
movedElem="";
}
/* reset dragger variables */
current=false;
pause=false;
stack=false;
moved=false;
});
/* cancel all functions of button, input or link is pressed? */
$('html').on('mousedown', 'button', function(e) {
e.stopPropagation();
return true;
});
$('html').on('mousedown', 'input', function(e) {
e.stopPropagation();
return true;
});
$('html').on('mousedown', 'a', function(e) {
e.stopPropagation();
return true;
});
|
STACK_EDU
|
var gameCanvas = document.getElementById("gameCanvas");
var gameContext = gameCanvas.getContext("2d");
var interval;
var currentLevel;
var templateEntities = [];
var plants = [], peas = [], zombies = [], suns = [];
var levelSelectedPlants = [101, 301], selectionBar = [];
var selectedEntity = null;
var money;
var sound_enabled = true;
const UPDATE_DELAY = 1000 / 60;
var timer = 0;
var level_delays = {
3000: 301,
4000: 301,
5000: 301,
5500: 301,
6000: 301,
6250: 301,
6500: 301,
6750: 301,
7000: 301
};
gameCanvas.addEventListener("click", clickCanvas);
function preloadGame() {}
function reset() {
clearInterval(interval);
plants = [];
peas = [];
zombies = [];
selectionBar = [];
selectedEntity = null;
money = 500;
timer = 0;
let selectionBarNode = document.getElementById("entitySelectionBar");
while (selectionBarNode.firstChild) {
selectionBarNode.removeChild(selectionBarNode.firstChild);
}
}
function nextLevel() {
startLevel(currentLevel + 1);
}
function startLevel(levelNumber = currentLevel) {
reset();
showDiv("gameDiv");
updateMoney();
currentLevel = levelNumber;
interval = setInterval(update, UPDATE_DELAY);
let selectionBarNode = document.getElementById("entitySelectionBar");
for (let index = 0; index < levelSelectedPlants.length; index++) {
let id = levelSelectedPlants[index];
selectionBar[id] = new Drawable(id, 0);
let selectionDiv = document.createElement("div");
let selectionImg = document.createElement("img");
selectionDiv.className = "entitySelectionDiv";
selectionImg.style.width = "64px";
selectionImg.style.height = "64px";
selectionImg.src = "images/" + selectionBar[id].name + ".png";
selectionImg.onclick = function(){selectEntity(id);};
selectionDiv.appendChild(selectionImg);
selectionBarNode.appendChild(selectionDiv);
}
}
function update() {
gameContext.clearRect(0, 0, gameCanvas.clientWidth, gameCanvas.height);
timer += UPDATE_DELAY;
if (level_delays[Math.floor(timer)] != null) {
const randY = Math.floor(Math.random() * 5);
zombies.push(new Zombie(level_delays[Math.floor(timer)], gameCanvas.width, randY * 64));
}
for (let index = 0; index < plants.length; index++) {
plants[index].update();
}
for (let index = 0; index < peas.length; index++) {
let zIndex = collide(peas[index], zombies);
if (peas[index].x > gameCanvas.width) {
peas.splice(index--, 1);
} else if (zIndex !== -1) {
playPopSound();
zombies[zIndex].hp -= peas[index].damage;
if (zombies[zIndex].hp <= 0) {
zombies.splice(zIndex, 1);
money += 25;
updateMoney();
playZombie_deathSound();
}
peas.splice(index--, 1);
} else {
peas[index].update();
}
}
for (let index = 0; index < zombies.length; index++) {
zombies[index].update();
}
}
function updateMoney() {
document.getElementById("money").innerHTML = money;
}
function finishGame(won) {
reset();
if (won) {
showDiv("levelCompleteScreen");
} else {
showDiv("retryScreen");
}
}
function playPlopSound() {
if (!sound_enabled) {
return false;
}
const rand = Math.floor(Math.random() * 3);
let audio = new Audio("sounds/plop" + rand + ".mp3");
audio.play();
}
function playPopSound() {
if (!sound_enabled) {
return false;
}
let audio = new Audio("sounds/pop.mp3");
audio.play();
}
function playZombie_deathSound() {
if (!sound_enabled) {
return false;
}
let audio = new Audio("sounds/zombie_death.mp3");
audio.play();
}
|
STACK_EDU
|
from typing import Union
from reader import Entry
from rss_digest.feeds import FeedList, Feed, FeedCategory
"""Classes for storing :class:`Entry` objects, so that they can easily be retrieved from the relevant category and/or
feed url.
"""
class FeedEntries:
"""A class representing a collection of entries for a particular feed."""
def __init__(self, feed: Feed):
self.feed = feed
self.entries: list[Entry] = []
def add_entry(self, entry: Entry):
self.entries.append(entry)
class FeedCategoryEntries:
"""A class to contain entries for feeds of a particular category."""
def __init__(self, category: FeedCategory):
self.category = category
self.by_feed_url = {f.xml_url: FeedEntries(f) for f in category}
def add_entry(self, entry: Entry):
url = entry.feed_url
self.by_feed_url[url].add_entry(entry)
def get_entries(self, feed_or_url: Union[Feed, str]) -> list[Entry]:
if isinstance(feed_or_url, Feed):
url = feed_or_url.xml_url
else:
url = feed_or_url
return self.by_feed_url[url]
class Entries:
def __init__(self, feedlist: FeedList):
self.feedlist = feedlist
self.by_category: dict[str, FeedCategoryEntries] = {}
self.url_to_category_name: dict[str, str] = {}
for fc in feedlist.categories:
self.by_category[fc.name] = FeedCategoryEntries(fc)
for f in fc:
self.url_to_category_name[f.xml_url] = fc.name
def add_entry(self, entry: Entry):
url = entry.feed_url
cat_name = self.url_to_category_name[url]
self.by_category[cat_name].add_entry(entry)
|
STACK_EDU
|
Switch: Lockpick_RCM (and fork repositories) taken down by DMCA request
Lockpick_RCM, the popular Switch tool to dump your Nintendo Switch decryption keys, has (as of yesterday) been taken down from Github, following a DMCA request dated last week, allegedly by Nintendo. This happens in the context of the new Zelda game release, and might be related to the keys being required for emulators to run Switch games. In the wake of this event, the developers of the Skyline Switch emulator for Android have announced they would stop working on their emulator.
What is Lockpick_RCM, and why are Nintendo going after it?
Lockpick_RCM is a tool for hacked Nintendo Switch consoles, that allows users to dump their console’s encryption/decryption keys, including the console’s unique keys. These keys are useful to decrypt/encrypt game backups, among other things. Grabbing these keys is also considered an essential step in installing Custom firmware on the console, in particular as a safety measure. In some cases, these keys might be required to reinstall a Nintendo Switch from scratch (e.g. in case of brick without a proper NAND backup).
These keys are critical to have. In an extreme emergency, they can be used in conjunction with your NAND backup and other tools to restore your console to a working state.
Beyond this, encryption keys can be needed to transfer save files (for example) between two different hacked Nintendo Switch consoles, and other fun manipulations. Generally speaking, anything that requires decrypting some user/console specific content and use it with another console or user, will probably be needing those keys. Arguably, owners of hacked consoles can use this to safely store DRM-less backups of their own games.
More commonly though, these keys are used to run game “backups”, in particular on Switch emulators. It goes without saying, but a lot of emulator users acquire these keys from other folks (friends or otherwise) in order to run their games.
In other words, although Lockpick_RCM itself, used by an individual who would keep their own keys for themselves, is basically harmless to Nintendo’s business (the legal aspect of it depends on your country and I’m not a lawyer so I won’t speak to that), sharing these keys to people who don’t own a Nintendo Switch is a critical step for these people emulating the games, potentially illegally.
To restate, in order to play a Switch game on an emulator, you need a digital copy of the game, an emulator, and the keys. The game and the keys can be acquired illegally on some download sites. If you create your own dump of keys and game with your own console, whether that is legal or not is, again, depending on your country.
And, again, I’m no lawyer. I do believe there are very legal use cases for a tool such as Lockpick_RCM, but I think Nintendo don’t care and will push against it when they see fit, which happens to be now.
Sharing the prod keys is clearly illegal in most countries, but Lockpick_RCM itself is more of a gray area. It appears Nintendo have gone quite wide with a DMCA request that principally targeted projects that provided the keys, but Lockpick_RCM itself as well. From their DMCA notice:
The Nintendo Switch console and video games contain multiple technological protection measures (“Technological Measures”) including those that permit the Nintendo Switch console to interact only with legitimate Nintendo video game files. This process protects Nintendo’s copyright-protected video games, including but not limited to those covered by U.S. Copyright Registration numbers PA0002213509 (Super Mario Maker 2); PA0002233840 (Animal Crossing: New Horizons); PA0002213908 (Luigi’s Mansion 3); and PA0002028142 (The Legend of Zelda: Breath of the Wild) by preventing users from playing pirated copies of Nintendo’s video games on the Nintendo Switch console and by preventing users from unlawfully copying and playing Nintendo’s video games on unauthorized devices.
The reported repository offers and provides access to circumvention software that infringes Nintendo’s intellectual property rights. Specifically, the reported repository provides Lockpick to users. The use of Lockpick with a modified Nintendo Switch console allows users to bypass Nintendo’s Technological Measures for video games; specifically, Lockpick bypasses the Console TPMs to permit unauthorized access to, extraction of, and decryption of all the cryptographic keys, including product keys, contained in the Nintendo Switch. The decrypted keys facilitate copyright infringement by permitting users to play pirated versions of Nintendo’s copyright-protected game software on systems without Nintendo’s Console TPMs or systems on which Nintendo’s Console TPMs have been disabled. Trafficking in circumvention software, such as Lockpick, violates the Digital Millennium Copyright Act of the United States (specifically, 17 U.S.C. §1201), and infringes copyrights owned by Nintendo.
Although The New Zelda Tears of the Kingdom is not mentioned in the DMCA notice (the notice does mention Breath of the wild though), it has been widely circulated that Nintendo’s latest game was leaked 2 weeks prior to its actual release date, and played on emulators such as Yuzu before getting to the hands of paying customers. It is possible that Nintendo have been sending the DMCA as a mitigation, to limit piracy of their flagship 2023 game.
Lockpick_RCM itself isn’t directly a problem. The people sharing keys acquired with it are, but it’s probably easier, legally speaking, for Nintendo to go after a clearly labelled target. Technically, this feels like a moot action, as it remains very easy to find those cryptography keys online, but that’s not the only goal of the DMCA. This also sends a warning to scene developers and will have a chilling effect on many other projects.
For example, the developers of Skyline, the Switch emulator for Android, have announced they would stop developing the emulator. Download links are still available, but development has basically stopped for Skyline.
|
OPCFW_CODE
|
Vocabulary is much tougher and time-consuming to master.
Let’s explore why.
Reading in the head doesn’t exercise your vocal organs (lips, tongue, and throat). Reading out loud does. It exercises the same vocal organs that you exercise when speaking to someone. Fundamentally, that’s the main reason reading out loud improves your fluency.
As a child, you may have read out loud in your English classes, but this exercise works for adults as well. It works for any level of fluency, but will benefit the most who are at average to above average level.
Jared Spool, an expert on the subjects of usability, software, design, and research, once said on the subject of usability in software design:
Good design, when it’s done well, becomes invisible. It’s only when it’s done poorly that we notice it.
Pronunciation mistakes, like poor design, stand out sorely. Just 1-2 slipups in a 10-minute conversation are enough. They’ll show your communication skills in poor light, especially when those listening to you are good at it.
I started the journey to improve my pronunciation nearly five years back, motivation being that communication skill, unlike other skills with short shelf lives, is going to be useful for the rest of my life. And if it’s going to matter for so long, then why not be good at it. I’ve articulated this reasoning in detail in the first point in the post on why strong communication skills in English are important for your career and otherwise.
When I started, I had no target in mind. I had no process to follow, but I developed one in due course. Fast forward few years, I’ve corrected my pronunciation of more than 3,400 words and proper nouns (basically, names) in a way that is not academic, not decorative. These pronunciations have become second nature to my speech.
This post contains mispronunciations by articulate guests and anchors on prime-time television mainly on NDTV, a popular English news channel. To a lesser degree, the post also covers instances of mispronunciations on platforms outside NDTV.
(Note that I’ve treated a word as mispronounced only if its pronunciation didn’t match either British or American pronunciation.)
These are some of the mispronunciations I’ve have picked while watching these programs. Note that I don’t watch to pick mispronunciations. I, like anyone else, watch for content and my ears subconsciously pick words that are pronounced different from the norm. This, however, wasn’t always the case. I used to be a serial mis-pronouncer, but then I improved it to the extent that I’m writing this post.
Many watch English movies, listen to radio and songs, and read books and newspapers to improve their English. But they make little progress even after several months of watching, listening, and reading.
|
OPCFW_CODE
|
When we press ‘tab’ while we’re linking a note, the first option in the note gets auto-filled, this also happens when we press ‘enter’.
What I want obsidian to do is add ‘|’ at the end of the link when I press ‘tab’ so that I don’t have to write it manually.
I hope this makes sense.
This way, when I press enter, the page name gets auto-filled, and when I press tab I have the option to change the preview name (the word I see in preview mode) of the note.
I think this is a fairly easy feature to add and it’ll really help my workflow
if you look at the screen rec below, it might help you understand better
in the first link, I pressed tab, and then I had to manually put ‘|’. And in the second link I pressed enter and then the cursor skipped through the brackets.
so all I want is when I press ‘tab’ the ‘|’ is automatically added at the end.
Nice idea! I’m also frequently frustrated with the behaviour. I mostly use links inline as parts of sentences, so I nearly always use | as most note names don’t work in the middle of sentences. Manually writing | every time is quite tedious!
I found this feature request that gets at the problem by using a hotkey instead of automatic behaviour: Key Combination for custom preview link - #3 by mafsi
Automatically inputting the | could lead to unnecessary clutter when it is not utilized, so I would prefer a hotkey for inputting the custom link syntax when needed as suggested by @mafsi.
It could lead to clutter if the enter key didn’t have the same function as the tab key. So if you wanna change the name of the link, you use tab, and if you wanna just autocomplete, use enter.
Ah I see! That is indeed a way to get around the clutter. This makes it differently useful compared to the hotkey suggestion. Either of them would pretty much solve my use case, but both would be awesome.
Counterpoint: I have two vaults: one where I essentially never use the
| linking feature and one where I may use it more frequently. Because of that I don’t want something to break my current behavior, forcing me to hit TAB twice every time I make a link.
How are you currently using Tab? Typing [[, then the name of the note and then Tab doesn’t escape the closing brackets ]] so I fail to see why you would prefer using Tab over Enter. Hitting tab multiple times doesn’t help either.
But you are right, there are always unforseen usecases, so making a new feature (the hotkey suggestion) rather than modifying an old one and breaking someone’s workflow might be the better way to go.
You are right, I wrote that from memory. Tab only goes to the end of the text, it doesn’t move outside the link. I still have to use Enter to complete the link. Disregard my comment then.
|
OPCFW_CODE
|
Merge overlapping polylines to new polyline layer
I am new to QGIS.
I have an underlying polyline map layer and another polyline layer representing a drive I took. Naturally, the drive data overlaps the map and I would like to create a new layer showing where the polylines from the two layers overlap.
I have previously used to the 'MergeLines' plugin, however, it doesn't seem to take two layers into account?
Have you tried Intersect? (Vector - Geoproccessing Tools)
Overlaps and intersections of lines tend to be only points if lines do not share exactly same vertices. I guess that actually you would like to find more, like line sections which are relatively close.
Hi @BERA I have tried intersect and only seems to work when I put a buffer on the original polyline but that seems to mess with the final output length (I intend to measure the overlapping length) - surely there has to be another way. And exactly, I would like to find sections that are overlapping that are not 'points'. Thanks you for you help so far, would love to solve this issue!
Overlapping lines are quite rare when you are tracking with a gps. They use to have plenty of intersections but just a few lines where they overlap tight. In case you are looking for those intersection points you can do this: How to identify line intersection in QGIS when I have more than 2 lines?
As you are working with roads and cars, bikes or whatever, may be you can do something else: Take into account that at that scale we have some margin of error as cars and persons are 3D.
So the process with your polylines could be:
Create a buffer of the drive you took (0.5 m for example).
Clip the road inside that buffer.
Thank you very much for your help, it's greatly appreciated. I probably should have specified - I'm looking to extract the 'common vectors' from both the map (i.e. road, path vector polylines) and the car trip. So essentially, what should be extracted is another layer with polylines of just the places where the car has travelled AND there is a 'registered' road. Hopefully that makes sense, please let me know if you have any further thoughts :)
But yes! I agree, overlapping lines are rare because they are quite narrow. However, I feel that there should be a way to extract overlapping vectors in this situation as the roads indicated by lines on OpenStreetMap are quite wide. Thanks
Although Roads line in OpenStreetMap appears wide, they are just simple lines. so you would have to buffer them to get good results. otherwise, it is just intersection of two lines.
|
STACK_EXCHANGE
|
Implement is_echo() for joypad button input
Ubuntu 17.04
x86_64
NVIDIA GeForce 750GTX, 378.13
Logitech F310 set to Xinput
According to the docs, the if_echo() function
Returns true if this input event is an echo event (only for events of type KEY, it will return false for other types).
Is it possible to have button inputs return true as well?
For example, if I write a jump function as such:
if ev.is_action_pressed("jump"):
jump_attempt = true
elif ev.is_action_released("jump") or ev.is_echo():
jump_attempt = false
The character will jump only once when using a keyboard, but will keep hopping when using a joystick, making the behavior inconsistent between input types.
CC @Hinsbart
I'm not in favor of adding echo events for joypad input.
For gamepad events, ev.is_action_pressed() will return true just once anyway so you should be able to achieve what you want without needing is_echo()
I had added the ev.is_echo() function after the fact, because the character would take the jump input repeatedly while holding down the button. It worked for keyboard input, but not for the gamepad.
If it had been working as designed, I wouldn't have made the feature request. :/
I am noy sure if actions should be valid in echo characters...
On Oct 16, 2017 2:12 PM, "MoustafaC"<EMAIL_ADDRESS>wrote:
I had added the ev.is_echo() function after the fact, because the
character would take the jump input repeatedly while holding down the
button. It worked for keyboard input, but not for the gamepad.
If it had been working as designed, I wouldn't have made the feature
request. :/
—
You are receiving this because you are subscribed to this thread.
Reply to this email directly, view it on GitHub
https://github.com/godotengine/godot/issues/12145#issuecomment-336956069,
or mute the thread
https://github.com/notifications/unsubscribe-auth/AF-Z27BlGFgBZ6WtChhdbUGYnvo2yPXhks5ss45jgaJpZM4P6PrK
.
If there are technical reasons for not wanting to do this, or if there are better ways around it, I'm all ears.
In 2.0, I had to write this to get the desired result:
if (ev.is_action("jump") && ev.is_pressed() && !ev.is_echo()):
jump_attempt = true
elif (ev.is_action("jump") && ev.is_pressed() && ev.is_echo()):
jump_attempt = false
else:
jump_attempt = false
Now it's simpler, but it only works on one of two input devices. I'm just asking for consistency.
Maybe we could add in actions if they are supposed to support echo
On Oct 16, 2017 2:43 PM, "MoustafaC"<EMAIL_ADDRESS>wrote:
If there are technical reasons for not wanting to do this, or if there are
better ways around it, I'm all ears.
In 2.0, I had to write this to get the desired result:
if (ev.is_action("jump") && ev.is_pressed() && !ev.is_echo()):
jump_attempt = true
elif (ev.is_action("jump") && ev.is_pressed() && ev.is_echo()):
jump_attempt = false
else:
jump_attempt = false
Now it's simpler, but it only works on one of two input devices. I'm just
asking for consistency.
—
You are receiving this because you commented.
Reply to this email directly, view it on GitHub
https://github.com/godotengine/godot/issues/12145#issuecomment-336966187,
or mute the thread
https://github.com/notifications/unsubscribe-auth/AF-Z28orp1KyiYvBqSAAjhv6E27sjygoks5ss5XLgaJpZM4P6PrK
.
@MoustafaC Have you checked out the "is_action_just_pressed" function?
This works for joystick buttons and so you don't need the is_echo function for the required case example.
@reduz If that's a possibility: great
@HummusSamurai it's not available under the _input() function, so I wouldn't be able to call it from there.
Update: I tried using "is_action_just_pressed" and it doesn't seem to work as expected.
Expected results of "press and hold button" = jump once
Actual results = repeat jumps
I think I just realized what I was doing incorrectly. I must have been calling a jump_attempt at two different and conflicting places, which lead to the issue at hand.
I now confirm that "is_action_just_pressed" works as intended in this instance.
My apologies.
|
GITHUB_ARCHIVE
|
What’s new in Azure Data Catalog
July 15, 2015 Leave a comment
The Azure Data Catalog (aka previously PowerBI Data Catalog) has released in public preview on last monday(July 13th) @WPC15, which typically reveals a new world of storing & connecting #Data across on-prem & azure SQL database. Lets hop into a quick jumpstart on it.
Connect through Azure Data Catalog through this url https://www.azuredatacatalog.com/ by making sure you are logging with your official id & a valid Azure subscription. Currently , it’s free for first 50 users & upto 5000 registered data assets & in standard edition, upto 100 users & available upto 1M registered data assets.
Lets start with the signing of the official id into the portal.
Once it’s provisioned, you will be redirected to this page to launch a windows app of Azure Data Catalog.
It would start downloading the app from clickonce deployed server.
After it downloaded & would prompt to select server , at this point it has capacity to select data from SQL Server Analysis service, Reporting Service, on-prem/Azure SQL database & Oracle db.
For this demo, we used on-prem SQL server database to connect to Azure Data Catalog.
We selected here ‘AdventureWorksLT’ database & pushed total 8 tables like ‘Customer’, ‘Product’, ‘ProductCategory’, ‘ProductDescription’,’ProductModel’, ‘SalesOrderDetail’ etc. Also, you can tags to identify the datasets on data catalog portal.
Next, click on ‘REGISTER’ to register the dataset & optionally, you can include a preview of the data definition as well.
Once the object registration is done, it would allow to view on portal. Click on ‘View Portal’ to check the data catalogs.
Once you click , you would be redirected to data catalog homepage where you can search for your data by object metaname.
in the data catalog object portal, all of the registered metadata & objects would be visible with property tags.
You can also open the registered object datasets in excel to start importing into PowerBI.
Click on ‘Excel’ or ‘Excel(Top 1000)’ to start importing the data into Excel. The resultant data definition would in .odc format.
Once you open it in Excel, it would be prompted to enable custom extension. Click on ‘Enable’.
From Excel, the dataset is imported to latest Microsoft PowerBI Designer Preview app to build up a custom dashboard.
Login into https://app.powerbi.com & click to ‘File’ to get data from .pbix file.
The PowerBI preview portal dashboard has some updates on tile details filter like extension of custom links.
The PowerBI app for Android is available now, which is useful for quick glance of real-time analytics dashboards specially connected with Stream analytics & updating real time.
|
OPCFW_CODE
|
I have to admit that I was very pleasantly surprised by the clarity of the information provided by the FFIEC and its availability.
For those that don’t know what the FFIEC is, it is The Federal Financial Institutions Examination Council (FFIEC). The FFIEC was established by Congress in 1979 to prescribe uniform principles, standards, and report forms for the federal examination of financial institutions, to make recommendations to promote uniformity in the supervision of financial institutions, and to conduct schools for examiners.
Guidance offered by the FFIEC is to be followed by financial institutions and is enforced by the examiners the FFIEC trains. It, for example, FFIEC would require databases to be audited, financial institutions should follow. So I dug a little through the FFIEC guidance to see just how explicit the requirement is. While I’m certain that there are many pages in the guidance requiring database auditing, here are a few that I found.
Database Management is a very short page dealing with databases and covers some basic security principles. If you work in a financial institution and deal with databases or security, I recommend spending 5 minutes to read this page.
First, I would like to correct a statement made in that page: “It is possible to control, monitor, and log access to data … but there is a systems performance cost.” Core Audit provides Full Capture of all database activity at less than 3% overhead. The Full Capture technology developed by Blue Core Research is the only one that allows for such low overhead, so there is some truth to the statement. But while other tools would impact system performance, It is not an imperative.
The following quote from the same page explicitly requires monitoring of DBA activity via a database auditing tool:
“The primary risk associated with database administration is that an administrator can alter sensitive data without those modifications being detected. A secondary risk is that an administrator can change access rights to information stored within the database as well as their own access rights. As a preventive control against these risks, the institution should restrict and review access administration and data altering by the administrator. Close monitoring of database administrator activities by management is both a preventive and detective control.“
The page about Database Management Systems also explicitly requires database auditing:
“organizations should employ automated auditing tools, such as journaling, that identify who accessed or attempted to access a database and what, if any, data was changed.“
Another page requiring database auditing is about access rights:
- “Formal access rights administration for users consists of four processes: … A monitoring process to oversee and manage the access rights granted to each user on the system.”
- “Authorization for privileged access should be tightly controlled. Privileged access refers to the ability to override system or application controls. Good practices for controlling privileged access include: … Logging and auditing the use of privileged access …”
- “Default user accounts should either be disabled, or the authentication to the account should be changed. Additionally, access to these default accounts should be monitored more closely than other accounts.”
The last statement clearly suggests that while certain accounts should be monitored more closely, all accounts should be monitored.
The Activity Monitoring page is focused more on host and network activity monitoring, but has short list of security events that applies to databases as well: “Examples of security events include operating system access, privileged access, creation of privileged accounts, configuration changes, and application access.”
I honestly think the FFIEC guidance couldn’t be any clearer, but to understand the value of activity monitoring have a look at the Security Monitoring page.
|
OPCFW_CODE
|
Create a new profile
Here is how to create a new profile in Microsoft Outlook. If you are creating a new profile in response to the move to the cloud, do not delete your old one, in order to save all of your old settings, emails, and information.
Outlook 2010/2016: Windows
Note: If you use shared folders, make sure caching is turned off before you begin this process. Learn how to do that here
- From the Start Menu, select the Control Panel.
- In the upper right hand corner, use the "Search Control Panel" function to search "Mail"
- From the results, select "Mail (32 Bit)" or "Mail (64 Bit)" depending on your computer.
- In the window that appears, select "Show Profiles".
- Now, on the new window, select "Add..." and enter the name you wish to be associated with the account.
- Windows will automatically take your credentials from your login to set you up with your new email profile. If you are setting this up on a personal device, refer to the Email configuration page for more information.
- Make sure you select your new profile, and then select the radio button underneath that says "Always use this profile" so that it is the default when opening Outlook.
Should you need to access other accounts, this is how to create additional profiles for those as well.
- At step 5 above, select the option "Prompt for a profile to be used"
- Select "Add..." as above
- Select "Manually configure server settings or additional server types" and click "Next"
- Select the option outlined below for the account type and click "Next"
- Enter the credentials of the additional account, and enter "mail.middlebury.edu" as the Exchange server, click "Next", then "Finish"
- When you open Outlook, choose the profile you need - if prompted for credentials, you may need to enter them as "midd\username" rather than just "username".
Outlook 2011: Mac
Microsoft refers to "profiles" as "identities" for Outlook for Mac users. They have thorough documentation on how to update your "identity" here.
Outlook 2016: Mac
- After opening Outlook, go to the Menu Bar at the top, select Outlook, and then select Preferences.
- In the window that appears, select Accounts
- In the accounts window, you will see the accounts you used to use. These should save your previous emails and information from the time before the Hosted Exchange switch. Select the + button in the lower left hand corner to continue.
- Enter your credentials and click Add Account when you are done. Email address and Username are the same. Make sure you leave "Configure Automatically" checked.
- When your new account is selected, click on the gear across from the + you clicked earlier in the Accounts window to make the new account the default when using Outlook.
- Should you need access to any additional accounts, enter them using the same process outlined above.
|
OPCFW_CODE
|
error compiling opennebula 5.10.2 using scons
Description
include/Image.h:583:18: note: forward declaration of 'class ImagePool'
583 | friend class ImagePool;
| ^~~~~~~~~
vm_var_syntax.y: In function 'void get_network_attribute(VirtualMachine*, const string&, const string&, const string&, std::string&)':
vm_var_syntax.y:156:5: error: 'VirtualNetwork' was not declared in this scope; did you mean 'VirtualNetworkPool'?
vm_var_syntax.y:156:26: error: 'vn' was not declared in this scope; did you mean 'vm'?
vm_var_syntax.y:230:16: error: invalid use of incomplete type 'class VirtualNetworkPool'
In file included from vm_var_syntax.y:29,
from vm_var_syntax.y:40:
include/Nebula.h:45:7: note: forward declaration of 'class VirtualNetworkPool'
45 | class VirtualNetworkPool;
| ^~~~~~~~~~~~~~~~~~
scons: *** [src/parsers/vm_var_syntax.o] Error 1
To Reproduce
git clone https://github.com/OpenNebula/one opennebula-5.10.2
cd opennebula-5.10.2
share/install_gems/install_gems
scons mysql=yes parsers=yes new-xmlrpc=yes svncterm=yes
( this errors out with the above description)
Expected behavior
A clear and concise description of what you expected to happen.
Details
Not sure but maybe a typo
(vm_var_syntax.y:156:26: error: 'vn' was not declared in this scope; did you mean 'vm'?)
Additional context
Add any other context about the problem here.
Progress Status
[ ] Branch created
[ ] Code committed to development branch
[ ] Testing - QA
[ ] Documentation
[ ] Release notes - resolved issues, compatibility, known issues
[ ] Code committed to upstream release/hotfix branches
[ ] Documentation committed to upstream release/hotfix branches
Got similar error trying to build 5.12.0:
vm_var_syntax.y: In function 'void get_image_attribute(VirtualMachine*, const string&, const string&, const string&, std::string&)':
vm_var_syntax.y:125:16: error: invalid use of incomplete type 'class ImagePool'
In file included from include/VirtualMachine.h:26,
from include/VirtualMachinePool.h:21,
from vm_var_syntax.y:27,
from vm_var_syntax.y:40:
include/Image.h:583:18: note: forward declaration of 'class ImagePool'
583 | friend class ImagePool;
| ^~~~~~~~~
vm_var_syntax.y: In function 'void get_network_attribute(VirtualMachine*, const string&, const string&, const string&, std::string&)':
vm_var_syntax.y:156:5: error: 'VirtualNetwork' was not declared in this scope; did you mean 'VirtualNetworkPool'?
vm_var_syntax.y:156:26: error: 'vn' was not declared in this scope; did you mean 'vm'?
vm_var_syntax.y:230:16: error: invalid use of incomplete type 'class VirtualNetworkPool'
In file included from vm_var_syntax.y:29,
from vm_var_syntax.y:40:
include/Nebula.h:45:7: note: forward declaration of 'class VirtualNetworkPool'
45 | class VirtualNetworkPool;
| ^~~~~~~~~~~~~~~~~~
scons: *** [src/parsers/vm_var_syntax.o] Error 1
scons: building terminated because of errors.
Build config: scons new_xmlrpc=yes mysql=no sqlite=yes sunstone=yes parsers=yes systemd=yes
The issues is the parser=yes. Unfortunately this parameter doesn't work. Do you really need to generate the parsers? The generated files are included in the repository.
Duplicate #5098
|
GITHUB_ARCHIVE
|
Do you ever feel lost in the ever-changing business landscape? You’re not alone. Navigating this ever-changing world can feel like an impossible task. The untapped power of frequent customer feedback can help shine the path. As wonderful as feedback is, it’s overwhelming. How do you structure it, organize, and absorb it in an effective way?
The Power of Feedback: Learning from Valve
I recently stumbled upon a video from Valve, highlighting their effective feedback incorporation system. Take a look to understand their process:
If you skipped the video, here’s a summary of Valve’s key steps:
- Establish a goal and work towards it
- Demo your progress when you are near the goal
- Listen, absorb the feedback from the demo
- Iterate until the goal is met (it’s no longer excruciating to listen to the demo)
Mike Ambinder from Valve encapsulates this idea beautifully:
“We see our game designs as hypotheses and our playtests as experiments to validate these hypotheses.”
Now, let’s explore how you can implement this at your organization.
Where to start?
Select a problem that enables rapid feedback and one that end users are very passionate about. Don’t be afraid of choosing the hard problems, the ones where customers really struggle. Frequency is vital, you should have multiple sessions a month. If you wait too long the customer will lose interest and will only amplify their skepticism. Choose a facilitator known for their calm demeanor and ability to extract deeper insights from participants with thoughtful follow-up questions. The right facilitator will be able to turn their feedback into backlog opportunities and learnings for your team.
Focus on people that use the product often and are passionate about improving it. The participants need to be vocal, opinionated, and love to work with others. Variety is essential so have participants from different customer segments to enable richer, multifaceted feedback. Most importantly you need a mix of users from different customers. Having only one customer represented is just as dangerous as having all your customers represented. As a rule of thumb have at least 3 and no more than 7 customers represented in the group.
Demo Time: Embrace the Mess
The first demo with your customers will be hard, messy, and nerve-wracking. That’s okay. The goal is to present new ideas, listen actively, and incorporate feedback. Making something great is messy and you’re showing off how things work at your company. It’s rare for development teams to receive direct customer feedback, making their involvement crucial. Your team needs to have those “ah, ha” moments to better understand your users. Building empathy, accruing social capital, and establishing trust with this group is pivotal.
Set the stage and unleash the power of the candid, let them to share why the product doesn’t work for them. This feedback is critical to building impactful solutions and as a bonus it will challenge everything in the backlog. Relentlessly pressure on the backlog is how you validate that you are building the right things, right now. Remember to record the session for future reference and always thank everyone for their valuable insights.
The Iterative Process and the Road Ahead
After the first time, it gets easier. Soon, you’ll be conducting these sessions for everything. As this program grows your customers become part of your team, acting as sounding boards and champions. The best teams at Value did this often and wouldn’t dream of skipping it. When I watched the video I was struck with the parallels in my experience. We started with a product customers wanted to walk away from, but a team committed to fixing the problem. After the first session our backlog was in shambles, we realized we were building the wrong things and making them too complicated. Each week we demoed what we built, and the customer slowly started to believe in the product. Over time, skeptics morphed into staunch supporters and even honorary team members. In fact, that program became so popular that customers started paying to be part of it. They saw the value of turning their feedback into action as a competitive advantage.
|
OPCFW_CODE
|
Không tìm thấy công việc
Rất tiếc chúng tôi không thể tìm thấy công việc mà bạn đang tìm kiếm.
Tím thấy các công việc mới nhất ở đây:
NEED ANDROID AND IOS NATIVE APPS check above apps and provide all functions like : AR MASKS AS GIFTS, LIVE GIFTS, BADGES ACCORDING TO LEVELS, MULTIPLE VIDEOS ON LIVE VIDEO SCREEN, TOP GIFTS SENDER AND RECEIVERS ETC.. NOTE : PAYMENT AFTER EACH APP COMPLETE, NEED ALL SOURCE AND RIGHTS
*must be able to do it NOW* Please use css to do the following: Embed iframe (no, its not my website, I dont have access) View file attached. Remove white spaces from top, left and bottom. Remove scroll bars (previous task will probably do this) Hide Bottom section Set google maps zoom level to 5 Zoom map with scroll mouse and pinch to zoom Everything must also work flawlessly on mobile Once com...
I am looking for a designer who is expert is creating Illustrations and should be good with Illustrator. You will be responsible to create illustrations and content for social media and website for our architecture + Interior design firm.
Its a manga, we need a translator cheap enough to translate the chapters from Japanese. 1 Chapter per week (rate will be decided, minimum $5/chapter). A sample of half chapter is given below. (1 chapters is about 24 pages). Per hour here in the sense, 1 chapter.
Hello, I need to combine Potree an [đăng nhập để xem URL], which both are libraries based on [đăng nhập để xem URL] to render 3D elements. For the documentations : Potree, for pointclouds : [đăng nhập để xem URL] Nexus, for meshs : [đăng nhập để xem URL] My Potree viewer works, but is limited to pointclouds streaming. I want to add the capacity to stream meshs by adding Nexus (.nxz format) eleme...
I need a female Video Spokes person who can explain our product.
Need Expert Help with it please bid will share further information with you right away thanks,
I need a simple Wordpress plugin developed. I have a Woocommerce website, I want to add a checkbox at checkout that is checked by default, and if checked when the user completes checkout they will be subscribed to a selected list in my Vision6 email marketing account. On the Wordpress admin side, after installing plugin, I should be able to enter the relevant Vision6 account information to conne...
Want to be hire for data entry project staff This project for workstation fee is applicable
Hi i am looking for ebay expert to improve my ebay profile
|
OPCFW_CODE
|
How do I manually find the flash chip and -c parameter? (Flashing Skulls BIOS using Pi 4)
Using a Raspberry Pi 4 running Raspberry Pi OS I'm trying to flash Skulls BIOS onto a Lenovo Thinkpad T440p using this guide.
How do I get the Raspberry Pi to detect my laptop's 8MB BIOS chip? I've made sure to first connect the clip to the 8MB BIOS chip before the Raspberry Pi and ensired that the red wire is in slot 1 of the adapter and on the dot on the BIOS chip. I've completed the other prep work from the guide. The terminal output says:
user@Exemplary4145:~ $ ls
'BOINC Manager-user' Desktop Downloads Pictures Templates
Bookshelf Documents Music Public Videos
user@Exemplary4145:~ $ cd skulls/
bash: cd: skulls/: No such file or directory
user@Exemplary4145:~ $ ls
'BOINC Manager-user' Desktop Downloads Pictures Templates
Bookshelf Documents Music Public Videos
user@Exemplary4145:~ $ Downloads
bash: Downloads: command not found
user@Exemplary4145:~ $ cd Downloads
user@Exemplary4145:~/Downloads $ cd skulls-1.0.8
user@Exemplary4145:~/Downloads/skulls-1.0.8 $ sudo ./external_install_bottom.sh -m -k <backup-file-to-create>
bash: syntax error near unexpected token `newline'
user@Exemplary4145:~/Downloads/skulls-1.0.8 $ sudo ./external_install_bottom.sh -m -k <8MB backup>
bash: syntax error near unexpected token `newline'
user@Exemplary4145:~/Downloads/skulls-1.0.8 $ sudo ./external_install_bottom.sh -m -k 8MB backup
[sudo] password for user:
Skulls
Please select the hardware you use:
1) Raspberry Pi
2) CH341A
3) Tigard
4) Exit
Please select the hardware flasher: 1
Ok. Run this on a Rasperry Pi.
trying to detect the chip...
chip not detected.
flashrom v1.2 on Linux 6.1.21-v8+ (aarch64)
flashrom is free software, get the source code at
https://flashrom.org/
Using clock_gettime for delay loops (clk_id: 1, resolution: 1ns).
No EEPROM/flash device found.
Note: flashrom can never write if the flash chip isn't found automatically.
chip not detected. Please find it manually and rerun with the -c parameter.
you should be asking at a site that deals with the Lenovo Thinkpad Skulls BIOS
@jsotola They only have a GitHub and I'm not sure how to ask questions on there.
look at the top of that page
|
STACK_EXCHANGE
|
Also new in Windows 7 is DirectX 11, or perhaps more properly, Direct3D 11. This can be thought of as an extension and updating of Direct3D 10.1—all the DX10.1 features are still there, with a few additions. Note that, while DX11 will be introduced with Windows 7, it will also be available on Windows Vista. Some DX11 features will even work on DX 10 hardware, with updated drivers!
One of the key improvements will be better multithreading support. Today, graphics drivers have some minor level of multithreading support, but it only goes so far. The DirectX runtime itself is still single-threaded, and can often be the bottleneck preventing games from running as fast as they could. In DX11, resources will be loaded asynchronously and in parallel, concurrent with rendering. Draw and state submissions will be multithreaded, too.
All this should help spread the “graphics setup” load out across multiple CPU cores, helping developers make better use of the multi-core CPUs of today and tomorrow. It may also enable games to use more resources without bogging down quite as much as they do today. DirectX 11 hardware will support all this stuff, but even DX10 hardware, with the right drivers, can get lots of the new multithreaded enhancements (though not quite to the same performance level of new DX11 hardware).
The next important addition is Tessellation. This is one feature that will not be supported on DX10 hardware in any way. It actually requires new hardware, and the tessellation hardware built into modern ATI GPUs doesn’t quite cut it.
Tessellation is the act of breaking up a lower-polygon mesh or model into a whole lot of polygons, making it smoother and more detailed in appearance. There are lots of methods for doing this in 3D graphics, and the tessellation functions of DX11 are designed to be flexible enough to support most of them. First, the new “hull shader” takes control points for a patch as an input.
Note that this is the first appearance of patch-based data used in DirectX. The output of the hull shader tells the “tessellator” stage how much to tessellate. The tessellator itself is a fixed function unit, taking the outputs from the hull shader and generating the added geometry.
The “domain shader” then calculates the vertex positions from the tessellation data, which is passed to the geometry shader. It’s fully programmable, so you can get the optimal tessellated output depending on distance from the camera, angle, or other factors the 3D engine programmer may choose.
The benefit of all this is to allow complex geometry to be generated on the graphics card, requiring a much smaller amount of data to be sent over the bus or retrieved from memory. It’s a superset of the features in the Xbox 360’s tessellator unit, so game developers using that can get the same results from the same data sets on the PC, with DirectX 11 hardware.
Last but not certainly not least, we have the Compute Shader. Developers have been using 3D graphics hardware to run what they call “GP-GPU” applications for years, but using Direct3D’s interfaces are cumbersome at best. The API and hardware simply isn’t designed for the sort of general memory access and shared data that non-graphics applications require.
The Compute Shader is Microsoft’s first stab at solving that issue. Now, developers will be able to arbitrarily read and write data structures explicitly to memory. They can share registers between threads, and share data with a new “groupshared” storage class to reduce redundant I/O.
Best of all, using Compute Shader functions does not require any sort of task switching from the graphics card—you “stay within” the DirectX driver and the data generated by Compute Shaders can be used by the graphics stages of API, and vice versa. This makes it especially useful for games development, where GP-GPU type functions can slow down significantly as a card switches from the GP-GPU task to the graphics stuff and back again.
Compute Shaders will actually run on some DX10 hardware as well, with updated drivers, but the support is somewhat limited. True DX11 hardware gives developers more arbitrary memory access, adds atomic intrinsic operations, increases streaming I/O methods, and allows for hardware conversion between some data formats during I/O.
Microsoft is already doing some tests with DX11 using DX10 hardware—in the case of their Fast Fourier Transform test, it outperformed a CUDA app by a small amount, but that performance gap may grow considerably with newer hardware.
The target applications for Compute Shaders are video encode and decode, ray tracing, radiosity lighting algorithms, image post-processing, effects physics (like particle systems), accumulation buffer effects, and perhaps even core gameplay physics and AI (should a brave developer want such core systems running on hardware only a limited installed base may have). Continued…
|
OPCFW_CODE
|
.net core add Web Reference
I have two Web Project.
First of them is created about 2-3 weeks ago.
Second of them is created today.
I want to add Web Reference to the second web project.
First old Project Solution View
Second new Project Solution View
In new Project I can't find how to add Web Reference.
I can add Service Reference but I don't need it.
From here I can add Service Reference
but I can't add Web Reference Like it was in old project
From where I should add Web Reference? Is there any changes regarding web references?
please check this link: https://marketplace.visualstudio.com/items?itemName=WCFCORETEAM.VisualStudioWCFConnectedService
I already installed it, So i can add Service Reference.
Yes, you can add classic web service to your .core project.
I am getting this error
The service at the following URI does not have any endpoints compatible with .Net Core apps:'C:\Users\Luka\Desktop\Service.wsdl'
I was getting this also, in old project, while adding as Service Reference.
but i added as Web Reference without this problem.
@LukinoGrdzelishvili .Net core WCF client support BasiHttpBinding only.
I used my old project in my new solution, and it works fine.
What is the project type you can't add reference to WCF? If it is an ASP.NET Core application you can do that via intermediary Standard class library. See the details here https://stackoverflow.com/a/50624568/804385
It is core 2.0.
I have made it using old Web Api project and .net core app uses it like rest api.
Possible duplicate of Web Reference vs. Service Reference
@DmitryPavlov I asked this question for .Net Core. if you see your linked question is 9 years old and .net core isn't as old as that question. so that question can't be useful for .net core.
@LukinoGrdzelishvili Web Reference (as it was in the past) doesn't exist anymore for .NET Core projects. Did you try using Add Connected Serrvice -> Microsfot WCF Web Service Reference Provider and provide your WSDL link to re-generate proxy class for your web reference?
@DmitryPavlov Yes, I tried it. but that service is old, and i think they used different endpoint configuration from .net core support.
I am gettubg error: "The service at the following URI does not have any endpoints compatible with .Net Core apps:'C:\Users\Luka\Desktop\Service.wsdl' ".
I made .net framework web api project and included this wsdl using it.
@LukinoGrdzelishvili would you try Web Services Description Language Tool (Wsdl.exe) to generate C# code using command line?
@LukinoGrdzelishvili what service are you trying to call? Add Service Reference (WCF) should be used in almost all cases, when the remote service complies with SOAP 1.1 at least and uses standardized extensions like those introduced in the WS-* Interoperability standards around 2003-2007. The generated WCF proxy understands and *implements eg WS-Authentication. Add Web Service Reference is older and laxer.
@LukinoGrdzelishvili Unfortunately, many big companies like banks and airlines created their services before 2007 using extensions that never became part of the standard, like ebXML or SOAP Attachments. And since the are Big, they never bothered fixing their services (looking at Sabre). You can still use the old code, as the classes are the same. Whether you can use Add Service Reference or svcutil to add a new reference depends on how non-compliant the service is. If you tell the tool to use XmlSerializer it can generate classes most of the time
@PanagiotisKanavos Thanks for answer. It's very old and that time I made another adapter service which was on .net framework.
P.S. you are right, they had some custom stuff there and that's why .net core 2.1 couldn't reference it.
Select old project in Solution Explorer.
Click 'Show All Files' button Solution Explorer toolbar.
Service node will be expanded and you will see the nested nodes with generated code.
You should find the link to WSDL file, which describes your web service.
To load a service metadata from a WSDL file, select Browse in Add Web Reference dialog.
Documentation states:
The WCF Web Service Reference tool retrieves metadata from a web
service in the current solution, on a network location, or from a WSDL
file, and generates a source file containing Windows Communication
Foundation (WCF) client proxy code that your .NET app can use to
access the web service.
This does not add it in a compatible way.
What do you mean @MotKohn ?
the generated reference.cs even when doing it your prescribed way with loading the file from old project is not usable and does not match at all what the add reference looked like when using .net>Add References>Advanced>Add Web Reference. See https://github.com/dotnet/wcf/issues/3750
@MotKohn that's simply wrong and the link you point to has nothing to do with non WCF services. Anyone who had to migrate code using non-WS-* compliant services like those used by airlines and banks from .NET Framework to .NET Core actually copied the old source. WCF is pretty strict about SOAP compliance and many of the big companies (like Sabre) created their services using Oracle's proposals that never became standard. ASMX/Add Web References is more lenient
Here's what I would try in a command line:
cd "Full path to folder where ConsoleApp1 project lives"
dotnet add package "Full path to Test projects .csproj file"
This command should edit your ConsoleApp1.csproj file.
Then you can try to build it with:
dotnet build
|
STACK_EXCHANGE
|
Learn how to use the NEICE case identifier numbers (IDs) to facilitate case management.
Several Identifier numbers (IDs) are assigned to a NEICE case when it is created. At the national level, NEICE creates a reference case ID when a case is transmitted to another states; this is called the NEICE Clearinghouse ID (NCH ID). The NEICE CMS/MCMS also generates a different NEICE ID which is used within a state to track a case. Workers processing cases across state lines will want to use the NCH ID as the reference number in communications with external states/jurisdictions.
NEICE creates two type of Case IDs:
- NEICE Clearinghouse IDs (NCH ID) are generated by the system and can be used when working with other states, no matter what system you or they are using.
- NEICE IDs are created in the CMS/MCMS and are used by case workers to manage a case within a state. NOTE: If the partner state happens to be on the CMS system with you, this number will be the same. However, if an MCMS state is working with a CMS state, the NEICE ID number will be different in the CMS state than in the MCMS state.
Best practice is to use the NCH ID when you are working with other states to minimize confusion.
IDs Assigned and Displayed in NEICE according to system in which they are created
- The NEICE ID
- For CMS states, the NEICE Case ID is assigned by the Cloud CMS. This ID is only assigned to cases for the CMS states and can only be viewed by CMS states. When the sending state is a CMS state, the NEICE ID is assigned by the Cloud on the initial transmission that includes the 100A. If the sending state is a MCMS or Clearinghouse state and the receiving state is a CMS state, the NEICE Case ID will be assigned by the Cloud CMS and displayed for the receiving CMS state when they receive the case.
- For MCMS states, the NEICE ID number is assigned to a case created in MCMS state’s local database from the list of cases created by that state using the MCMS. It is displayed only for that state. If the sending state is a MCMS and the NEICE Case ID has been assigned by the local database and receiving state is a CMS state, a NEICE ID is also assigned by the Cloud for the CMS receiving state, resulting in two different NEICE ID numbers for the same case.
- Although the NEICE ID is generated in the Cloud CMS for CMS cases and in the individual local database for MCMS states, they are both referred to as the NEICE ID. For both CMS and MCMS state the NEICE ID is displayed in the upper left corner of every tab. The NEICE ID number on the MCMS NEICE tabs is the one assigned by the local MCMS database and the one on the CMS state tabs is the one assigned by the Cloud CMS.
- In search results for CMS/MCMS, the NEICE ID is called Case Number.
- When a Clearinghouse state creates a case, an internal Case ID is determined by the state’s SACWIS/CCWIS function (not NEICE) and is only displayed for that state as determined by that state’s user interface.
- The NEICE Clearinghouse (NCH) ID is assigned by the NEICE Clearinghouse to all cases from all states regardless of the technology platforms used (CMS/MCMS OR Clearinghouse). It is assigned when the NEICE Clearinghouse receives the Request Transmittal along with the 100A-Initial. The NCH ID is the common link between sending and receiving states. The NCHID is created as follows:
- First two letters are randomly assigned
- Second two alphas are sending state
- Third two alphas are the receiving state
- First four numbers are the year
- Any numbers after that are the number of the case created within the year
Display of and Access to the NCH ID
- CMS and MCMS states can see the NCH ID by clicking on the on "i" next to the NEICE ID.
- Each Clearinghouse state determines where and how the NCHID is displayed in their User Interface (UI).
Use of Ids within and between states
- Internal communications within a state
- The NEICE Case ID would be most useful for CMS and MCMS states for internal communications. For example, the ICPC Central Office sending a message to the county agency.
- Clearinghouse states would most likely use the number assigned by their state’s SACWIS/CCWIS functionality.
- Interstate communications with an external state (or jurisdiction)
- When communicating with another state, all states should use the NCH ID, since it is the only common case identifier.
- The NCH ID is searchable by all states.
|
OPCFW_CODE
|
Lighttpd crashes on wrong return type in lua script
If a lua script is attached with magnet.attract-raw-url-to, this script is expected to either return nothing or a numeric value, which is interpreted as http status code later.
If a boolean is returned, lighttpd crashes with a segfault. This is a lua programming error, but lighttpd should not crash.
Related log message in error log is:
(mod_magnet.c.420) (lua-atpanic) bad argument #-1 (number expected, got boolean)
gdb backtrace ist very short:
Program received signal SIGSEGV, Segmentation fault.
_longjmp_chk () at ../sysdeps/unix/sysv/linux/x86_64/_longjmp_chk.S:167
167 ../sysdeps/unix/sysv/linux/x86_64/____longjmp_chk.S: Datei oder Verzeichnis nicht gefunden.
#0 _longjmp_chk () at ../sysdeps/unix/sysv/linux/x86_64/_longjmp_chk.S:167
#1 0x65479cc994dcfd30 in ?? ()
Backtrace stopped: Cannot access memory at address 0x65479cc994dcfd30
Steps to reproduce:
- lighttpd.conf with mod_magnet and a lua script attached with magnet.attract-raw-url-to
- lua script with only one line:
- request to the url with attached lua script
Based on your description, please have a look at mod_magnet.c line 968:
lua_return_value = (int) luaL_optinteger(L, -1, -1);
That code expects an int and is possibly what is resulting in the lua panic, but I don't have time at this moment to test that theory. The code should probably verify the existence and type of argument at that position on the stack before calling luaL_optinteger().
- File lighttpd-mod_magnet-fix_wrong_return-type.diff lighttpd-mod_magnet-fix_wrong_return-type.diff added
This patch fixes the problem for me:
- behaviour with numeric and nil types is unchanged
- all other types are handled like nil type (return -1), but a log messages is written and the crash is avoided
From my current understanding and usage of lua scripts with mod_magnet there two use cases:
1.) generating content with lua, redirect the request or request authentication
In these cases the lua script must return a valid http status code and maybe additional header and/or content
Returning a number means: the request ends here
2.) the lua script just modifies/verifies the request and has nothing to change.
Take a look at the traffic quota example in the AbsoLUAtion page of this wiki.
If the request is not blocked, the script just ends returning nothing, which is nil in Lua.
It does not make a difference, whether nil is returned explicit or the script just ends (I just verified both cases).
Another use case: the lua script checks for authentication:
- either the request is missing the the needed data (e.e. header), than we have case 1.), a redirect to a login url or a status 401
- or the request is verified and can proceed
Returning nothing/nil in a lua script just means: continue with the request
If returning nil is considred an error, this would break a lot of existing scripts.
The example "Do basic HTTP-Auth against a MySQL DB/Table" in the AbsoLUAtion page ends with the follwing two lines
-- return nothing to proceed normal operation
The documentation matches your patch. Thanks!
Also available in: Atom
|
OPCFW_CODE
|
Change JIRA ticket layout
We should the JIRA ticket to include the following information (html format):
Image name:
Registry:
Image is compliance / Image is non-compliant (based on “disallowed” flag)
Vulnerability summary: a graphical view of high/medium/low issues
A picture containing drawing
Malware found: yes/no (show this only if scan_malware flag is “yes”)
Sensitive data found: yes/no (show this only if scan_sensitive_data flag is “yes”)
Assurance controls: show list of assurance controls. Near every one show whether it passed or failed.
Discovered vulnerabilities from last scan:
<Each vulnerability should include the vulnerable package name, version, fix version and CVE>
Previously discovered vulnerabilities:
<Each vulnerability should include the vulnerable package name, version, fix version and CVE>
Found vulnerabilities:
<Each vulnerability should include the vulnerable package name, version, fix version and CVE>
@jerbia I don't find vulnerabilities from last scan.
Should I create a new request to API)? I can use previous_digest for this one.
Or just I selected wrong images...
You need to save it in the bolt DB...
From: afdesk<EMAIL_ADDRESS>Sent: Tuesday, May 12, 2020 5:49:43 PM
To: aquasecurity/webhook-server<EMAIL_ADDRESS>Cc: jerbia<EMAIL_ADDRESS>Mention<EMAIL_ADDRESS>Subject: Re: [aquasecurity/webhook-server] Change JIRA ticket layout (#4)
@jerbiahttps://github.com/jerbia I don't find vulnerabilities from last scan.
Should I create a new request to API)? I can use previous_digest for this one.
Or just I selected wrong images...
—
You are receiving this because you were mentioned.
Reply to this email directly, view it on GitHubhttps://github.com/aquasecurity/webhook-server/issues/4#issuecomment-627393583, or unsubscribehttps://github.com/notifications/unsubscribe-auth/AAX3H5P67CVMCVCGOISR4I3RRFOYPANCNFSM4M4EPFXA.
You need to save it in the bolt DB...
What about first result.
Is it empty for first webhook?
@jerbia I wanted to use a digest field as DB key. But now I think that's wrong.
is there a field which could use as key?
You can use the "digest"-"image"-"registry" (the combination of the three fields) for the unique ID.
@afdesk what I suggest is:
When the webhook is triggered, take the digest-image-registry and check if it exists in the boltdb.
If not exist, open a JIRA ticket. Save to DB all the reported vulnerabilities for the image.
If exists - read from the DB the last scan results. It should include the number of critical/high/medium/low/negliggable vulnerabilities and malware. Compare it to the current scan results. If it is the same, do nothing. If different - create a new JIRA ticket, showing the new vulnerabilities and the old ones.
You can use the "digest"-"image"-"registry" (the combination of the three fields) for the unique ID.
ok, I thought that digest isn't unique param and there are different digests for the same vulnerabilities lists. I was wrong.
Digest is unique.
It might be that same image is stored on two registries, in that case you will get same digest twice for different images. Due to that I suggest you add the image and registry name to the uniqueness of the object.
@jerbia for test I see image alpine:3.8 (registry Docker Hub) and see that digest is equal previous_digest.
Is it normal?
Yes. Previous Digest is the digest of the image on the previous scan. If equal it means image did not change in the registry.
If not equal - it means this is a new image that was pushed to the registry that overrides the previous one.
Since you are only caching the digest you will know that a replacement for an image with same name happened. This is OK.
@jerbia I see a new template of config file (cfg.yalm).
There are a few moments:
Can we use a projectKey instead of project id? I think that Project id is Jira's inner data.
What is mean a project name? Now it uses as project key. There is an error if project name doesn't match project id
IsuueType is empty, but issue type is required. It should set up, or need default value. Bug?
Now priority and assignee are required params. Is it correct? Maybe I need default values?
High and current user?
Actions with description param. What I have to do with it? this param overlays rendered data. is it correct?
Few fixes:
Change to "Image is complaint" (and not "Image is compliance"). Move it to a new line
Malware found: No (instead of "no")
Sensitive data found: No (instead of "no")
Move the malware and sensitive data before the table
Remove the "Found Vulnerabilities" if there are 0 vulnerabilities...
@jerbia Can you tell me how to build previous (or next) scan for tests?
@afdesk I don't have a way to create an image scan that will show difference in vulnerabilities. You will need to simulate this in the unit tests.
|
GITHUB_ARCHIVE
|
Contract creation fails if "deploy" modifies memory
The contract created by a pwasm module might be invalid if the deploy code modifies memory.
For example consider the following Wasm module.
(module
(type (;0;) (func))
(func $call (type 0))
(func $deploy (type 0)
i32.const 0
i32.const -1
i32.store
)
(memory (;0;) 1 16)
(export "memory" (memory 0))
(export "call" (func $call))
(export "deploy" (func $deploy)))
Here the $deploy function modifies the memory at location 0..3 by setting all bytes to 0xff.
With wasm-build this code is transformed to
(module
(type (;0;) (func))
(type (;1;) (func (param i32 i32)))
(import "env" "memory" (memory (;0;) 1 16))
(import "env" "ret" (func (;0;) (type 1)))
(func (;1;) (type 0)
i32.const 0
i32.const -1
i32.store)
(func (;2;) (type 0)
call 1
i32.const 0
i32.const 52
call 0)
(export "call" (func 2))
(data (;0;) (i32.const 0) "\00asm\01\00\00\00\01\04\01`\00\00\02\10\01\03env\06memory\02\01\01\10\03\02\01\00\07\08\01\04call\00\00\0a\04\01\02\00\0b"))
If this contract is deployed the same modification to the memory happens before ret is called (call 0). However the memory at 0..3 is part of the contract code that is returned and was initialized correctly with the data section to \00asm. After func 2 is called Parity Ethereum will read the memory at 0..51 and store this as the contract data. With the memory modification this now starts with \ff\ff\ff\ff and is not valid Wasm.
might be so, but this kind of code should not be generated, at least using Rust
The contract created by a pwasm module might be invalid if the deploy code modifies memory.
For example consider the following Wasm module.
(module
(type (;0;) (func))
(func $call (type 0))
(func $deploy (type 0)
i32.const 0
i32.const -1
i32.store
)
(memory (;0;) 1 16)
(export "memory" (memory 0))
(export "call" (func $call))
(export "deploy" (func $deploy)))
Here the $deploy function modifies the memory at location 0..3 by setting all bytes to 0xff.
With wasm-build this code is transformed to
(module
(type (;0;) (func))
(type (;1;) (func (param i32 i32)))
(import "env" "memory" (memory (;0;) 1 16))
(import "env" "ret" (func (;0;) (type 1)))
(func (;1;) (type 0)
i32.const 0
i32.const -1
i32.store)
(func (;2;) (type 0)
call 1
i32.const 0
i32.const 52
call 0)
(export "call" (func 2))
(data (;0;) (i32.const 0) "\00asm\01\00\00\00\01\04\01`\00\00\02\10\01\03env\06memory\02\01\01\10\03\02\01\00\07\08\01\04call\00\00\0a\04\01\02\00\0b"))
If this contract is deployed the same modification to the memory happens before ret is called (call 0). However the memory at 0..3 is part of the contract code that is returned and was initialized correctly with the data section to \00asm. After func 2 is called Parity Ethereum will read the memory at 0..51 and store this as the contract data. With the memory modification this now starts with \ff\ff\ff\ff and is not valid Wasm.
|
GITHUB_ARCHIVE
|
- bf31e15 Fixed
ArgumentNullExceptionthat could be thrown if an entity hit event was not related to an
- 7c25cb2 Fixed bug that made
XmlUtilities.readnot work for files within a jar.
- 69afe69 Don’t ignore minimal location updates on entities
- 680ce0b Fixed issue which caused the Camera hotfix for certain renderscales not to work.
- ca5cfa2 Fixed null pointer check in
- 59360c4 Fixed issue with the GameWorld.reset overload
Features / Improvements
- Added and improve the Javadocs of many APIs
- aeccaa4 Added possibility to remove timed actions.
- #301 28f1036 Added entity render events for individual entities.
- adb2bfc Extend the ReflectionUtilities with a few helpful methods
- 9fde70d Added events for changes on
- c8ca4b1 Added events for layer rendering on the
- 70c598a Added
- 41679ed Added
- 87c339d Added updatable count to the
- a7a7637 Don’t modify layer visibility when serializing maps.
- #297 Re-Added shortcut API for finding tile bounds
- 5ddf044 Drop the
Game.loop().getUpdateRate()method in favor of
- 30cd2c1 Improve visibility modifies of the
- 6d92e25 Rework the default mouse cursor behavior
- Use the default cursor if no virtual cursor is set and the mouse is not being grabbed by the window.
- Move the debug cursor handling to the MouseCursor implementation
- Don’t grab the mouse by default.
- c950d55 Replace some explicitly thrown exceptions by log messages.
- d715f2f Drop the
- 39f365e Change Entity collections from the
Environmentto be immutable
Revamp of several event methods and listeners
- 3747d8c 080afea Streamline rendering events and API
- 59b4b3d Reworked the
- 1c41e83 Reworked the
- 537b5d7 Reworked the
- 480a259 Reworked the
- 111d81d Reworked the
- 4204d8a Reworked the
- e438fc8 Reworked the
- Marked many listeners as
- Updated SonarQube plugin
- Updated Gradle
- d4d35a7 Print Gradle warnings in the build log.
- 7340f5b Prevent duplicated resources in the .jar files.
- 94b46dc Exclude duplicate files from the jar.
Contributors in this release
Also, thanks to all the contributors to the LITIENGINE community in the forum and our discord! All your comments and thoughts help us to shape the engine towards a stable release.
|
OPCFW_CODE
|
I have done this, I erase everything, start from scratch and I already have the container running! It is an important advance!!!
However, when I apply this command
docker exec -it storagenode ./dashboard.sh
Available Used Egress Ingress
Bandwidth N/A 0 B 0 B 0 B (since May 1)
Disk 80.00 TB 0 B
Ok. Now this is a usual OFFLINE issue. So you fixed any docker-related issues and now node at least running.
You can use this checklist to troubleshoot the offline issue:
Please pay attention to WAN IP on your router (usually shown on the Status page) - it should match the IP on Open Port Check Tool - Test Port Forwarding on Your Router, otherwise port forwarding will not work (and no-ip will not help). And of course you should use this IP with node’s port in your docker run command as a value for ADDRESS variable.
To update any parameter to your node you need to stop and remove the container and run it back with all your parameters include changed ones.
Congratz! I saw that you dedicated 81TB. To be honest, I think you’ll never use up the space on a single node. If I remember correctly, maximum size on a node is 24 TB. My 3TB node took 16 months to fill.
The UDP is configured from the beginning, I have not made any subsequent changes.
I have performed a check with the tool UDP Port Checker Online - Open Port to port 28967 and to the internal QNAP IP and it gives me close.
However, in docker it appears open and configured. I don’t have any firewall configured in QNAP
Yes, that’s right, 81 TB, I didn’t see in the documentation any limitation in this regard, now I see it.
A question I have, can I have several nodes under the same identity in the same QNAP in different 24 TB containers? The idea would be to add storage in the same QNAP
Please make sure that you allowed the 28967 UDP port in the inbound rules of your firewall and that the docker run command have -p 28967:28967/tcp -p 28967:28967/udp parameters.
There is no limitations. You can allocate any size. Just with current usage the node(s) cannot fill up more than 24TB of used space in one location due to equality of uploads and deletions at this point.
So, doesn’t matter how many nodes you would have in one location (/24 subnet of public IPs), they all treated as a one node for uploads and as a different ones for audits, customers’ egress, egress repair traffic and online checks.
I am sure that they are open in docker, this is the command that was sent when creating it and it was not modified later as it also indicates
docker run -d --restart unless-stopped --stop-timeout 300
-p 14002:14002 \
I have tried with command line “telnet 192.168.100.50 28967” and with putty, in both cases it does not establish connection (192.168.100.50 is the LAN ip).
It’s not that there is a strict limit (@ligloo is mistaken here). However, nodes just store customer data and so far there was just not enough customer data to fill more than few tens of TB per node. What @ligloo referred to is the estimated amount of data we can reasonably expect now from customers, under a lot of assumptions in terms of how customers actually use Storj and under assumption that the node works for many years. Also, as Storj gains more and more customers who want to store more and more data, this number might grow.
You can have multiple nodes collecting data at the same time, but if they’re behind the same IP address, they will all be considered as one for the purposes of ingress.
The documentation lists a recommendation, not a hard limit. Others have already mentioned this, I just wanted to point you to this: Realistic earnings estimator
That estimator will give you the best estimation I can give you as to how quickly you can expect your node to fill up on a single /24 IP range. Technically the soft limit where deletes roughly match ingress is around 40TB atm, but it’s not longer so relevant as getting close to that will take decades as the growth slows over time. The estimator shows an estimation of the first 10 years.
|
OPCFW_CODE
|
If it was still 2012 I would have eagerly heard and responded to any conversation about Big Data. Well, it was the buzz and you had to be speaking the magic words for getting people to listen to the latest and greatest in technology. But fortunately/unfortunately, it is 2017 now and it is disappointing to note that most of the world has not moved beyond Big Data. And believe me, it is not just the CIOs/CDOs who have been sitting in the ivory tower who are stuck with Big Data.It is also the energetic developers who are being scouted by talent firms for having Big Data on their resume.
We at Knoldus build a holistic software development capability for anyone who joins us as an intern. It does not matter if you have been working in the industry for 2 years or 10. When you undergo the internship we would give you a holistic software development immersion right from the Code Quality, Code conventions, Principles, practices, and patterns of software development further leading to Reactive Platforms and the ecosystem tailing into the stack that we embrace which is the Scala ecosystem and the Fast Data Platform.
The trigger for this post is the conversation with a top talent who joined us 3 months back. He was sad because he was not working on Big Data. When asked what did he mean by Big Data, the quick answer was Hadoop/Spark. When countered by the fact that he was learning Lagom and event sourcing which would allow him to build better solutions but he was not too convinced.
Now, there is nothing wrong with these technologies and in fact, they are what has made the ecosystem popular but these technologies are only a part, sometimes a very small part, of the product that would have any business value. They solve a particular piece of the puzzle and more often than not if you base your product “just” on these technologies you are bound to fail!
So where should we be headed if we are not talking about Big Data? The answer is to talk about Fast Data. Big Data as anomer/misnomer gets used in all kinds of scenarios. Talk to 10 CIOs and 9 would say that they struggle with Big Data. It is of no consequence whether one manages 1TB of data and the other is managing several hundred PB of data. I think where we should be headed is that with our solution/ product how do we make sure that the customers get the best experience. Customer Experience (CX) is going to be the king of the modern day applications. Just focusing on Spark/Hadoop/Flink and thinking that you can do Big Data is a fallacy.
Let us see how these set of so-called Big Data technologies fit into the grand scheme of things.
- If you are going to build a product which would include user interaction then you need a reactive front end to the product so that you can provide amazing customer experience.
- When hundreds and thousands of user requests come in, the product has to handle them without degrading performance. It has to be resilient.
- There are going to be transaction based processes like someone querying for something, adding an item, viewing their trades for the day. These could be handled by different micro services. These would have their individual life cycles and should be able to scale independently.
- You would like your system to be extensible and plan for any future business operations which are unforeseen at the moment. For this, you need to have event sourcing.
- You would want to separate out writes and reads to your system for making sure that the read and write SLAs are met and you are able to scale the read and write side separately.
- You would need to store your transaction data in the DB and for that, you would need either a SQL or NoSQL DB.
- Now some of your functionalities would also need analysis of data and come back with analyzed data. Now depending on the SLAs, this is where you would need Big Data frameworks to jump in.
- You would need to run some machine learning or deep learning algorithms for your product to stand out.
Of course, we are simplifying the scenario a lot but hopefully, you get the idea. Just being dependent upon a Big Data framework or hiring consultants who know a bit about Hadoop/Spark is not going to fly. You need an entire gamut of technologies that you need to work on. Right from
- Reactive UI
- Microservices framework
- Asynchronous Messaging System
- Big Data framework (there I said it!)
- Hosting strategy based on containers
- Monitoring and Telemetry
- Machine learning and AI
And believe me, this is a partial list.
And overlaying all of this is the Principles, Patterns, and Practices of effective software development. The main drivers of technology which are based on the principles of Reactive Manifesto would be
To sum it up here is one possible scheme of technologies that can fulfill the product vision.
As you would see Big Data frameworks are only a part of what you want to do. More than a drop in the ocean but still not big enough.
Hence, next time when someone comes and talks about Big Data and using Big Data framework to build the product, then do talk to them about all the other ancillaries and take what they say with a big bag of salt 🙂
Knoldus has implemented its Digital Transformation product KDP at two Fortune 50 organizations. The third implementation is underway.
1 thought on “Can we stop talking about Big Data now?5 min read”
Very well put sir. Thanks a lot did like your way of handling the intern doesn’t matter what experience they have also liked the way you have highlighted all the gamut of stuff a great post great learning.
Comments are closed.
|
OPCFW_CODE
|
#pragma once
#include <type_traits>
#include <utility>
// REVISIT(oleksii): Figure out if there're going to be any issues with GLM when targeting DX.
#include <glm/glm.hpp>
#include <nest/config.hpp>
#if NEST_RENDERER == NEST_RENDERER_OPENGL
#include <nest/opengl/vertex_traits.hpp>
#endif
namespace nest {
inline namespace v1 {
/// Holds a boolean value which specifies whether the given type `T` has a field named `position`.
/// @{
template <typename T, typename = void>
static constexpr bool has_position = false;
template <typename T>
constexpr bool has_position<T, std::void_t<decltype(std::declval<T>().position)>> = true;
/// @}
/// Holds a boolean value which specifies whether the given type `T` has a field named `color`.
/// @{
template <typename T, typename = void>
static constexpr bool has_color = false;
template <typename T>
constexpr bool has_color<T, std::void_t<decltype(std::declval<T>().color)>> = true;
/// @}
/// Holds a boolean value which specifies whether the given type `T` has a field named `texcoord`.
/// @{
template <typename T, typename = void>
static constexpr bool has_texcoord = false;
template <typename T>
constexpr bool has_texcoord<T, std::void_t<decltype(std::declval<T>().texcoord)>> = true;
/// @}
/// Holds a numeric value which represents the number of components per vertex attribute.
/// @{
// clang-format off
template<typename T> static constexpr std::size_t component_count = 1u;
template<typename T, std::size_t N> constexpr std::size_t component_count<T[N]> = N;
template<> constexpr std::size_t component_count<glm::vec2> = 2u;
template<> constexpr std::size_t component_count<glm::vec3> = 3u;
template<> constexpr std::size_t component_count<glm::vec4> = 4u;
template<> constexpr std::size_t component_count<glm::dvec2> = 2u;
template<> constexpr std::size_t component_count<glm::dvec3> = 3u;
template<> constexpr std::size_t component_count<glm::dvec4> = 4u;
template<> constexpr std::size_t component_count<glm::ivec2> = 2u;
template<> constexpr std::size_t component_count<glm::ivec3> = 3u;
template<> constexpr std::size_t component_count<glm::ivec4> = 4u;
template<> constexpr std::size_t component_count<glm::uvec2> = 2u;
template<> constexpr std::size_t component_count<glm::uvec3> = 3u;
template<> constexpr std::size_t component_count<glm::uvec4> = 4u;
// clang-format on
/// @}
} // namespace v1
} // namespace nest
|
STACK_EDU
|
"High latency on CRC GET request" with 1.7.1 (on Glitch)
I am running the glitch autohook example, thank you so much for such a nice concise example!
Expected behavior
Using 1.2.1 the example runs perfectly, and I see the following output:
Removing webhooks…
Removing https://myapp.glitch.me/webhook…
Registering https://myapp.glitch.me/webhook as a new webhook…
Webhook created.
However, when I upgrade the package to 1.7.1 I consistently get the following error when trying to create the webhook.
Actual behavior
AuthenticationError: High latency on CRC GET request. Your webhook should respond in less than 3 seconds. (HTTP status: 400, Twitter code: 214)
at tryError (/rbd/pnpm-volume/fccb6299-90f3-4bdf-b4a6-46d776e1bbf8/node_modules/.registry.npmjs.org/twitter-autohook/1.7.1/node_modules/twitter-autohook/errors/index.js:53:12)
at Autohook.setWebhook (/rbd/pnpm-volume/fccb6299-90f3-4bdf-b4a6-46d776e1bbf8/node_modules/.registry.npmjs.org/twitter-autohook/1.7.1/node_modules/twitter-autohook/index.js:205:19)
at processTicksAndRejections (internal/process/task_queues.js:88:5)
at async Autohook.start (/rbd/pnpm-volume/fccb6299-90f3-4bdf-b4a6-46d776e1bbf8/node_modules/.registry.npmjs.org/twitter-autohook/1.7.1/node_modules/twitter-autohook/index.js:270:7)
Steps to reproduce the behavior
Run the glitch autohook example but change package.json to the following:
{
"dependencies": {
"twitter-autohook": "^1.7.1",
"request": "^2.88.0"
}
}
Hi @shiffman, Glitch blocks incoming requests that do not have a User-agent header. Incoming webhook calls from Twitter do not have a User-agent header. Boosted Apps should work as intended.
Thanks for the reply! Unfortunately I am getting this error in a boosted app but let me try testing again to make sure.
@shiffman Sorry to bother you, but did you ever figure this out? I'm running into it on 1.7.2 with a boosted project.
@hkolbeck-streem unfortunately no, I'm still running 1.2.1 due to this issue.
@shiffman Thanks for the quick response, and sorry to hear it
@iamdaniele Any chance we can re-open this? It seems like it may not be solved as simple as the User-Agent issue, unless I misunderstand something.
Oddly, I did just notice another example bot I created is using 1.7.2 and works deployed to Glitch! I'm not sure why one produces this error and the other does not! The working one is here: https://github.com/CodingTrain/GUMP500-bot (sorry it's not well documented, something I made during a live stream.) It depends on https://github.com/CodingTrain/ChooChooTweets which has 1.7.2 as a dependancy. I'll have to investigate this.
|
GITHUB_ARCHIVE
|
Table of Contents
The shared folder
Our service interface is a shared folder (also called “input folder”). You are the only one authorized to edit this folder. Our service will only read information inside. This folder is the one where you will write your files, the configuration of your website theme, etc.
We manage the installation of the shared folder on your computer with you. If ever it happened that you need to (re)install the shared folder by yourself, the steps are described here.
This manual explains how to use this folder to edit your blog from your computer.
How to edit your blog
To add an article in your blog, you just need to add a document (docx, odt, markdowmn, …) in the folder “Blog” of the shared folder. The name of the files will determine the URL of the corresponding webpage.
The shared folder contains folders and files outside the “Blog” folder. They allow you to control other elements of your website. You will find more explanations here
We suggest following the next steps when you edit your blog:
- Create and edit your files in the shared folder.
- Once editions are saved, our service automatically triggers an attempt to reconstruct your website. If you open a web browser at the URL of your website (e.g., https://yourwebsite.com), you should see the corresponding modifications.
For each file format, the next table gives the correspondence between a file sample (left column) and the webpage computed from this file (right column). The webpages display supported features of the corresponding file format.
|webpage computed from the markdown file
|webpage computed from the docx file
|webpage computed from the odt file
You will find detailed instructions on how you should format your files in the files themselves.
Click on a link in the left column to download the source file. Click on the link in the right column to see the corresponding webpage
You can easily add medias in a page by creating a folder next to the corresponding document — and by naming it with the same name, except the extension. This folder will be considered as a way to provide supplementary content, which is very handy. But this feature can cause conflicts between folder names. To avoid these conflicts, here are some conventions :
- Name documents meant to become web pages with lower-case letters.
- A folder, that hosts medias of a page, must be located next to the corresponding document and is named with the same name; except the extension.
- Name folders that contain documents, by starting their name with a capital letter.
In addition to clarifying the role of each folder, these conventions will ensure that there will be no conflicts between the supplementary folders and the folders that host other documents.
When problems arise
When the modification is not visible, here is how to investigate
Your browser cache may hide the new webpage version. In this case, a total refresh bypassing your browser cache — Ctr+F5 — will request a full refresh of the webpage.
You can check that your last data have been uploaded properly on the server, by going at https://input.yourwebsite.com/ and by looking at the last modification times.
Right now displayed dates have an offset of 1 or 2 hours. (depending on the French time shift in summer/winter)
As soon as our service detects new data, it attempts a reconstruction. You can check that the reconstruction went well by looking at its log file, viewable at https://yourwebsite.com/reconstruction.log.
- If you cannot read the log, there must be a network issue or an error in our server.
- The reconstruction date should be right after your edition date. If it is not the case, it means that there was an error in our service.
- If the date corresponds, please follow the instructions at the top of the reconstruction log.
Read the FAQ for guidance on how to solve miscellaneous issues.
Feedback and assistance
After this investigation, if you haven’t found/solved the error, please contact us. Here is what we will do:
- If there is an error in our service, we will fix.
- If there is a modification to do in your input folder, we will make a copy of your folder, and we will edit the copy on our side. Once we have finished the edition, we will email you with a link to download the edited folder, for instance https://backup.edited.yourwebsite.com/pambda-input-edited.zip. We will also send a link to show how your website looks with our editions: https://edited.yourwebsite.com.
For your convenience, we suggest installing Meld. This program will help you to compare the edited folder with your folder.
|
OPCFW_CODE
|
Dan apabila penggunaan yang tidak terkontrol itu dapat mempengaruhi individu, mental serta pola pikir mereka sehingga mereka ketergantugan pada gadget karena itu disebabkan oleh penggunaaan yang berlebihan dan lamanya penggunaan tersebut. Gawai adalah perangkat elektronik portable karena dapat digunakan tanpa harus terhubung dengan stop kontak beraliran listrik. Hal ini membuat distributor resmi kesulitan untuk mendapatkan stok VGA sehingga para konsumen yang ingin membeli VGA terpaksa akan membeli VGA dari para scalper tersebut dengan harga yang lebih tinggi. Foto: Harga VGA card RTX 3060 di salah satu e-commerce (dok. Kelangkaan lain dari VGA yaitu disebabkan oleh terhambatnya jalur distribusi, sejak pandemi COVID-19 melanda Indonesia banyak sekali distribusi-distribusi yang terhambat termasuk salah satunya VGA. OPPO akan mengirimkan kode unik ke alamat email pengguna OPPO A96 baru yang sudah didaftarkan, untuk selanjutnya ditukarkan di seluruh bioskop CGV di Indonesia. Media Hiburan, beberapa jenis gadget dibuat khusus untuk tujuan hiburan. Changed the provider in the Currency Meter gadget to Google Finance to make it work again.
This means that for each byte of data written, four bytes of video memory might potentially be changed. Bit 7 of each address contains information about the first pixel, Bit 6 has information about the next pixel, and so on. Plane 0 describing the first pixel, plane 1 the next, and so on. The VGA has four planes, and for each pixel, and each plane holds one bit of each pixel drawn. In linear mode, each byte in host memory corresponds to one pixel on the display, making this mode very easy to use. Todo: determine the b/w/d, shift mode and odd/even mode for CGA compatibility (guesstimated at word mode, interleaved shift, odd/even enabled, i.e. What works in all cases, is if chain-4 matches the other settings that are common for established modes (i.e. This works on Windows 7 / 8 / 8.1 / 10 / 11. Administrative rights are required to install. While under common circumstances this bit is emulated properly, the way this bit actually works is however very different among implementations (especially emulators) and can have strange effects if you are unaware of it. The computed bit mask is checked, for each set bit the corresponding bit from the set/reset logic is forwarded.
Although 32 bytes are reserved for each character, only 16, 14, or 8 of them are commonly used, depending on the character height. Then the address is incremented by either 1, 2 or 4 for the next set of pixels until the scanline completes (the consequence of this is that each scanline is a multiple of eight pixels wide). In this mode, each byte of video memory describes exactly one pixel. In text mode, the screen is divided into character cells rather than pixels. The horizontal timing registers are based on a unit called ‘character’ (As they match one character in text mode). A good example are cards based on the Radeon RX 5500 XT, which comes in both 4GB and 8GB variants. The Read/Write logic controls which planes of the memory are actually read or written, and how these values relate to the value being sent by the CPU.
Plane 0 is accessed on even addresses, plane 1 is accessed on odd addresses, with each consecutive 16-bit value describing the next character. Each byte represents one horizontal cross section through each character. This makes Bochs possibly troublesome since you only need to toggle this bit to enter Mode-X, while real hardware also requires that you change doubleword mode into byte mode. The first byte of each group defines the top line, each next byte describes the rows below it. The CGA was limited to 4 concurrent colors, with two bits each. Each register in the DAC consists of 18 bits, 6 bits for each color component. For each of the 256 available characters this plane has 32 bytes reserved. In addition to the extended palette, each of the 256 entries could be assigned an arbitrary color value through the VGA DAC. Most particularly, several higher, arbitrary-resolution display modes were possible, all the way up to the programmable limit of 800×600 with 16 colors (or 400×600 with 256 colors), as well as other custom modes using unusual combinations of horizontal and vertical pixel counts in either color mode.
|
OPCFW_CODE
|
question about input data format / using different pose extractors
Hi, i have just started looking into geometric learning and as a first try i want to get the network running in my environment. My issue is that i am not using the joints from openpose so my "input" is formatted in a different way. I am specifically talking bout N, C, T, V, M = x.size() from forward() and extract_feature(). Going from the paper "Spatial Temporal Graph Convolutional Networks for Skeleton-Based ActionRecognition" I am guessing that N is the number of joints, C is the number of channels of the feature (2 for 2d joint positions), T is time as in the number of frames that are processed. For V and M i am at the loss and now im stuck because i cant convert my own pose coordinates into the proper format, I would appreciate any help. I tried installing openPose just to explore the data format more but after endless conflicts because of anaconda and cuda mismatches I gave up.
tl;dr - what are N, C, T, V, M = x.size() of the pose data ?
I can share my own findings (based on guess/detective work -- not fully tested yet)
If you want to feed stuff into the data pipeline of which goes like this (see also the yaml config files):
normalize_by_resolution,
mask_by_visibility,
**augmentation steps***,
transpose, order=[0, 2, 1, 3],
to_tuple,
then data is (channels, keypoints, frame_num_aka_time, person_id) but you need to pass a dictionary like so:
return {
"info": {
"resolution": [640, 480], # or whatever it is
"keypoint_channels": ["x", "y", "score"],
},
"data": data,
"category_id": output_class,
}
If you want to feed stuff directly into then model then it's:
(id_within_minibatch, channels, frame_num_aka_time, keypoints, person_id)
But you probably need to do some kind of normalisation beforehand. Unscaled pixel values probably won't work very well.
hi, thanks a ton for the reply -
i think ideally i would want to feed the data directly into my model, i have integrated the st_gcn_aaai18.py with the relevant utils into my pipeline ( its reinforcement learning so i have to integrate it into my setup) and now i would like to convert my pose data (2d screen positions with "confidence" since its simulation based) into a format that the network can use.
The scaling is not a problem - i am however unsure about the masking/confidence mechanism and what exactly what entry is. Going from the paper in N, C, T, V, M = x.size() N is the number of joints, C the feature dim (which going from your explanation contains an extra channel for detection confidence if i understand correctly) and T is the time so i guess V an M are ids for minibatches and persons like you mentioned above ? Can you tell me which yaml file you meant in your second sentence ?
thanks a lot again
ps: if its not too much trouble could you just paste what a print(x) and a print(x.size()) in st_gcn_aaai18.ST_GCN_18.forward() would look like ? I tried to install mmskeleton on my laptop but doing so destroyed my cuda setup for other experiments even though it was in a conda environment and i could not get the nms component to run, again because of cuda im guessing. If only there was a single comment in the code what the letters mean :D
nevermind with the output, 5th time was the charm for the installation :)
I meant the dimensions are in the order given:
(id_within_minibatch, channels, frame_num_aka_time, keypoints, person_id)
So
N = id_within_minibatch (hint: use a DataLoader to make minibatches in the 1st dimension)
C = channels (x, y, score) OR (x, y) -- has to match num_channels
T = frame_num_aka_time
V = keypoint/joint (probably stands for vertex)
M = person ID (for when there are multiple people within a frame I would suppose)
By the way, I have been passing just (x, y) without score since I'm working with images + OpenPose and I think it might be rather dependent upon camera setup/resolution so would prefer to sacrifice in-domain accuracy for generalisation. It's up to you whether you include score or not.
Would do with the pasting but my code isn't working at the moment.
Here's one of the yaml files used:
https://github.com/open-mmlab/mmskeleton/blob/master/configs/recognition/st_gcn/kinetics-skeleton-from-openpose.yaml
Hi, i just noticed my error with the N as well - thanks for reiterating. I cant use the Dataloader unfortunately because of my RL setup so right now im trying to figure out how the normalization was done (working theory is remove half of width/height and then divide by width/height since its distributed between -0.5 and 0.5). As for score i was just setting the visible joints to confidence 1 but maybe your idea is better.
In case anyone is working on a similar issue and is interested: x.size() = [64, 3, 300, 18, 2] , minibatchsize=64, channels x,y,score = 3, T is 2 times the temporal window size which is 150 as per config, V is 18 for kinetics as per paper, for the last im not quite sure, the only thing i can say is that the second "person" seems to be missing quite often during training.
this is an output of print(x[0, :, 150, :, 0]) , so one middle frame of the first sample in the minibatch for the first person
tensor([[-0.0380, 0.0420, -0.1280, -0.2930, 0.0000, 0.2120, 0.3430, 0.0000,
-0.0210, 0.0000, 0.0000, 0.1160, 0.0000, 0.0000, -0.0480, 0.0160,
0.0000, 0.1260],
[-0.1630, 0.0050, 0.0110, 0.4430, 0.0000, -0.0220, 0.4160, 0.0000,
0.4920, 0.0000, 0.0000, 0.4920, 0.0000, 0.0000, -0.2530, -0.2230,
0.0000, -0.2090],
[ 0.7610, 0.5250, 0.4730, 0.2970, 0.0000, 0.4380, 0.3610, 0.0000,
0.0650, 0.0000, 0.0000, 0.0530, 0.0000, 0.0000, 0.8300, 0.7860,
0.0000, 0.8090]], device='cuda:0')
Good luck with your application and thanks for your help !
|
GITHUB_ARCHIVE
|
Could not find module ember-resolver
Today i switched from ember-cordova to corber.
On the iPhone i get a white blank screen. After putting some manual statements into the assets/myapplication.js file, i found that the following section fails:
if (!runningTests) {
require("myapplication/app")["default"].create({"name":"myapplication","version":"0.0.0+108fc194"});
}
There's an exception when it runs the require function:
Could not find module `ember-resolver` imported from `myapplication/resolver`
It seems the vendor.js file is much smaller in the corber/cordova/www/assets directory vs the dist/assets directory
corber/cordova/www/assets/vendor.js is 2.7 MB
dist/assets/vendor.js is 4.1 MB
(in non-minified debug configuration)
The cordova version of vendor.js gets generated using the following command:
corber build --platform=ios --skip-cordova-build
It seems this generates a vendor.js that differs substantially from the vendor.js produced by ember serve or ember build
Any ideas?
I've just tried running corber b and then ember b and have a vendor.js of the same size. As a heads up corber b will not update dist/assets/vendor.js. Can you confirm if the asset sizes differ after explicit builds for each target?
I hate to ask - have you tried a nombom, and ensured ember-cordova is removed from package.json? Let me know if you'e already tried.
Thank you for the quick reply!
I tried again, with the following:
removed node_modules and bower_components directories
did a npm cache clean and bower cache clean
did a npm install && bower install
ran ember b and got a warning of the node-sass module that it has to be rebuilt when running a new/different node version. I recently installed node 8. So I switched back to node 6 using the command nvm use 6. I'm using ember-cli 2.11.0
ran ember b again with node 6. It produced a dist/assets/vendor.js file of 4.1 MB
ran corber b. It produced a corber/cordova/www/assets/vendor.js file of 2.7 MB
Same result. I suspected it had to do with the different node versions on my system. So i uninstalled the node 8 using nvm uninstall 8 and opened a new terminal window.
This time, the corber b command did not work - it could find the corber command! So it looks like corber was installed while i was on node 8. I reinstalled corber using npm install -g corber then built the app again:
ran ember b. It produced a dist/assets/vendor.js file of 4.1 MB
ran corber b. It produced a corber/cordova/www/assets/vendor.js file of 2.7 MB
Same result. One thing that is different between the two builds is that the corber b command produced the following warning which did not show up under ember b:
DEPRECATION: Addon files were detected in `/Users/myuser/Temp/myproject/node_modules/ember-resize-mixin/addon`, but no JavaScript preprocessors were found for `ember-resize-mixin`. Please make sure to add a preprocessor (most likely `ember-cli-babel`) to in `dependencies` (NOT `devDependencies`) in `ember-resize-mixin`'s `package.json`.
Anything else i can try?
yes, my package.json does not include ember-cordova (i removed it manually yesterday).
It does still include ember-cordova-events though.
My project was using ember-cli 2.11
After upgrading to ember-cli 2.16.2 the vendor.js file under corber/cordova/www/assets/ matched the dist/assets/vendor.js file.
Issue can be closed :-)
Great - thanks. I'll document the minimum ember-cli version.
|
GITHUB_ARCHIVE
|
Joey on SQL Server
Q&A: How Microsoft Is Raising Azure Arc's Data Services Game
Ignite 2020 saw the public preview of Azure Arc enabled data services, the latest step in Microsoft's bid to demystify multicloud. Principal program manager Travis Wright explains how it works.
- By Joey D'Antoni
Microsoft Ignite 2020 was last week, and while it wasn't a release year for SQL Server, there were still several interesting announcements at the virtual conference.
The general availability of Azure SQL Edge was announced, along with some performance improvements to the Azure SQL Managed Instance Platform as a Service (PaaS) offering. One other announcement -- and the focus of this article -- was the public preview of Azure Arc enabled data services, which allows you to run either Azure SQL Managed Instance or Azure PostgreSQL Hyperscale across on-premises datacenters, multicloud scenarios and edge computing scenarios.
Azure Arc services can be a bit confusing to newcomers, but the general premise across all of the services is that Azure Arc provides a single control pane to leverage Azure services to manage all of your Azure Arc enabled resources, no matter where those resources live. Currently, Azure Arc supports management of virtual machines, Kubernetes clusters and the aforementioned Azure data services. This allows you to take advantage of features in Azure Resource Manager like role-based access control, resource tagging and automation, and to have PaaS resources that can run anywhere.
I recently had a chance to talk with Travis Wright, principal program manager of SQL Server at Microsoft, about Azure Arc enabled data services.
D'Antoni: What are the scenarios where you have been seeing customers implement Azure Arc enabled data services? Is it mostly on-premises or are you seeing multicloud deployments?
Wright: At this point, I'd say that the most common use case we see is for on-premises Database as a Service [DBaaS], but one of the first customer implementations that will go into production will be on AWS EKS [Amazon Elastic Kubernetes Service].
While customers may start in one place like on-premises, part of the appeal of Arc enabled data services is that they can deploy and manage in multiple clouds as they proceed in their hybrid cloud journey. Arc enabled data services also future-proofs things a bit because, for example, it ensures that even if a company is acquired in the future that runs on another cloud, it can still be managed in the same way.
One of the touted benefits of Azure Arc data services is evergreen SQL. Can you explain what that means and how it's implemented in Azure Arc and the Kubernetes framework?
If you think about SQL Server, it is a versioned product. The features in a given release of SQL Server are what they are. We release updates to that version over time, but it is really only bug fixes, not features. After five years, a major SQL Server version goes into extended support, meaning there are only security fixes for another five years and then it goes out of support.
Ten years is a long time these days, but we still have a lot of customers that, for various reasons, are "stuck" on an older version of SQL Server and can't get off of it. The idea with "evergreen SQL" is twofold. First, provide customers continuous updates, both bug fixes and new features like we do in the cloud, and secondly, to make the process of upgrading as painless as possible because it is a very small, incremental update each month as opposed to a big upgrade process every two to three years. The process of updating is fully automated with near-zero downtime.
Can you explain the data controller and how that works to help provision other resources?
The data controller is really just a set of Kubernetes pods that provide the orchestration services for things like provisioning, deprovisioning, scaling, backup/restore, monitoring, HA [high availability], et cetera. One of those pods called the "bootstrapper" is responsible for monitoring for requests to create custom resources like SQL managed instances or PostgreSQL Hyperscale server groups. When those requests to deploy those custom resources are submitted to the Kubernetes API server, Kubernetes hands those requests off to the bootstrapper to process.
The bootstrapper validates the request and applies some logic to it to determine the right thing to do, and then tells Kubernetes what do -- so, using Kubernetes primitives like statefulsets, services and persistent volumes. This simplifies the user experience because people making these requests don't have to understand the primitives. They just say, "I want to create a SQL managed instance with 16 cores and 256GB of RAM." The data controller takes care of translating that request into something Kubernetes can understand.
In my opinion, Azure Arc is a natural evolution from Azure Stack, which required customers to purchase specific hardware through partners, is challenging to implement outside of large enterprises, and slow to maintain pace with Azure feature enhancements. Since Azure Arc is a way to run Azure services anywhere and uses container-based deployment, it can stay up-to-date as features get added to the service. Even if you need your deployment to be disconnected from the Internet, Azure Arc supports a private container repository that can sync with Azure.
The ability to host your own PaaS services with services like system-managed backups and built-in high availability provided by Kubernetes -- and not having to ever worry about SQL Server upgrades and patches -- will be very attractive to a lot of customers. The other benefit of using Azure Arc enabled services (whether they be data, virtual machines or Kubernetes) is that the Azure Portal and Azure tools like Monitor and Security can be used to manage resources wherever they are.
I'm generally skeptical of organizations implementing multicloud solutions as they are really complex -- even just the networking alone. However, one of the promises of using Kubernetes as a deployment platform is that you can deploy your containers and pods on any Kubernetes, whether it be on a Raspberry Pi on your desktop or onto multiple public cloud providers. Azure Arc extends this by providing a single control pane and managed services options, greatly reducing the complexity of a multicloud or hybrid deployment. Microsoft sees a lot of growth here and I would expect to see continued investment from Microsoft in this space.
Joseph D'Antoni is an Architect and SQL Server MVP with over a decade of experience working in both Fortune 500 and smaller firms. He is currently Principal Consultant for Denny Cherry and Associates Consulting. He holds a BS in Computer Information Systems from Louisiana Tech University and an MBA from North Carolina State University. Joey is the co-president of the Philadelphia SQL Server Users Group . He is a frequent speaker at PASS Summit, TechEd, Code Camps, and SQLSaturday events.
|
OPCFW_CODE
|
More on HTML and CSS as skills
While there doesn’t seem to be a clear consensus either way within the Web Literacy Standard community, I’m of the opinion that we should consider HTML and CSS as more ‘skill-like’ than ‘competency-like’. As such, they shouldn’t feature on the competency grid.
As a reminder from last time, we’re defining competencies as collections of skills. This is how I’m proposing the competency grid should look:
HTML would be folded into ‘Composing for the Web’ and CSS into ‘Design and Accessibility’.
Although it seems slightly heretical to ‘demote’ HTML and CSS (“the building blocks of the web!”) to skills, I think we should do it for the following reasons:
- HTML and CSS are bounded in a way that, say, ‘Privacy’ isn’t
- The competency grid without HTML and CSS looks better - it looks ‘finished’, like a version 1.0
- Talking about ‘HTML’ as being the same, conceptually, as ‘Sharing’ feels like a category mistake
From the start I’ve been blogging about the process of creating a new, open learning standard for web literacy. I think the competency grid as it stood in April is instructive:
As you can see, we’ve already recognised as a community that HTML and CSS are conceptually smaller than the other competencies. Folding them into two other competencies is, to my mind, congruent with our thinking all along.
So how would this work for HTML?
At the moment, the ‘Composing for the Web’ competency is made up of the following skills:
- Inserting hyperlinks into a Web page
- Embedding multimedia content into a Web page
- Creating Web resources in ways appropriate to the medium/genre
I’d suggest adding the following HTML-specific skills:
- Identifying and using HTML tags
- Structuring a Web page
How would this work for CSS?
Again, at the moment, the ‘Design and Accessibilty’ competency is made up of the following skills:
- Identifying the different parts of a Web page using industry-standard terms
- Improving the accessibility of a Web page by modifying its color scheme and markup
- Iterating on a design after feedback from a target audience
- Reorganizing the structure of a Web page to improve its conceptual flow
I’d suggest adding the following CSS-specific skills:
- Identifying and using CSS tags
- Demonstrating the difference between inline, embedded and external CSS
The above is without making any changes to the existing skills. I think by making subtle changes we could fold HTML and CSS into the existing competencies without too many problems.
Finally, I think it’s important to make one more point. At the moment, the competency grid is front-and-centre of the Web Literacy Standard. That’s mainly because it’s currently the most colourful and instantly-understandable graphic.
As we get closer to releasing v1.0 of the standard at the Mozilla Festival I’ll be working closely with Chris Appleton and the Comms team to help tell the story around the Web Literacy Standard. Something I envisage is a graphic that’s front-and-centre that explains the Standard better than the competency grid by itself.
As ever, I appreciate your feedback. See below for how to do that!
|
OPCFW_CODE
|
import os
import csv
import shutil
from datetime import datetime
class Addons:
"""
This class implements 2 methods to manipulate the Nodes and Links files
"""
@staticmethod
def merge(path, projects):
"""
Merges the folders with their results present in the given folder
"""
directories = []
for dir in projects:
directories.append(path+'/'+dir)
#create a new folder to save the merged result
creationTime = datetime.now().strftime('%Y-%m-%d_%H-%M-%S')
final = '/merged_'+ creationTime
os.mkdir(path+final)
for i in range(len(directories)):
#setup iteration.
if i == 0:
#rewrite nodes with project identification
with open(directories[i]+'/Nodes.csv', 'r') as p1, open(path+final+'/Nodes.csv', 'w', newline='') as wf:
rp1 = csv.reader(p1)
writer = csv.writer(wf)
for line in rp1:
writer.writerow([line[0],[projects[i]]])
p1.close()
wf.close()
#copy links
shutil.copy2(directories[i]+'/Links.csv', path+final+'/Links.csv')
#this is the actual merge process
else:
#nodes merge
with open(directories[i]+'/Nodes.csv', 'r') as p2, open(path+final+'/Nodes.csv', 'r') as rTemp, open(path+final+'/Nodes_temp.csv', 'w', newline='') as wf:
rTe = csv.reader(rTemp)
rp2 = csv.reader(p2)
writer = csv.writer(wf)
listed_rTe = list(rTe)
listed_rp2 = list(rp2)
#iterate through previous file
for line in listed_rTe:
#if the node is in the current file, write with the current project id added
if [line[0]] in listed_rp2:
l_common = Addons._string2list(line[1])
l_common.append(projects[i])
writer.writerow([line[0],l_common])
#if not, write as it is
else:
writer.writerow(line)
#remove identification from the previous file
for j in range(len(listed_rTe)):
listed_rTe[j] = listed_rTe[j][0]
#iterate through current file
for line in listed_rp2:
#write with the id of current project if the node is not on the previous file
if line[0] not in listed_rTe:
writer.writerow([line[0],[projects[i]]])
rTemp.close()
p2.close()
wf.close()
os.remove(path+final+'/Nodes.csv')
os.rename(path+final+'/Nodes_temp.csv', path+final+'/Nodes.csv')
#links merge
with open(directories[i]+'/Links.csv', 'r') as p2, open(path+final+'/Links.csv', 'r') as rTemp, open(path+final+'/Links_temp.csv', 'w', newline='') as wf:
rp2 = csv.reader(p2)
rTe = csv.reader(rTemp)
listed_rTe = list(rTe)
writer = csv.writer(wf)
#write links from previous file
for line in listed_rTe:
writer.writerow(line)
#write links from the current file not in the previous file
for line in rp2:
if line not in listed_rTe:
writer.writerow(line)
p2.close()
rTemp.close()
wf.close()
os.remove(path+final+'/Links.csv')
os.rename(path+final+'/Links_temp.csv', path+final+'/Links.csv')
print('Merged')
@staticmethod
def addId(path):
"""
Adds a header line in the Nodes and Links files
"""
nodes_dict = {}
with open(path+'/Nodes.csv', 'r') as rf, open(path+'/idNodes.csv', 'w', newline='') as wf:
reader = csv.reader(rf)
writer = csv.writer(wf)
if 'merged_' in path:
writer.writerow(['id', 'label', 'projects'])
else:
writer.writerow(['id', 'label'])
id=0
for line in reader:
if 'merged_' in path:
writer.writerow([id,line[0],line[1]])
else:
writer.writerow([id,line[0]])
nodes_dict[line[0]] = id
id+=1
rf.close()
wf.close()
with open(path+'/Links.csv', 'r') as rLinks, open(path+'/idLinks.csv', 'w', newline='') as wF:
rLi = csv.reader(rLinks)
list_rLi = list(rLi)
writer = csv.writer(wF)
writer.writerow(['source','target'])
for link in list_rLi:
source, target = None, None
source = nodes_dict[link[0]]
target = nodes_dict[link[1]]
writer.writerow([source,target])
rLinks.close()
wF.close()
print('Added ids')
@staticmethod
def _string2list(string):
li = []
done = False
while not done:
start = string.find('\'') + 1
end = string[start:].find('\'')+start
li.append(string[start:end])
string = string[end+1:]
if string == ']':
done = True
return li
|
STACK_EDU
|
Input 0 is incompatible with layer model
I'm trying the default training (satellite unet) and train went good but when I'm trying to test the model I got the error
ValueError: Input 0 is incompatible with layer model: expected shape=(None, 384, 384, 3), found shape=(None, 348, 3)
Here is my test code
model_file = 'model_satellite.h5'
input_shape = (384, 384, 3)
model = satellite_unet(
input_shape
)
model.load_weights(model_file)
image = np.array(Image.open("test.jpg").resize((348, 348)))
print("shape: ", image.shape)
shape: (348, 348, 3)
pr_mask = model.predict(image).round()
Hi @parcodepie
Seems to me you made a mistake when resizing the test image. You used 348x348 instead of 384x384
Thanks @karolzak I edited it but it throws another error
Input 0 is incompatible with layer model: expected shape=(None, 384, 384, 3), found shape=(32, 384, 3)
Your input array still have a wrong shape. It expects a batch of arrays of size 384x384x3 while you're providing 32 arrays of size 384x3. Something is wrong with your code
I'm trying to test only one image what could went wrong?
here is the whole test code
`import cv2
import numpy as np
from keras_net.keras_unet.utils import get_patches
from keras_unet.models import satellite_unet
from PIL import Image
model_file = 'model_satellite.h5'
input_shape = (384, 384, 3)
model = satellite_unet(
input_shape
)
model.load_weights(model_file)
image = np.array(Image.open("test.jpg").resize((384, 384)))
print("shape: ", image.shape)
shape: (384, 384, 3)
pr_mask = model.predict(image).round()
cv2.imshow(
"mask: ", pr_mask
)`
image = np.array(Image.open("test.jpg").resize((384, 384)))
print("shape: ", image.shape)
shape: (384, 384, 3)
pr_mask = model.predict(image).round()
Ok, I see your problem. When providing inputs for prediction you need to serve images in batches - if you want to fit just a single image you need to reshape it from (384, 384, 3) to (1, 384, 384, 3)
Indeed it was the problem.
Here is the new working code
`
image = np.array(Image.open("test.jpg").resize((384, 384)))
images_list = []
images_list.append(np.array(image))
x = np.asarray(images_list)
pr_mask = model.predict(x).round()
plt.imshow(
pr_mask[0]
)
plt.show()
`
also I'm saving the image to the disk
data = Image.fromarray(np.reshape(pr_mask[0], (384, 384)) ) data = data.convert("L") data.save('xxx.png')
the model needs more training but it's a good start
Thanks @karolzak for your time
Thanks @karolzak I edited it but it throws another error
Input 0 is incompatible with layer model: expected shape=(None, 384, 384, 3), found shape=(32, 384, 3)
Hi, so how did u fix the issue when you are testing many images instead of just one?
hi, i have a same problem ,too
how fix that problem ?
Input 0 is incompatible with layer model: expected shape=(None, 64, 64, 3), found shape=(32, 64, 3) .. . .
Hello @hhhhhhhhhhhhhhhhho ! Great username :)
As it was mentioned begore you need to serve images as an array. The solution mentioned above will work:
image = np.array(Image.open("test.jpg").resize((384, 384)))
images_list = []
images_list.append(np.array(image))
x = np.asarray(images_list)
pr_mask = model.predict(x).round()
plt.imshow(
pr_mask[0]
)
plt.show()
Hello @karolzak @Anne-Andresen , Can you please help me with the below problem.
I am currently trying to train a basic neural network with 3 different values from a dataset and predict another value. (Regression).
But I receive an input 0 incompatibility error. Kindly help.
Input NumPy array:
features.shape #(1700, 3) labels.shape #(1700, )
Neural Network
nn_model = tf.keras.Sequential([
tf.keras.layers.Dense(64, activation='relu',input_shape = (1,3)),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(1)
])
nn_model.summary()
Summary:
dense_39 (Dense) (None, 1, 64) 256
dense_40 (Dense) (None, 1, 64) 4160
dense_41 (Dense) (None, 1, 1) 65
=================================================================
Total params: 4,481
Trainable params: 4,481
Non-trainable params: 0
Error:
history = nn_model.fit(
features,
labels,
epochs=100,
verbose=1,
)
ValueError: Input 0 of layer "sequential_21" is incompatible with the layer: expected shape=(None, 1, 3), found shape=(None, 3)
@niranjanstudy06 the error message literally tells you what the problem is
try this instead:
nn_model = tf.keras.Sequential([
tf.keras.layers.Dense(64, activation='relu',input_shape = (3)),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(1)
])
@karolzak what!?
This will obviously raise a type error problem...
@niranjanstudy06 yeah, missed ',' there:
input_shape = (3,)
Also, this has nothing to do with keras-unet. Please go to stackoverflow with these kind of questions
@karolzak Got it, sorry. I wasn't finding a solution there so I was trying to get help on github.
@karolzak I deleted my comments on this, cause like you said they have nthg to do with unet. Kindly remove urs as well so that people won't get confused over it. Sorry for the inconvenience.
@karolzak I have the same error:
ValueError: Input 0 is incompatible with layer model: expected shape=(None, 9, 4), found shape=(9, 4).
This is my original data shape and I'm trying to generate synthetic data from TimeGAN. When I run the model, got the above error.
|
GITHUB_ARCHIVE
|
Self-driving cars are expected on our roads soon. In the project SNOW (Self-driving Navigation Optimized for Winter), we focus on the unexplored problem of autonomous driving during winter that still raises reliability concerns. We have the expertise to automatically build 3D maps of the environment while moving through it with robots. We aim at using this knowledge to investigate mapping and control solutions for challenging conditions related to Canadian weather.
The main goal of this project is to extend the current technology used for autonomous driving toward unstructured and dynamic environments generated by winter conditions (e.g., a snow-covered forest). This project is addressing the applications of autonomous driving in remote areas, autonomous refueling, search and rescue missions, Canadian Arctic Sovereignty, freight transport on Northern ice roads, etc. Our research concentrates on maps built by a UGV, which will be able to adapt to environmental changes caused by snow and winds, while allowing a robust localization of the vehicle in real-time. These maps will also serve as the foundation for novel path planning algorithms handling deformable obstacles and environments (e.g., deep snow under a vehicle). The project is carried out in partnership with General Dynamics Land Systems - Canada (GDLS-C).
To achieve the goals of the project, the following specific objectives are defined:
- Objective 1—Mapping and Localization: to develop algorithms to allow the UGV to localize and map its environment in winter conditions.
- Objective 2—Path Planning and Control: to develop algorithms for planning paths and adapt the behavior of the UGV according to weather conditions.
- Objective 3—Field Testing and Integration: to carry out an extensive series of experiments using the UGV in a snow-covered forest.
- Pomerleau, F. (2022). Robotics in Snow and Ice. In M. H. Ang, O. Khatib, & B. Siciliano (Eds.), Encyclopedia of Robotics (pp. 1–9). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-41610-1_223-1
PDF Publisher Bibtex source
- Kubelka, V., Vaidis, M., & Pomerleau, F. (2022). Gravity-constrained point cloud registration. Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS). https://doi.org/10.48550/ARXIV.2203.13799 Accepted for oral presentation, arXiv preprint arXiv:2203.01902
Publisher Bibtex source
- Vaidis, M., Giguère, P., Pomerleau, F., & Kubelka, V. (2021). Accurate outdoor ground truth based on total stations. 2021 18th Conference on Robots and Vision (CRV). https://arxiv.org/abs/2104.14396
PDF Publisher Bibtex source
- Baril, D., Grondin, V., Deschenes, S., Laconte, J., Vaidis, M., Kubelka, V., Gallant, A., Giguere, P., & Pomerleau, F. (2020). Evaluation of Skid-Steering Kinematic Models for Subarctic Environments. 2020 17th Conference on Computer and Robot Vision (CRV), 198–205. https://doi.org/10.1109/CRV50864.2020.00034 Best Robotic Vision Paper Award!
PDF Publisher Bibtex source
From theory to practice
The following video shows the day we received the UGV - a Warthog robot made by a Canadian company, Clearpath Robotics. We have received the robot in August 2019 and since we began to realize ours plans in practice:
Integration of the UGV hardware and software
Since the reception of the UGV, we have had a detailed plan of the tasks to follow to meet our goals, one of them being able to test the robot in a snowy forest during the winter of 2019. The first step was to integrate the sensor suite we had planned for the UGV. This also involved constructing a solid metal frame by ours hands.
When we finished this electrical part, we continued with the software integration. The mapping software required integration with the system of the UGV, mainly communication with its low-level computer which provides time synchronization and wheel odometry. After finishing all these tasks, it was finally the time to start testing. We found a nice and practical place for initial testing in front of our faculty building. In September 2019 when we fixed all the important details, we were able to proceed to the Montmorency forest.
The main goal of the first tests in the forest was to prove that we could safely deploy the robot, record all necessary data and possibly run mapping on-board the robot computers. We have, of course, discovered some bugs but they were all quick to fix and after a few sessions, we were able to generate large 3D maps of the forest.
Because the mapping functionality seemed working fine, we proceeded to implementing a basic path-following functionality. The robot would be navigated once by hand over a desired path while recording its position. Then, a software controller would repeat the path indefinitely, based on the onboard mapping and localization capabilities. The initial tuning of the controller showed some minor oscillations around the learned trajectory:
Fortunately, we were able to find the source of the problem, which was a misalignment between the commanded and actually executed turning rate of the robot. After fixing the problem, the controller was able to give us much more satisfying results:
First review meeting
At the end of October 2019, the first review meeting took place. It lasted two days and was intended for our partner GDLS-C. We presented the integrated system and its capabilities to allow the partner to replicate the results on their identical UGV. The following video summarizes what was achieved:
- Principal Investigator and Technical Lead: François Pomerleau
- Deputy Technical Lead: Vladimír Kubelka
- Representing GDLS-C: Richard Lee
- PhD students: Dominic Baril, Simon-Pierre Deschênes
- Master student: Damien LaRocque
- Clearpath Robotics - Norlab pushes autonomous navigation beyond its limits with Warthog UGV
|
OPCFW_CODE
|
Run into a bit of trouble? Let’s figure out your vector, Victor (or Victoria). Here are some of the more common pits of quick sand, imperial blockades, and troublesome cases we’ve come across when designing and uploading icons.
The file I want to upload isn’t being accepted!
We only accept SVG files (ending in a
.svg extension) when uploading icons to a kit. Sorry, no PNGs, PDFs, etc. If you are trying to upload an SVG file, please make sure it’s a valid SVG.
I’ve got typefaces or font files in my SVG!
It sounds like you forgot to convert any typefaces you used into paths when designing your icon.
I’ve got shapes in my SVG!
If you’re seeing shapes like
polygon in your SVG’s code. You’ll need to convert each shape into a path.
I’ve got strokes in my SVG!
To make sure that all aspects of your icon scale properly when sizing it on the web, you’ll need to expand all strokes to be part of their path's dimensions.
I’ve got images in my SVG!
Raster images, like PNGs, GIFs, and JPGs, won’t scale and should be removed from the SVG. If possible, you should find a vector version of that image to use when designing your icon.
My icon’s scale looks too small or too large!
Double-check that your viewbox is the correct height. If you’ve designed your icon in software like Adobe Illustrator, check your artboard’s dimensions as well.
The next thing to review is the placement of your icon on that artboard - is it scaled properly to your preferred proportions of the visual canvas?
Not sure what a correct viewbox height is or how to position your icon properly? Check out our icon design guidelines for our recommendations.
Lastly, confirm that there are no other paths or points on the artboard/viewbox. Additional paths or points may cause rendering and scaling issues.
I’ve got multiple paths in my SVG!
We recommend icons be created from one single path. You should join paths that don’t overlap into a compound path. If you have paths that overlap each other (and thus have overlapping points), using your design software’s union, subtract, intersect, or exclude tools are the best way to simplify those.
I have overlapping paths in my SVG!
These paths need to be joined into one path. Using the union, subtract, intersect, or exclude tools in your design software is the best way to simplify those.
My uploaded icons aren’t displaying in my project!
That’s no bueno. Start by reviewing the following things:
- You have access to Font Awesome Pro Services (through your active Pro subscription or from backing/pre-ordering Font Awesome 5). Our kits, and thus their uploaded icons, are considered a Pro service.
- The kit you’ve referenced in your project contains the uploaded icons you want to use. Uploaded icons are tied to a specific kit and will only work on projects that reference that kit.
- The domain where you are trying to use the icon is allowed for that kit (i.e. the kit is open or the domain has been added).
- You’ve added your specific Kit embed code into the
<head> portion of your project’s HTML pages or templates.
- You’ve referenced the uploaded icon you want to display by using the
<i class="fak fa-[uploadedIconName]"></i> syntax in your project’s HTML.
- You’re using the right icon prefix (
fak) and not one of Font Awesome’s other style prefixes when referencing your uploaded icons.
Also note that our Web Fonts-based kits only support the WOFF2 font format and so uploaded icons in Web Font-based kits won't work in Internet Explorer 10 or 11 (those browsers need the older WOFF format). We're exploring options to add WOFF support, but your best bet is to use an SVG-based kit for now.
|
OPCFW_CODE
|
The following anecdote describes an early experience of mine while working in the engineering trenches of a major scope manufacturer. This happened three decades ago, and that will explain some of the limitations of the algorithm and its implementation.
The scope in question was an early digitizing model that featured eight-bit A-D converters on the input channels. One of the newer features was the ability to do digital averaging of the signal, which, for a repetitive trace, offered improvement both in the displayed signal and the signal-to-noise ratio (SNR) of the waveform captured and processed in memory.
The computation was done by the scope's resident "waveform processor", which basically featured eight-bit arithmetic. The decision was made to restrict the number of averages to powers of 2 (2, 4, 8, and so on up to 256). This allowed the division in the algorithm to be performed by a simple right-shift and saved a great number of cycles in the computation.
The issue that reared its head, very late in the day in terms of the scope’s design cycle, was the following. I was summoned to the lab where the prototype code had been running for a couple of days, and they were observing a curious phenomenon. In this new averaging mode, as the user increased the number of averages of a noisy waveform, one would expect to see an increasingly cleaner signal, all the way up to 256 averages. What was observed, however, was that the waveform did get visually cleaner up to a point, after which flat portions started appearing in the waveform, making it look distorted.
Suffice to say, the results did not look very pleasant as one increased the number of sweeps averaged, and they were sure there was no correlated noise in the system that would explain what was being seen.
As I was very familiar with digital filters even in those early days, it appeared immediately as some sort of a limit cycle phenomenon caused by finite word length in the arithmetic. Sure enough, investigation of the algorithm and implementation pointed to the causes of the problem. There were at least two effects at play.
Firstly, the implementation kept a running sum for each point to implement the average, and as a new point came in, it used the difference in values to update the sum. This meant that at each point in the waveform, you had a recursive digital filter -- i.e., one with a single pole -- which meant that it was indeed prone to limit cycles or dead bands. Secondly, the implementation of the divide, by a shift-and-truncate operation with eight-bit arithmetic, meant that the first-order recursive digital filter exhibited limit cycles the moment the divisor (power of 2) became large, and the dead bands showed up as flat areas on the waveform.
Since double-precision arithmetic was not an option, we came up with some simpler solutions involving a mix of rounding operations and some dithering, and we were able to keep the core of the firmware intact. Subsequent generations of scopes, of course, had much more powerful DSP hardware, so this particular issue did not crop up again -- to my knowledge.
|
OPCFW_CODE
|
New answers tagged tar
for i in $(tar -tzf foo.tar);do ls -d ~/$i 2>/dev/null;done
Solved the issue. Couple things at fault here: The script creates the blob with a filename starting with $distro, which happens to begin the filename with a #, which is a reserved URI character. I changed the script so that it resembled $type_$date instead. The storage blob command was failing (silently) because the script wasn't running them with sudo ...
Blob names require you to escape reserved URL characters. Your blob name starts with a # which is a reserved URL character. I suspect the issue will resolve itself when you remove that character (or escape it). Note: I haven't confirmed this with an independent test, but it's the first time I've ever seen a blob name with a # character... More info on blob ...
Let's assume that you tarred /etc directory and now you want to compare the tar filelist against the live filesystem. You can use the following commands to generate the filelist, and the diff the two list. To generate the live filesystem list: find /etc | cut -c 2- | sort > fs.list To list the files inside tar: tar -tf etc.tar.gz | sed 's//$//' | sort > ...
If you want to do this with tar and not rsync, you can paralelize your tasks : source : tar ... | nc host port destination : nc -l -p port | tar ... It will at least reduce your time by two
I would stick to relative paths: tar -czvf product.tar.gz --exclude="product/cache" product/ (it works for me).
rsync will copy only diffs - only changes from source to target server. If you already have copy of your data on new server, rsync will only copy what has been changed between your tar and actual state.
Rsync may allow you to sync the files without taking the server down. For example, you could do a sync while in production, then take the server down, sync again to make sure you got everything that changed in the mean time, and then turn on the new server. The second sync will be incremental, and would take a fraction of the time of the whole, minimizing ...
You will have to test to find out. There are many variables such as the speed of your storage system. Consider restoring your tar archive or backup while the old system is up. Then during the downtime use rsync to copy the remaining changes. It still has to check many file modify times which takes time, but there is much less I/O and network transfer. This ...
Top 50 recent answers are included
|
OPCFW_CODE
|
Code and assets are stored on EBS volumes and are backed up twice daily by creating EBS snapshots. Additionally, code and assets are backed up before every update and before site termination. Backups are initially stored on the EBS volume and then moved to the S3.
Data is stored in the RDS databases with daily backups. It can also be recovered on a point-in-time basis. Like code&assets, data is backed up before each update and upon site termination.
Additionally, users can enable a backup extension that creates per-site backups daily. These backups are stored in S3. Backups to external services (Google Drive, Dropbox, etc.) will be possible in the future.
Our primary response to the loss of data center is to recreate infrastructure in the differente availability zone or different AWS region - depending on the availability of the AWS infrastructure.
Disaster Recovery Plan
We have a Disaster Recovery Plan that includes the following:
- Guidelines for determining plan activation
- Response and recovery strategy
- Guidelines for recovery procedures
- Rollback procedures that will be implemented to return to the standard operating state
- Checklists outlining considerations for escalation, incident management, and plan activation
Overall Recovery Plan
- On-call personnel is paged by a monitoring system.
- Possible approaches toward recovery are considered.
- Recovery team roles are identified
- Partners are notified.
- Recovery procedures are implemented.
- The original issue is fixed by the team or service provider that caused the disruption.
- Rollback procedures are executed.
- Operations restored.
- How Treepl CMS Works
- Glossary of Terms
- File System
Most of the content management related assets and files used in Treepl CMS are accessible in the file system either via the admin File Manager or via FTP.
- Infrastructure & Security
Infrastructure Treepl CMS is fully hosted in Amazon Web Services (AWS) and it takes advantage...
- Limits & Restrictions
While system restrictions are inevitable, Treepl CMS aims to lift as many limitations to your development as possible.
- Backlog & Requesting Features
Treepl CMS is a community supported platform. Users and Partners are encouraged to vote in the backlog and request features.
- Website Templates
Not only can you build your websites from your own custom code, framework or 3rd party templates, but you can also get started quickly by choosing from one of our beautifully designed, responsive templates to instantly create your next Treepl CMS website.
There are currently no external resources available.
Please let us know if you have any other contributions or know of any helpful resources you'd like to see added here.
We are always happy to help with any questions you may have.
Visit the Treepl Forum for community support and to search previously asked questions or send us a message at email@example.com and we will consult you as soon as possible.
|
OPCFW_CODE
|
Add the buffer to the existing table and spatial join
I have a spatial database with multiple tables, I have created a buffer for a table with points and wanted to add the buffer to the existing table. And wanted to do a spatial join like using this buffer and another table with points to count the number of points inside each buffer and add it as a new column in the existing buffer table. I cannot figure it out.
SELECT ST_Buffer(geom::geography,100) FROM public.operation;
UPDATE operations SET buffer = ST_Buffer(geom::geography,100)::geometry;
FROM "Supermarket" AS pts, "geom" as ST_Buffer
WHERE ST_Contains( the_geom, pts.location)
First add a new geometry column with AddGeometryColumn..
SELECT AddGeometryColumn ('public','operations','buffer',4326,'POLYGON',2);
.. and then insert the buffers in the new column with an update
UPDATE operations SET buffer = ST_Buffer(geom::geography,100)::geometry;
EDIT 1: Adding a new column to table ´operations` and filling it with the amount of points from another table that spatially overlap with the new buffers:
ALTER TABLE operations ADD COLUMN pts int;
UPDATE operations o
SET pts = (SELECT count(*) FROM supermarket s
WHERE ST_Contains(o.buffer,s.geom));
EDIT 2 (See comments):
CREATE INDEX idx_operation_geom ON operations USING gist (geom);
CREATE INDEX idx_supermarket_geom ON supermarket USING gist (geom);
@Aravinth this is a totally different question, and we're not encouraged to address multiple issues in a single question, since it can get hard to understand or pretty much useless for other users in the community. After you finish this question, post another one with a few data samples (preferably with a fiddle) and the expected results.
Oh ok, Really thank you for the clarification about the questioning.
@Aravinth btw: if you haven't done it already, I strongly suggest you to create a gist index in the geometry columns.. otherwise operations with this amount of data takes ages to complete.
Yes it is taking a lot of time, What do you mean by 'gist' index.
@Aravinth I just edited my answer. Run it before the update... it will be faster.
The column is created but all the entries are 0, the count is not added to the column pts
@Aravinth oh no.. I see now :-D, I used the column geom instead of buffer :-D Just corrected the answer: UPDATE operations o SET pts = (SELECT count(*) FROM supermarket s WHERE ST_Contains(o.buffer,s.geom));
Sorry for the late response, this works fine. Thank you for the information.
I'm glad it worked. Happy coding @Aravinth
|
STACK_EXCHANGE
|
How to get (meteor) spiderable working with script tags
I've been trying to implement the spiderable package today, and wasn't able to get it working for awhile because I was seeing this error when I went to http://localhost:3000/?escaped_fragment=
Meteor code must always run within a Fiber.
After lots of debugging I got it to work when I removed all of the <script> tags in my <head>. The issue is that I need those tags in order to load services for my site like typekit, zopim, google analytics, etc.
How can I get spiderable to work and keep my script tags?
If the Tags are the problem why dont just download the .js files and put then on the /lib folder?
This error Meteor code must always run within a Fiber. is a server side error & shouldn't happen client side. It's possible something else is causing it not the script tags. If you have callbacks from npm modules ensure that you use Meteor.bindEnvironment to make the error go away
@Ethaan I can't put them in /lib because they need to access the window object and the document object. Even if i put the scripts in the compatibility folder, I get the same fiber error. The only thing that works for spiderable is removing all of them.
@Akshat even if I remove ALL my server side logic I still see the same error. Even created a fresh project and saw the error as soon as i added my <script> tags, so I think they are triggering the error in some capacity.
Ok now I've narrowed it down to one library that causes this bug — typekit. Any idea how I can modify typekit so that it isn't causing this bug?
@gaplus I don't think its an issue with Meteor if removing typekit fixes it. Make sure your phantomjs installation supports typekit (it requires a special version & special libraries). I'm not sure where the error comes from, are you using the normal spiderable package or another one? Do you have any npm module in your project
@Akshat what do you mean it requires a special version & special libraries? What libs are you referring to?
@gaplus The issue isn't that clear cut. You need certain libraries/development headers to build a phantomjs that supports webfonts. I'd recommend you check the github issues up on it as I'm not well versed on the details. I'm not absolutely certain it will fix your issue, as I mentioned the error is caused by a server side issue, from unofficial spiderable package or an npm module. If you could provide the details it may also solve your issue.
@gaplus Could you try running a phantomjs script against your server see (PhantomJS errors processing the page) over at http://www.meteorpedia.com/read/spiderable/
@Akshat I tried running that script but all i got was this error :
"URL file://phantomtest.js. Domains, protocols and ports must match."
@gaplus I hope you're running the script with phantomjs phantomtest.js & have edited the meteorpedia.com to your localhost url
Let us continue this discussion in chat.
|
STACK_EXCHANGE
|
Research Working Group
Storage Working Group
- Best Practices
Define and develop the MLPerf™ Storage benchmarks to characterize performance of storage systems that support machine learning workloads.
Storing and processing of training data is a crucial part of the machine learning (ML) pipeline. The way we ingest, store, and serve data into ML frameworks can significantly impact the performance of training and inference, as well as resource costs. However, even though data management can pose a significant bottleneck, it has received far less attention and specialization for ML.
The main goal of this working group is to create a benchmark that evaluates performance for the most important storage aspects in ML workloads, including data ingestion, training, and inference. Our end goal is to create a storage benchmark for the full ML pipeline which is compatible with diverse software frameworks and hardware accelerators. The benchmark will not require any specific hardware for performing computation.
Creating this benchmark will establish best practices in measuring storage performance in ML, contribute to the design of next generation systems for ML, and help system engineers find the right sizing of storage relative to compute in ML clusters.
- Storage access traces for representative ML applications, from the applications’ perspective. Our initial targets are Vision, NLP, and Recommenders. (Short-term goal)
- Storage benchmark rules for:
- Data ingestion phase (Medium-term goal)
- Training phase (Short-term goal)
- Inference phase (Long-term goal)
- Full ML pipeline (Long-term goal)
- Flexible generator of datasets:
- Synthetic workload generator based on analysis of I/O in real ML traces, which is aware of compute think-time. (Short-term goal)
- Trace replayer that scales the workload size. (Long-term goal)
- User-friendly testing harness that is easy to deploy with different storage systems. (Medium-term goal)
Weekly on Friday from 8:05-9:00AM Pacific.
How to Join and Access Working Group Resources
- To sign up for the group mailing list, receive the meeting invite, and access shared documents and meeting minutes:
- To engage in group dicussions:
- Join the group's channels on the MLCommons Discord server.
- To access the GitHub repository (public):
Working Group Chairs
To contact all Storage working group chairs email firstname.lastname@example.org.
Curtis is a filesystem developer at heart, spending the last 36 of his 45 years of programming experience working on filesystems and nearly every type of storage-related technology. He’s currently working at Panasas helping steer PanFS toward a more commercial view of the HPC market. He also enjoys watching the business side of the house do their things, it's foreign to tech but has its own internal logic and “architecture”.
Johnu George is a staff engineer at Nutanix with a wealth of experience in building production grade cloud native platforms and large scale hybrid data pipelines. His research interests include machine learning system design, distributed learning infrastructure improvements and ML workload characterization. He is an active open source contributor and has steered several industry collaborations on projects like Kubeflow, Apache Mnemonic and Knative. He is an Apache PMC member and currently chairing Kubeflow Training and AutoML Working groups.
Oana is an Assistant Professor in the School of Computer Science at McGill University. Her research focuses on storage systems and data management systems, with an emphasis on large-scale data management for machine learning, data science, and edge computing. She completed her PhD at the University of Sydney, advised by Prof. Willy Zwaenepoel. Before her PhD, Oana earned her Bachelors and Masters degrees in Computer Science from EPFL.
Huihuo Zheng is a computer scientist at Argonne National Laboratory. His research interests include data management and parallel I/O for deep learning applications, as well as large scale distributed training on HPC supercomputers. He also applies HPC and deep learning to solve challenging domain science problems in physics, chemistry and material sciences. Huihuo received his PhD. in Physics at the University of Illinois at Urbana-Champaign in 2016.
|
OPCFW_CODE
|
By David H. Ringstrom, CPA
Hackers are actively exploiting Java to control affected computers, potentially installing malware, attempting identity theft, and other malicious actions. Over the weekend, Oracle released Java 7 Update 11, which reportedly patches this vulnerability. All computer users that have Java installed on their computer should install this patch immediately
. Another alternative is to disable Java in all web browsers. US-CERT, sponsored by the US Department of Homeland Security, offers more details and remediation guidance on its website
Oracle Java 7 Update 10 and earlier reportedly are being actively exploited by hackers. It's possible that some earlier versions, such as Java 6, aren't affected, but to be safe, all users should immediately disable any version of Java or install Java 7 Update 11. Java 7 Update 10 and later offer a check box to disable Java in web browsers, but earlier versions of Java don't offer this feature.
To access Java on a Windows computer, locate the Java icon in the Windows Control Panel. Click the About button on the General tab, as shown in Figure 1, to determine the version of Java you have installed. If it reads Version 7 Update 11, you have the latest version of Java installed. In this case, you may still wish to disable Java. To do so, close the About Java window and click on the Security tab as shown in Figure 2. Deselect the Enable Java Content in the Browser check box and then click OK.
Figure 1: Click the About button on the General tab of Java's Control Panel icon to determine your Java version.
Figure 2: Java 7 Update 10 and later allow you to disable Java by deselecting a check box.
If you don't have Java Version 7 Update 11 or later, click the Update tab, and then the Update Now button as shown in Figure 3, and then follow the onscreen prompts to install the latest version of Java. Once you install this update, the check box shown in Figure 2 may still be missing from the Security tab. If so, close the Java Control Panel and relaunch it by double-clicking on the javacpl.exe file that will likely be found in one of these two locations:
- C:\Program Files\Java\jre7\bin
- C:\Program Files (x86)\Java\jre7\bin
Figure 3: You can download the latest version of Java from within the Java Control Panel.
On a Macintosh OS X computer, launch a Finder window, search for Java, double-click on Java Preferences, and then follow the aforementioned instructions.
Oracle offers specific guidance on removing Java on its website
About the author:
David H. Ringstrom, CPA heads up Accounting Advisors, Inc., an Atlanta-based software and database consulting firm providing training and consulting services nationwide. Contact David at email@example.com or follow him on Twitter. David speaks at conferences about Microsoft Excel, and presents webcasts for several CPE providers, including AccountingWEB partner CPE Link
|
OPCFW_CODE
|
If you are the lead user of your account you can use the Admin menu in the application tool bar to access some of the settings for your account. A short video demonstration of the Admin functions is included at the end of this article.
First you can manage the users in your account. If your account is set up for more than one user then you can configure the various additional users here. If you have questions about users and your account, or you want to increase the number of users in the account, please contact email@example.com.
Second you can manage the status levels available for decisions. Status levels let you track the completeness of your work on the core objects in your database, the Decisions. You can define as many or as few status levels as you like and assign each a color. There is no requirement for each level to have a unique color. These colors will be used in the Decision Requirements Diagram so you can quickly see the status of your Decisions.
The default database contains three status levels – In Process, Completed and Implemented. Add a new status level “Ready for Review” and pick any color you like. Use the up and down arrows to move it so it is above Implemented in the list. The list is sequential – each Status Level from top to bottom is considered “more complete” than the last. Click the Update button to apply.
The final element is the assignment of Completeness Checks to Status Levels. DecisionsFirst Modeler has a number of built in completeness and correctness checks. Some of these are Errors and some are Warnings. When triggered, these checks will highlight elements of the Decision as being incomplete or inconsistent. The checks are described in more detail here. To ensure you have flexibility the checks are not applied automatically – you must select the Status Level at which you want to enforce the check. Any Decision that is in that Status or a more complete one (one displayed further down the list) will execute the check and display any Warnings or Errors that apply.
Find the Error “Decision does not impact an Objective” and change the Apply To column to Ready for Review. Hit the Update button to apply and then close the User Management tab.
Open the Support Decisions diagram and then Edit the Determine which customers should be notified of a bug fix Decision. Change the status to Ready for Review and Save. Notice on the diagram that the color of the Decision has changed (you may need to Refresh the diagram using the Refresh button to see it). Return to the Decision editing window and click the Check Completeness button. A message displays temporarily telling you that there is an Error on the object. Scroll down and you will see the Objective list is highlighted in red. Mouse over this to see the Error message.
To clear the error we will associate this Decision with an Objective. Enter customer in the Search box and hit Enter. Take the Customer Retention objective and drag it to the Objective list. Hit the save button and then Check Completeness again – the error has been cleared.
Admin Panel Video Demonstration
<Previous> <Tutorial Overview>
|
OPCFW_CODE
|
*Some of these links are affiliate links, which give me a small commission that helps to support this Youtube channel. The cost remains the same to you, but if you don’t want to use the affiliate link you can simply search for the products yourself on Amazon.
Do this to Achieve Success & Prosperity I Dr Karishma Ahuja
YNW Melly ft. Kanye West - Mixed Personalities (Lyrics) ♪
LETS GET THIS IMPOSTER!! Among Us Live with Chat! Come Join!
COVID-19 measures on Public Transport| NBS Up and About
Van Canto - The Seller of Souls live @Prague 10/2011
What causes antibiotic resistance? - Kevin Wu
Winter Recital 2016 - Ballad for Two (George Nevada) and Incessant Gallop
T-Direction is competitively priced to meet your budget...
Waterproof Vs. Water Resistant (PART ONE)
WOW! Singer sounds the same as BOB MARLEY! | STORIES #29
Odisha school college open date ||Odisha school college open date 2021 ||collage open date 2021 |
The Value of Disappointment | Joanie Quinn | TEDxPCC
Popcaan - MAMAKITA [Official Video]
LEARN 105 ENGLISH VOCABULARY WORDS | DAY 3
FIFA 10 - Davo Is SUPERB! The Perfect Pro Clubs Player. [HD]
DJ Snake feat Majid Jordan - Recognize (Mercer Remix)
Distinguished Player👽 FreeFire Highlights
IS THIS WHOLE THING A SHAM?
Christmas Lights Celebration 2020
L/F/D/M - Hassle Free [VOI022]
Live: NBC News NOW - December 14
Blooom & Ghost'n'Ghost - Desire [NCS 1 HOUR]
Always Take A Look, So The Content Doesn't Overcook
25+ PRESSURE POINTS AND MASSAGE TECHNIQUES TO STOP YOUR PAIN
Life Of Billionaires | Rich Lifestyle Of Billionaires | Motivation #8
Class-XI Some Commonly Used Quantities
Supreme Court finds dog owner liable for injuries in an attack outside his property
🔞DIRECT TV GO ! COMO FUNCIONA? VALE A PENA? PLATAFORMA IPTV LEGALIZADA!
Transformations Rotating Shapes (1)
The Law of Favor | Dr. Bill Winston
Spencer Sutherland - It May Sound Strange (Official Video)
The Soft Underbelly Of The Australian Economy... The DFA Daily 04 February 2021
BEST OF FITZ 2017
Who Is Ron DeSantis? Narrated By Yedoye Travis | NowThis
What if a medicine listened to cues from your body to provide therapy when and where it's needed?
Rebuke - Along Came Polly
Student Life In Latvia Part 2 Blog 2018 | Student Life In Riga | Study in Latvia
Breaking Point | "One Of A Kind" | Music Video
7 Ways Magnesium Improves Your Brain
The Beatles - Yesterday (Lockdown Acapella Version)
ไม่เข้าใจ - BOSSSICK x 2T FLOW (PROD.ZEEBEER)
Economist Predicts ECONOMIC CRASH from Massive Money ‘Bubble’ | The Glenn Beck Program
Barely Enough- Spring Coffeehouse 2018
Top 10 Tools in 2020? Let's find out!
VOLLEYBALL COACH REACTS TO HAIKYU S1 E13 - Rival
Christina Aguilera - Dirrty (VIDEO) ft. Redman
Abraham Hicks 🌹 BEGIN TO UNDERSTAND HOW YOU CREATE YOUR REALITY 🌹
An Improved Argumentative Essay with Commentary
The Lost Primarchs EXPLAINED By An Australian | Warhammer 40k Lore
|
OPCFW_CODE
|
Possibility to deserialize &Value to type cloning on-the-way?
I'm trying to deserialize json input into either a "success" or an "error" type, both of which are fully owned (no borrows) by design.
In serde 0.9, I first deserialized the input stream to a json Value, then passed &value to two different call attempts - I assumed that this would be pretty cheap, since a failed parse would generally fail on the first field attempted being missed, and thus no strings would need to be reallocated for the failed attempt.
In serde 1.0 though, I can't seem to get this to work.
My code is roughly:
let json: serde_json::Value = match serde_json::from_reader(&mut response) {
Ok(v) => v,
Err(e) => return Err(Error::with_url(e, Some(response.url.clone()))),
};
let result = match R::RequestResult::deserialize(&json) {
Ok(v) => v,
Err(e) => {
match R::ErrorResult::deserialize(&json) {
Ok(v) => return Err(Error::with_json(v, Some(response.url.clone()), Some(json))),
// Favor the primary parsing error if one occurs parsing the error type as well.
Err(_) => return Err(Error::with_json(e, Some(response.url.clone()), Some(json))),
}
}
};
R::RequestResult and R::ErrorResult are associated types which are bound to serde::Deserialize<'static> + 'static.
The error I get with serde 1.0:
error: `json` does not live long enough
--> src/lib.rs:560:59
|
560 | let result = match R::RequestResult::deserialize(&json) {
| ^^^^ does not live long enough
...
572 | }
| - borrowed value only lives until here
|
= note: borrowed value must be valid for the static lifetime...
error: `json` does not live long enough
--> src/lib.rs:563:52
|
563 | match R::ErrorResult::deserialize(&json) {
| ^^^^ does not live long enough
...
572 | }
| - borrowed value only lives until here
|
= note: borrowed value must be valid for the static lifetime...
error: aborting due to 2 previous errors
Ideally I would like to deserialize from a borrowed json value and only clone things only once each field has been successfully found - so that the most common error of "missing field" won't cause any extra allocations (like if I had to clone the whole json value).
If this isn't possible, I can always create a go-between struct which implements Deserialize for borrowed values, then provides a to_owned method. This wouldn't be ideal, as it would require an additional structure and implementation in every different result type, but it would be a workable solution if there isn't any other.
Hope that's a good explanation of my use case - I can provide more examples if needed. Thanks!
Managed to fix this, seems it was just an error in how I declared my trait! Thanks to oli-obk in irc.
Working version has AssocItem: for<'de> Deserialize<'de> instead of AssocItem: Deserialize<'static>
You probably want to use DeserializeOwned, which is just an alias for the for<'de> Deserialize<'de>. Unfortunately it's not exported in crate root, you have to import like this: use serde::de::DeserializeOwned;.
@daboross https://serde.rs/lifetimes.html gives a bit more detail about these lifetimes, and trait bounds in particular.
|
GITHUB_ARCHIVE
|
Excellent Question and then for the Answer. I was wondering of that as well. I think for a 3rd person mini game which I am getting ready to go into productions soon, an I5 4 core would do fine. I think even and 4th gen i3 would be fine.
You setup your lighting how you want it to be and when you’re ready you can Build Lighting, it will then calculate high quality GI lighting and render to lightmaps. When you’re done with your game it can package the game to make an .exe but at that point the lighting is already built.
It really depends on the size of your level and the complexity of your meshes, both of which shouldn’t be an in your case (for iOS games). It will take 2x longer to build the lighting with a 2 core processor compared to a 4, but it shouldn’t be a big problem in your case again. I would personally get a quad core in your situation since a year from now who knows where mobile games will be at graphically, plan for the future I say.
At first when I read 1.4GHz I was ready to say no way, but it looks like it runs at that speed as a power saving measure, with a 2.7GHz turbo speed. Intel HD graphics usually don’t play well with UE4, the HD 3000 won’t even run on the PC side, and the 4000 series has difficulties on the mac side. I haven’t heard any users reporting on the HD 5000 so far but there may be some in here, have a look through so you can see how a comparable system will run UE4. If you turn down all of the graphics options in UE4 it might work, but it will most likely still be really slow.
If the mini had a better graphics card I would say it would work for sure, but I can’t guarantee you will be able to work without issues in UE4 on those specs. Sorry to be the bearer of bad news…
These are the recommended specs from Epic:
But since you are getting one anyways, it would be great if you could let us know how well it works! I’m sure there are many others wondering about .
Performance will not be great, but it will run. I think they’ve fixed the recently, but you may even get better performance if you install Windows on there (Bootcamp), there were issues in the past that UE4 on Mac ran slower on the same hardware, but I think they may have fixed that.
Not sure about Mac’s but with PC’s and DirectX 12 coming out soon, more cores is the way to go.
At the moment most none Mantle game engines use a main or single rendering thread/one core and farm out tasks/threads to other cores. Mantle and DirectX 12 allow all the cores to feed the GPU as well as optimising the information sent between the CPU and GPU.
Result massive ‘Free’ boost in performance.
Check out the Microsoft DirectX 12 video where they adapt a 3DMark test to use DirectX12 and go from a main thread/core taking 6-7 ms a frame down to 2-3 ms a frame. So that 3DMark is twice as fast on the same hardware!
I’m guessing that’s running on a quad core CPU.
More cores spreads the load on engines that support DirectX 12.
Hopefully OpenGL Next(?) will be out soon as well.
|
OPCFW_CODE
|
Groundlight's SDK accepts images in many popular formats, including PIL, OpenCV, and numpy arrays.
The Groundlight SDK can accept PIL images directly in
submit_image_query. Here's an example:
from groundlight import Groundlight
from PIL import Image
gl = Groundlight()
det = gl.get_or_create_detector(name="path-clear", query="Is the path clear?")
pil_img = Image.open("./docs/static/img/doorway.jpg")
OpenCV is a popular image processing library, with many utilities for working with images.
OpenCV images are stored as numpy arrays. (Note they are stored in BGR order, not RGB order, but as of Groundlight SDK v0.8 this is the expected order.)
OpenCV's images can be send directly to
submit_image_query as follows:
cam = cv2.VideoCapture(0) # Initialize camera (0 is the default index)
_, frame = cam.read() # Capture one frame
gl.submit_image_query(detector, frame) # Send the frame to Groundlight
cam.release() # Release the camera
The Groundlight SDK can accept images as
numpy arrays. They should be in the standard HWN format in BGR color order, matching OpenCV standards.
Pixel values should be from 0-255 (not 0.0-1.0 as floats). So
uint8 data type is preferable since it saves memory.
Here's sample code to create an 800x600 random image in numpy:
import numpy as np
np_img = np.random.uniform(low=0, high=255, size=(600, 800, 3)).astype(np.uint8)
# Note: channel order is interpretted as BGR not RGB
Channel order: BGR vs RGB
Groundlight expects images in BGR order, because this is standard for OpenCV, which uses numpy arrays as image storage. (OpenCV uses BGR because it was originally developed decades ago for compatibility with the BGR color format used by many cameras and image processing hardware at the time of its creation.) Most other image libraries use RGB order, so if you are using images as numpy arrays which did not originate from OpenCV you likely need to reverse the channel order before sending the images to Groundlight. Note this change was made in v0.8 of the Groundlight SDK - in previous versions, RGB order was expected.
If you have an RGB array, you must reverse the channel order before sending it to Groundlight, like:
# Convert numpy image in RGB channel order to BGR order
bgr_img = rgb_img[:, :, ::-1]
The difference can be surprisingly subtle when red and blue get swapped. Often images just look a little off, but sometimes they look very wrong.
Here's an example of a natural-scene image where you might think the color balance is just off:
In industrial settings, the difference can be almost impossible to detect without prior knowledge of the scene:
For a unified interface to many different kinds of image sources, see the framegrab library. Framegrab is still an early work in progress, but has many useful features for working with cameras and other image sources. Framegrab provides a single interface for many different kinds of image sources, including:
- USB cameras
- IP cameras
- Video files
- Image files
|
OPCFW_CODE
|
#include <stdlib.h>
#include <stdio.h>
#include <vector>
#include <string>
#include <sstream>
#include <iostream>
#include <stdexcept>
#include <functional>
#include <chrono>
#include <random>
#ifndef SIMPLE_PID_CONTROLLER_HPP
#define SIMPLE_PID_CONTROLLER_HPP
namespace simple_pid_controller
{
class PIDParams
{
protected:
double kp_;
double ki_;
double kd_;
double i_clamp_;
public:
PIDParams(const double kp, const double ki, const double kd, const double i_clamp) : kp_(kp), ki_(ki), kd_(kd), i_clamp_(i_clamp) {}
PIDParams() : kp_(0.0), ki_(0.0), kd_(0.0), i_clamp_(0.0) {}
inline double Kp() const
{
return kp_;
}
inline double Ki() const
{
return ki_;
}
inline double Kd() const
{
return kd_;
}
inline double Iclamp() const
{
return i_clamp_;
}
};
class SimplePIDController
{
protected:
bool initialized_;
double kp_;
double ki_;
double kd_;
double integral_clamp_;
double error_integral_;
double last_error_;
public:
SimplePIDController(const double kp, const double ki, const double kd, const double integral_clamp)
{
Initialize(kp, ki, kd, integral_clamp);
}
SimplePIDController(const PIDParams& params)
{
Initialize(params.Kp(), params.Ki(), params.Kd(), params.Iclamp());
}
SimplePIDController()
{
kp_ = 0.0;
ki_ = 0.0;
kd_ = 0.0;
integral_clamp_ = 0.0;
error_integral_ = 0.0;
last_error_ = 0.0;
initialized_ = false;
}
inline bool IsInitialized() const
{
return initialized_;
}
inline PIDParams GetParams() const
{
return PIDParams(kp_, ki_, kd_, integral_clamp_);
}
inline void Zero()
{
last_error_ = 0.0;
error_integral_ = 0.0;
}
inline void Initialize(const double kp, const double ki, const double kd, const double integral_clamp)
{
kp_ = std::abs(kp);
ki_ = std::abs(ki);
kd_ = std::abs(kd);
integral_clamp_ = std::abs(integral_clamp);
error_integral_ = 0.0;
last_error_ = 0.0;
initialized_ = true;
}
inline double ComputeFeedbackTerm(const double target_value, const double process_value, const double timestep)
{
// Get the current error
const double current_error = target_value - process_value;
return ComputeFeedbackTerm(current_error, timestep);
}
inline double ComputeFeedbackTerm(const double current_error, const double timestep)
{
// Update the integral error
const double timestep_error_integral = ((current_error * 0.5) + (last_error_ * 0.5)) * timestep; // Trapezoidal integration over the timestep
const double new_error_integral = error_integral_ + timestep_error_integral;
error_integral_ = std::max(-integral_clamp_, std::min(integral_clamp_, new_error_integral));
// Update the derivative error
const double error_derivative = (current_error - last_error_) / timestep;
// Update the stored error
last_error_ = current_error;
// Compute the correction
const double correction = (current_error * kp_) + (error_integral_ * ki_) + (error_derivative * kd_);
return correction;
}
};
}
#endif // SIMPLE_PID_CONTROLLER_HPP
|
STACK_EDU
|
Custom Error Pages
Set up custom error pages, that will be displayed for example if a subpage was not found. So you can keep users on your website, if something goes wrong.
If the user requests a page that was not found in the Fishbeam Cloud, an error message will be displayed with the Fishbeam Cloud logo by default. In order to get a better user experience and to inform the user in case of errors on existing pages, you can create your own error pages.
How to Make a Custom Error Page in Goldfish
- Open your website in Goldfish.
- Create a new page below the start page and name it error404.html or error404.php.
- Design the page according to your wishes.
- Important: Enable the option Page> Use absolute file paths in the page properties.
- Publish the changes.
From now on, the Fishbeam Cloud automatically shows your custom error page whenever a page is not found.
It is possible to use different error pages in each subfolder. The Fishbeam Cloud always looks in the current folder for an error page and if there is n't any it looks in the overlying folder. So you can create a mobile error page and put it in the folder /mobile/ for example. Then, this page will be shown to all users with a smartphone.
Types of Error Pages
In addition to error pages for the error 404 - Page not found, the Fishbeam Cloud also supports other error messages for which you can create separate pages.
|Error 401 - Unauthorized Access
|Create the page error401.html or error401.php in Goldfish. This error page is loaded when trying to load a password-protected page without a valid password.
|Error 403 - Forbidden
|Create the page error403.html or error403.php in Goldfish. This error page is loaded if the web server does not have permission to deliver the page.
|Error 404 - File Not Found
|Create the page error404.html or error404.php in Goldfish. This error page is loaded if a page is not found.
|Error 500 - Internal Server Error
|Create the page error500.html or error500.php in Goldfish. This error page is loaded when a server error occurs, e.g. if the Fishbeam Cloud is overloaded.
|Create the page error.html or error.php in Goldfish. This general error page is loaded when one of the described errors occurs but no specific error page is found.
Only create the error.html or error.php page if you do not want to create a separate page for each type of error. This page will be displayed then for every error, no matter of what type.
Did this help page answer your questions?
If you need additional assistance regarding this topic or if there's missing some information in this chapter, please write us.
|
OPCFW_CODE
|
Using zmq.asyncio.Context.instance creates a new instance rather than yield the existing one.
Hello :)
I've been working with asyncio and inproc transport. I was not getting any messages through though. I managed to get a minimal test case working with dealer sending a simple string to the router. Here's the code for that...
import zmq.asyncio
z_ctx = zmq.asyncio.Context()
z_ctx2 = zmq.asyncio.Context.instance()
print("ASYNCIO TEST ID(z_ctx) :", id(z_ctx))
print("ASYNCIO TEST ID(z_ctx2) :", id(z_ctx2))
dealer = z_ctx.socket(zmq.DEALER)
router = z_ctx.socket(zmq.ROUTER)
router.bind("inproc://thingy")
dealer.connect("inproc://thingy")
def test():
dealer.send_multipart([b"Hi"])
result = router.recv_multipart()
print(result)
test()
z_ctx.destroy(linger=0)
z_ctx2.destroy(linger=0)
Here's the output of this code
ASYNCIO TEST ID(z_ctx) :<PHONE_NUMBER>
ASYNCIO TEST ID(z_ctx2) :<PHONE_NUMBER>
<Future finished result=[b'\x00\xa6\xc7~^', b'Hi']>
What is interesting is that the id of each of the two contexts is different, but in the above code, I only use one to create both sockets. Now here is the code with a slight modification that causes no message to be received, as you can probably guess, I use z_ctx for the dealer, and z_ctx2 for the router
import zmq.asyncio
z_ctx = zmq.asyncio.Context()
z_ctx2 = zmq.asyncio.Context.instance()
print("ASYNCIO TEST ID(z_ctx) :", id(z_ctx))
print("ASYNCIO TEST ID(z_ctx2) :", id(z_ctx2))
dealer = z_ctx.socket(zmq.DEALER)
router = z_ctx2.socket(zmq.ROUTER)
router.bind("inproc://thingy")
dealer.connect("inproc://thingy")
def test():
dealer.send_multipart([b"Hi"])
result = router.recv_multipart()
print(result)
test()
z_ctx.destroy(linger=0)
z_ctx2.destroy(linger=0)
Now the output is
ASYNCIO TEST ID(z_ctx) :<PHONE_NUMBER>
ASYNCIO TEST ID(z_ctx2) :<PHONE_NUMBER>
<Future pending cb=[_AsyncSocket._add_recv_event..() at /Users/jamescrowther/Library/Application Support/Blender/2.92/scripts/addons/crowdrender/lib/Darwin/3_7/zmq/_future.py:351]>
As you can see the router socket has not received anything as the returned future is still in the pending state. If I run an event loop to wait for the future to complete, it never returns, hanging the interpreter/thread.
I even tried swapping the contexts used for each of the sockets so that the router had z_ctx and dealer z_ctx2, but this made no difference, the same result happened.
I am fairly sure this is not intended. The docs for zmq.Context.instance basically tell us that its better to use the instance method than to 'pass around' copies of the original context created in that thread. But, for asyncio at least, I cannot do this and get inproc sockets to work, as its been stated in another issue, one cannot use two different contexts if you wish to get inproc sockets to work.
So, I feel we're in need of a fix for this, or a change to the docs to state that for inproc/asyncio, you cannot use zmq.asyncio.Context.instance() to get a ref to the current threads global zmq context.
If I have missed this fact in the docs somewhere, I apologise ;) please point me to the right place and feel free to close this issue.
Best wishes
J
I think I must be confused - the title mentions Context.instance(), but the code sample doesn't use Context.instance(). It definitely won't do anything if you don't use it...
Using the Context() constructor, as is done here, does always return a new context, like all Python object constructors.
If you change your script to use Context.instance():
z_ctx = zmq.asyncio.Context()
z_ctx2 = zmq.asyncio.Context()
then z_ctx is z_ctx2 and inproc should work just fine, since it is only for communication within a single context. With that change, your script gives me:
ASYNCIO TEST ID(z_ctx) :<PHONE_NUMBER>25328
ASYNCIO TEST ID(z_ctx2) :<PHONE_NUMBER>25328
<Future finished result=[b'\x00\x80\x00A\xa7', b'Hi']>
Hi Min my mistake, I've edited the code to make the second context use the instance method. I've re-run this test and the same result happens
ASYNCIO TEST ID(z_ctx) :<PHONE_NUMBER>
ASYNCIO TEST ID(z_ctx2) :<PHONE_NUMBER>
^C
Sent an internal break event. Press ^C again to kill Blender
Traceback (most recent call last):
File "/more_asyncio_tests.py", line 31, in <module>
File "/Applications/Blender.app/Contents/Resources/2.92/python/lib/python3.7/asyncio/base_events.py", line 574, in run_until_complete
self.run_forever()
File "/Applications/Blender.app/Contents/Resources/2.92/python/lib/python3.7/asyncio/base_events.py", line 541, in run_forever
self._run_once()
File "/Applications/Blender.app/Contents/Resources/2.92/python/lib/python3.7/asyncio/base_events.py", line 1750, in _run_once
event_list = self._selector.select(timeout)
File "/Applications/Blender.app/Contents/Resources/2.92/python/lib/python3.7/selectors.py", line 558, in select
kev_list = self._selector.control(None, max_ev, timeout)
KeyboardInterrupt
Error: Python script failed, check the message in the system console
If the router socket uses the second z_ctx2 object, then I get the output in my last comment, and I have to Ctrl - C to get the terminal to respond again, it hangs my interpreter. If I just use z_ctx for both, then it returns the following
ASYNCIO TEST ID(z_ctx) :<PHONE_NUMBER>
ASYNCIO TEST ID(z_ctx2) :<PHONE_NUMBER>
<Future finished result=[b'\x00%P\xe2a', b'Hi']>
Yeah, it makes sense that if the two contexts are different inproc wouldn't work. Both contexts must use the instance method. zmq.Context() does not register a context as the global instance, only instance() does that. Using zmq.Context() means you don't want to use the global instance.
Thanks Min, that makes sense, and this works too, can confirm in my example above that if I do
z_ctx = zmq.asyncio.Context.instance()
z_ctx2 = zmq.asyncio.Context.instance()
in my example above, then the output is
ASYNCIO TEST ID(z_ctx) :<PHONE_NUMBER>
ASYNCIO TEST ID(z_ctx2) :<PHONE_NUMBER>
<Future finished result=[b'\x00\x85\xaa+J', b'Hi']>
The docs show an example, but the example didn't really drive this home to me, this idea of an instance method was new, and I thought I'd try it out. But I had become used to always creating a context somewhere that simply used zmq.Context(). The docs didn't mention that if I used zmq.Context(), then this would not create a global instance and I'd get these strange results.
They just show this example
class MyClass(object):
def __init__(self, context=None):
self.context = context or Context.instance()
It might be worth mentioning that this is an either/or situation where if you intend to use global contexts then stick to using the instance method, unless you want one that is not global. Don't know why anyone would want that though as I seem to recall somewhere that said you should stick to one context object per process?
There are various scoping/threading reasons to use different contexts in a process in some advanced use cases (ensure everything around a task is cleaned up when it's done, avoid GIL interactions from the IO thread in a background device function, etc.)
But in general, one context for an application usually does make sense.
I'm not sure how to improve the docs around the global instance and Context() always returning a new context like any other constructor, but suggestions are certainly welcome. I'm going to close this issue as resolved.
Hi @minrk Had a thought about this, a bit late since the issue is closed, but I think a slight tweak to the layout of text and examples on the page would actually do the trick here.
Most single-threaded applications have a single, global Context. Use this method instead of passing around Context instances throughout your code.
context = zmq.Context.instance()
Using zmq.Context() will always create a separate context to one generated by zmq.Context.instance()
A common pattern for classes that depend on Contexts is to use a default argument to enable programs with multiple Contexts but not require the argument for simpler applications:
class MyClass(object):
def __init__(self, context=None):
self.context = context or Context.instance()
|
GITHUB_ARCHIVE
|
How to know if file is deleted and created in bash?
I have a scenario where a script exists to remove files according to the conditions.
Currently the script is like:
rem.sh
If [ -e file1 ]; then
rm -f file1
elif [ -e file2 ]; then
rm -f file2
fi
I have another script which calls rem.sh
main.sh
1 rem.sh #should remove file1
2 touch file1
3 xxx
4 xxx
5 xxx
6 rem.sh #should remove file2 only
7 touch file2
If I run main.sh, It only remove file1 and create file1. Is there way to remove and create file2 and so on?
EDIT: I want to replace [-e file] with some condition which removes file2 when rem.sh is called for second time
Is the question "How to know if" or "how to remove"?
@Keldorn If i know “if a file is deleted and created”, I can olace that condition insted of “-e file”
rem.sh says "if file1 exists, remove file1, otherwise, if file2 exists, remove file2." Is that what it's supposed to do?
Sounds like you don't want elif and instead just have separate if blocks. Or even better, just rm -f file1 file2 since this will delete the files only if they exists
I think maybe he only wants to delete one of the two files, if they both exist?
@WillisBlackburn yes.That is correct. I want to replace that condition (-e file) and place a condition which allows to remove file2 when rem.sh is called for second time
Why bother with rem.sh at all? Why not just replace your first call to rem.sh with rm -f file1 and the second one with rm -f file2?
@WillisBlackburn I can do that. But I’m looking to create a seperate script to remove files
Why? What's the purpose of the separate script?
@chepner To remove those files, we have to call with sudo. If I create a seperate file, I can just call the script with sudo
your if/elseif designed to do either one branch or possibly the other. If you want both change into two if statements.
@karakfa If I do that, each time I run rem.sh - file1 will be deleted
Please clarify your question. People are ready to help but do not understand what you want to do.
I also don't understand. Can you write comments in main.sh saying for example rem.sh # Should only remove file1 or rem.sh # Should remove file1 and file2
So you want to run the same script, in the same conditions (ie file1 and file2 exist) but have the rem.sh script behave differently the second time?
@Keldorn yes please
As is, rem.sh can not know if it is the second time it is running, if all things are the same. You could have some history saved somewhere, in another file. But chances are you can just simplify the problem as in the answer below.
instead of complicating, what I think you just want to delete the contents of the files at certain points of your main script. So instead of rm/touch pair, just dump null into the file.
1 >file1
2 xxx
3 xxx
4 >file2
5 xxx
Are you trying to have the condition work for each file? So you don't have to type the file name each time? If so, you can just have a counter or use the read command to send user input to a variable, then use that in your if statement.
|
STACK_EXCHANGE
|
Group libraries allow different Zotero users to access a shared library. You can use this to collaboratively build a library with others or to share a curated library on a particular topic with a wider audience.
Create a Group Library
- In your Zotero desktop client:
- Click the New Library icon (it looks like a cardboard box with a green plus sign next to it).
- Select "New Group..."
- You will automatically be taken to the Zotero website. If you're not already logged in, log into your Zotero account.
- On the Zotero website:
- If you're not already logged in, log into your Zotero account.
- From the menu across the top of the page, select "Groups."
- Click on "Create a New Group."
- Choose a unique name for your group. The group URL, below the input field, is automatically created and will turn red if your proposed name is already taken.
- Choose the type of group you are creating:
- Private Groups: Only you and those you choose to invite to your group will be able to view the group's research. This is probably your best option if you're working on a group project for a class or a degree but don't plan to disseminate your materials widely.
- Public, Closed Membership: You can allow anyone to view your group library, but only those you invite can contribute. This is probably your best option if you're working on a group project but would also like to make at least your bibliography available to the broader scholarly community.
- Public, Open Membership: You can allow anyone to view and contribute to your group library. This is probably your best option if you're working on a crowdsourced project. (Note that shared file storage is not available to public, open groups.)
- Your group libraries should automatically sync to your Zotero account and appear below your personal library in the left pane of your Zotero client.
- If you don't see a newly created group library immediately, try manually syncing your Zotero application and your online Zotero account by clicking on the green curving sync arrow in the top right corner.
- Items can be copied and moved between private and group libraries.
- Anything added by any members of the Group Library, including tags and notes, will appear for all members.
Click Members Settings and then Send More Invitations. You can invite collaborators to your group either by their Zotero user name or by the e-mail address associated with their account.
Help those who don't already use Zotero by including a link to this guide (http://guides.library.harvard.edu/zotero) in the optional personal message.
Your collaborators will receive an e-mail inviting them to your group.
An important consideration: Any PDFs stored within a Group Library will count against the storage limit of the owner of that Group Library. While you have unlimited storage if your Zotero account is linked to your Harvard e-mail, you should bear in mind that this unlimited storage will default to Zotero's free 300MB storage plan once you leave Harvard.
|
OPCFW_CODE
|
Is there any way to implement skip repeat with out state being equitable
Hello,
is it mandatory to implement equitable for skip repeat?
it is hard to implement equitable for all states, is there any workaround?
Example:
I want to avoid making all my states manually Equatable(like below) yet work with skip repeat.
struct Counter: Equatable {
var data:Int=0
public static func ==(lhs: Counter, rhs: Counter) -> Bool {
return lhs.data == rhs.data
}
}
Thanks in advance
When your state isn't Equatable, you can pass a block to skipRepeats and do whatever you want inside. See: https://github.com/ReSwift/ReSwift/blob/master/ReSwift/CoreTypes/Subscription.swift#L98
is there any way to make all my states equatable by default so that when I subscribe to multiple states, I can listen to only changed states.
You have to conform to Equatable on your own, I'm afraid. And if you want to subscribe to a selection of substates, you have to provide the skipRepeats implementation of the combined selection tuple, too :)
You can, if you make it class Counter :)
Can you please share how to do that?
I have tried a couple of ways and failed!!
I don't think your reasons to look for reference type semantics are good (i.e. "Equatable conformance gets on my nerves"), but if you have problems with class states, please share details in #346
@coderMemberam Swift auto-generates Equatable conformance for structs. Theres no need to manually implement func == if all sub-types are Equatable. The following compiles & works as appropriate:
struct Foo: Equatable {
var a: Int
var b: Int
}
Foo(a: 1, b: 2) == Foo(a: 1, b: 2) // true
I am getting below compilation issue!!
App State:
import ReSwift
struct AppState: StateType {
var counter=Counter()
var counter2=Counter2()
}
struct Counter: Equatable {
var data:Int=0
}
struct Counter2 {
var data:Int=0
}
Reducer:
import ReSwift
// the reducer is responsible for evolving the application state based
// on the actions it receives
func handleAction(action: Action, state: AppState?) -> AppState {
// if no state has been provided, create the default state
var state = state ?? AppState()
switch action {
case _ as CounterActionIncrease:
state.counter.data += 1
case _ as CounterActionDecrease:
state.counter.data -= 1
case _ as CounterActionIncrease2:
state.counter2.data += 1
case _ as CounterActionDecrease2:
state.counter2.data -= 1
case _ as CounterAction1Static:
state.counter.data = state.counter.data
case _ as CounterAction2Static:
state.counter2.data = state.counter2.data
default:
break
}
return state
}
Subscription:
override func viewDidLoad() {
super.viewDidLoad()
mainStore.subscribe(self.state1Subscriber) { state in
state
.select { state in state.counter }
.skipRepeats{ $0 === $1 }
}
}
@coderMemberam What version of Xcode are you using? That exact code compiles just fine for me.
xcode Version 9.2
You'll need to update to 9.3 to get auto equatable conformance.
Does that won't create the problem, because my iOS APP support is from iOS 9.0
No, you can continue to deploy for all the way back to iOS 8.0 with Xcode 9.3.
oh, thank you. I have to download latest XCODE. I will try it and post here for any help.
|
GITHUB_ARCHIVE
|
JDBC Assignment Help
JDBC programs online professional tutor have the ability to teach trainees about the various ideas of JDBC. JDBC needs a large understanding of various fields and to get the very best JDBC shows task help we are here. DatabaseHomeworkHelp is company that assists to specialise in lots of DBMS subjects and specialises in JDBC programs issue option. JDBC has numerous cycles which requires
much understanding making it is important to get help with JDBC programs assignment by trainees. JDBC is primarily command line principles for which JDBC shows assignment help is important. JDBC is stability of Java and databases and to discover it JDBC shows assignment help is an excellent alternative. Image insertion in a database is an extremely technical idea for which you can get our JDBC shows research help. The JDBC API uses Java standard classes and user interfaces to connect to databases. In order to use JDBC to connect Java applications to a specific database server, a JDBC driver that supports the JDBC API for that database server is required.
Prior to JDBC, ODBC API was the database API to connect and carry out concern with the database. ODBC API makes use of ODBC driver which is made up in C language (i.e. platform dependent and unsecured). That is why Java has in fact defined its own API (JDBC API) that makes use of JDBC drivers (made up in Java language). JDBC has various cycles which needs much understanding making it is crucial to get support with JDBC programs assignment by trainees. Jdbc requires a broad understanding of numerous fields and to get the really finest jdbc reveals job help we are here. Jdbc has many cycles which needs much understanding making it is required to get help with jdbc reveals assignment by students. Odbc driver for any specific database provider can be downloaded generally from provider docs jdbc (java database connection) api is an application reveals user interface in java to provide user to link with database (mysql) for info change and advancement. Test can be found under the com.ap.assignment1.test package; in it, jdbc api was made use of to produce a connection, bring out a query, extract results and closing the connection. Jdbc development is an api (consisted of in both j2se and j2ee releases) that uses cross-dbms connection to a huge variety of sql databases and access to other tabular info sources, such as spreadsheets or flat files.
Jdbc, referred to as java database connection, is a java-based details gain access to development from sun microsystems, inc. Jdbc evaluates statements making use of one of the following classes: Trainees choosing for a reliable jdbc shows assignment and who long for excellent grades can technique database assignment help. In other words jdbc api is a collection of user interfaces and classes, which help a java application to connect to sql, based relational databases by abstracting provider specific details of the database. Jdbc ™ returns leads to an outcome set product, so we have to mention a scenarios of the class outcome set to hold our results. This chapter uses an example of methods to establish a standard jdbc application. When you need to produce your own jdbc application in the future, this sample example can serve as a style template.
JDBCAssignment Help by tutors:
- - 24/7 Chat, Phone & Email assistance
- - Monthly & expense efficient plans for routine clients;
- - Live for JDBC online test & online tests, midterms & examinations;
- - for report composing & case research studies on JDBC.
JDBC has numerous cycles which requires much understanding making it is important to get help with JDBC programs assignment by trainees. JDBC is primarily command line ideas for which JDBC shows assignment help is vital. JDBC is stability of Java and databases and to discover it JDBC programs assignment help is an excellent alternative. In order to use JDBC to connect Java applications to a specific database server, a JDBC vehicle driver that supports the JDBC API for that database server is required. Jdbc evaluates statements using one of the following classes: Trainees choosing for an efficient jdbc programs assignment and who long for fantastic grades can method database assignment help.
|
OPCFW_CODE
|
Hello, so i would need some help figuring out what could be wrong, but i have had a PC that i built about a year or 2 ago and realized a few months ago that my PSU was not delivering enough power to my components, so i bought a Seasoinc Prime 1300W 80+ Plat and installed that, but after installing my new PSU i have gotten an error where my PC runs smoothly and normally for about 1hr-2hrs (Sometimes up to 4 hrs) and then all of a sudden my programs stop working properly. What happens is that if i am in a game and quit the game it just gets stuck, it won't close it and the monitor the game was on just gets frozen on the last frame before i pressed the quit game button, other programs like my web browsers (i use multiple and all have same issue when this happens) just become blank and whiteout as well as saying in the top bar of that program that it is not working/responding, this also happens to other apps as Discord, Spotify and so on. Windows programs open up without any error when this happens, but when i click anything in them it is not doing anything, it just shows me the start page of that program from Windows, i can usually open up task manager but get the same thing where i can right click on a program and end the task but nothing will happen. If i get this when i have not been playing and been either AFK or just browsed the web then i usually can't open any programs and if i start trying to click anything in the browser it will go in to the program not responding state, Windows apps usually open normally or they take some extra time, games just don't launch at all other than looking like they have been launched in the background.
The only thing that will fix this error is by me pressing the reset button on my case, the restart or shut down in Windows works to start or press but it will just be stuck on the loading circle saying "Restarting" or "Shutting Down".
What i have done to try and solved this:
I have updated my Windows to the latest May 2020 Update, i don't suspect it being this as my friend has as well and he has no problems.
I have reset my CMOS and reset my overclock to standard but i still have the same problem.
I have updated all drivers, maybe i might have missed something?
I have changed my power options in Windows.
I only started having this problem with my new PSU, the PSU has a hybrid mode button for the fan in it, can that have any effect with my problem? I have asumed not as it is only for the PSU fan.
MOBO: Asus Rog Strix z-370-e gaming
RAM: Corsair Vengance 32GB
Graphics: MSI GeForce 1080Ti Gaming X
PSU: Seasonic Prime 1300W 80+ plat
Cooler: Corsair AiO H115i Pro rgb
|
OPCFW_CODE
|
The links to Sophia Jeffrey Epstein and the OTO
Comment from Seth Lloyd (Professor of Quantum Mechanical Engineering, MIT)
JB immediately tossed me in a corner of the restaurant with Sergey Brin and Larry Page, who grilled me on the potential applications of quantum computation. They were shockingly knowledgeable on the subject and quickly pushed me like a novice sumo wrestler to edge of the ring that marks the boundary between the known and the unknown. That boundary is always closer than one thinks.
At this point Jeffrey Epstein joined the conversation and demanded to know whether weird quantum effects had played a significant role in the origins of life. That question pushed me way out of the sumo ring into the deep unknown. We tried to construct a version of the question that could be answered. I was pushing my own personal theory of everything (the universe is a giant quantum computer, and to understand how things like life came into existence, we have to understand how atoms, molecules, and photons process information). Jeffrey was pushing back with his own theory (we need to understand what problem was being solved at the moment life came into being). By pushing from both sides, we managed to assemble a metaphor in which molecules divert the flow of free energy to their own recreational purposes (i.e., literally recreating themselves) somewhat in the way Jeffrey manages to divert the flow of money as it moves from time-zone to time-zone, using that money for his own recreational purposes (i.e., to create more money). I'm not saying it was the right way to describe the origins of life: I'm just saying that it was fun.
Ben Goertzel comment on edge.org from July 23, 2000
The "Internet as AI brain" is a fairly simple point, but Moravec chooses not to emphasize it. He points out, correctly, that simulating the detailed functioning of the human brain on contemporary computer hardware is very difficult, requiring a scale of processing power equal to millions of PC's. But he doesn't note that, through distributed processing across the Internet, it's possible to actually harness the power of millions of PC's, right now. Distributed.net and SETI@home started using the latent computing power of the Net, various start-up firms are now following in their footsteps — and this is only the beginning.
The "mind as network" metaphor is a powerful one. Mind is a massively parallel self-organizing system of interacting, intertransforming actors, many of them specialized to particular domains or particular processes. It demands a complex-systems-theoretic analysis. If a sufficiently deep and careful analysis of mental processes is carried out, in this vein, one discovers that the division between reasoning-based AI and neural-net based AI is largely bogus; reasoning emerges in a clear and detailed way as a statistical emergent from neural net dynamics. The network approach cuts through the apparently unresolvable knots set up by traditional AI theorists.
From these links we can see that Seth Lloyd, Sergei Brin, Larry Page and Ben Goertzel are in the position to have a relationship with Jeffrey Epstein back to 2000-04 which involved conversations relating to Intelligent Web Agents and Quantum Computing/Mechanics at the highest level.
He was after all was said and done hanging out with the top minds in Artificial Intelligence and Quantum Computing. What was it all for?
Thank you for your time, and support. See you in the next video.
Copyright (c)2019 Quinn Michaels. All Rights Reserved
#AskTyler #Tyler #TeamTyler #QAnon #FEECTING #TimePhoneHack #HiveMind🐝 #RealityArtist
$FEECTING = https://github.com/indraai/language-feecting/
|
OPCFW_CODE
|
Overview: The DITA Learning and training content specialization
The DITA 1.2 learning and training content specialization specifies a set of specialized DITA topics, a learning interactions domain, a learning metadata domain, and a learning map domain to support creating and delivering structured learning content with DITA.
This figure shows the specialized topics, map domain, interactions domain, and metadata domain used by the learning and training specialization and how they relate to the DITA core types.
Figure 1 - The specialized topics, map domain, interactions domain, and metadata domain used by the learning and training specialization and how they relate to the DITA core types.
Learning topic types
A set of five specialized topic types provide the basic content ingredients for creating structured, modular learning content with DITA 1.2.
Learning Plan topic type
- Describes learning needs and goals, instructional design models, task analyses, learning taxonomies, and other information necessary to the lesson planning process.
Learning Overview topic type
- Identifies the learning objectives, includes other information helpful to the learner, such prerequisites, duration, and intended audience.
Learning Content topic type
- Provides the learning content itself, and enables direct use of content from DITA task, concept, and reference topics, as well as additional content of any topic type that supports specific objectives declared in the Learning Overview topic type.
Learning Summary topic type
- Recaps and provides context for the learning objectives and provides guidance to reinforce learning and long-term memory.
Learning Assessment topic type
- Presents instruments that measure progress, encourage retrieval, and stimulate reinforcement of the learning content, and can be presented before the content as a pre-assessment or after the content as a post-assessment checkpoint or test.
Learning map domain
The learning map domain defines a set of specialized topic references for structured learning content as learning objects and groups in a DITA map.
- A container to introduce and group learning objects into higher-level organizations, such as course-level, module-level, or lesson-level. A learningGroup can contain other learningGroup elements, allowing you to organize learning content at course, module, or other higher-levels of hierarchy.
- A container to introduce and group the topic references for a learning object.
- A topic reference to a learning plan topic.
- A topic reference to a learning overview topic, which introduces the learning object.
- A topic reference to a learning assessment topic that is used as a pre-assessment.
- One or more topic references to a learning content topic, or a topic, task, concept, reference or other specialized topic.
- A topic reference to a learning summary topic.
- A topic reference to a learning assessment topic that is used as a post-assessment.
Sample Learning DITA Map
<map> <learningGroup href="langref/lc_spec_sample_rlos.dita"> <learningObject href="langref/lc_spec_ProbUnstructuredTop.dita"> <learningOverviewRef href="langref/lc_spec_ProbUnstructuredOverview.dita"/> <learningContentRef href="langref/lc_spec_ProbWithUnstructured.dita"/> <learningPostAssessmentRef href="langref/lc_spec_ProbUnstructuredAssess.dita"/> <learningSummaryRef href="langref/lc_spec_ProbUnstructuredSummary.dita"/> </learningObject> <learningObject href="langref/lc_spec_top_beneoverview.dita"> <learningOverviewRef href="langref/lc_spec_BeneStructuredOverview.dita"/> <learningContentRef href="langref/lc_spec_BeneStructured.dita"/> <learningPostAssessmentRef href="langref/lc_spec_BeneStructuredAssess.dita"/> <learningSummaryRef href="langref/lc_spec_BeneStructuredSummary.dita"/> </learningObject> <learningObject href="langref/lc_spec_WHYDITALearnOverview_top.dita"> <learningOverviewRef href="langref/lc_spec_WHYDITALearningOverview.dita"/> <learningContentRef href="langref/lc_spec_WHYDITALearning.dita"/> <learningPostAssessmentRef href="langref/lc_spec_WHYDITALearningAssess.dita"/> <learningSummaryRef href="langref/lc_spec_WHYDITALearningSummary.dita"/> </learningObject> <learningObject href="langref/lc_spec_LearnSpec_top.dita"> <learningOverviewRef href="langref/lc_spec_LearnSpecOverview.dita"/> <learningContentRef href="langref/lc_spec_LearnSpec.dita"/> <learningPostAssessmentRef href="langref/lc_spec_LearnSpecAssess.dita"/> <learningSummaryRef href="langref/lc_spec_LearnSpecSummary.dita"/> </learningObject> </learningGroup> </map>
Learning interactions domain
The learning interactions domain defines a set of basic learning interactions elements as a DITA domain. This domain is made available in the learningAssessment topic type.
Learning metadata domain
The learning metadata domain defines a set of basic learning metadata elements as a DITA domain. In a topic, lcLom is available in the prolog/metadata. In a learning map, lcLom is available in the topicmeta. Cascading of learning metadata between topics and maps follows the rules for Metadata inheritance between maps and topics from the DITA 1.1 Architectural Specification.
- lcLom, makes the lom elements available in the learning topics
|
OPCFW_CODE
|
I just realized I hadn’t posted an explanation on why the slang entry isn’t up yet (I thought I had, my apologies ). This is taking a long time to compile as many slang terms are just other uses for a word, so this means I need to read through the entries, see if the relevant part is noted in the definition and the like. For some words this goes quickly, others means reading through page-long definitions to see if it’s mentioned, and while my reading speed is getting up there, it’s still a far cry from my native reading speed, so this takes a loooong time. No promises on when it will be up, I apologize
Generally on first impression after just entering some entries from this wiktionary page, the sankoku seems to come out ahead as expected, but this is just a relatively quick look at things, not a thorough analysis
Hi there, @danyramdas! You just caught me in a hectic week, so I will need to answer your questions in like a week or so, my apologies
As for the wiki thing, I took a quick look and it looks great I’m not too familiar with GitHub I must admit, but I saw in the readme you still need permission you said. If it helps you or other people feel free to use anything I published here, but I saw you also included some posts from others who post here, I can’t give you permission in their names, so please tag users you want to quote before including them
Would you also mind adding a link to this thread in the readme? If people should have questions, I must admit I will probably not be keeping to close an eye on the GitHub (can they even ask questions on GitHub?), so I would prefer they know they could come here to ask questions if they were so inclined.
I can answer one of your questions though, namely when you should buy one. As you are still quite new to Japanese, honestly kotobank.jp should be able to cater to nearly all your needs, if you do decide to buy one, I’d suggest reading through this post : Monolingual dictionary corner - #86 by matskje It’s still accurate to the information I would currently give. Have fun learning, you got this! (And as always, feel free to post any questions here, my apologies that your other questions will take some time
I think I’ve decided that this year I will buy 明鏡国語辞典, as a good starter one, and 大辞泉 for its many entries and coverage of idioms.
Thanks to both @anon3564849 and @GrumpyPanda for your guidance. Without that I probably would have done the same thing that I did last year: not buy because I had no idea how to pick which one(s) to buy.
Right now, I even have an idea of which ones I might buy next year, which at this moment would be 三省堂国語辞典 (for the modern/slang approach) and 新明解 (due to GrumpyPanda’s recommendation).
I considered tightening my budget to perhaps get all four this year, but I haven’t used monolingual dictionaries much at all (only on a lark every so often), so it felt more prudent to get a couple so I can start easing in, and if I’m using them regularly by next year, then perhaps I can add some more to get more variety and such. ^^
I got the 三省堂国語辞典 for its modern approach and idiom inclusion, but it stumbled on the very first word I needed to look up: 拓ける. (Built-in Daijirin didn’t pick it up either, but maybe it was a conjugation problem, as it’s pretty inflexible). In this form it isn’t in Jisho either, but 拓く is (not in 三省堂 though). So I guess I also need a dictionary with a significantly higher headword count.
I bought this from Verasia and it arrived today. It has everything as shown on Amazon https://www.amazon.co.jp/-/en/金田一-秀穂/dp/4053049369 including the little booklet explaining how to use it and the kanji posters. The materials are really high quality. I saw some Amazon reviews complaining about the thin pages, but for a dictionary I thought they were fine, definitely thicker than an adult dictionary. Maybe they’re just thin for 7 year olds?
I love it and I’m so glad I got it! It cracks me up that there is a box, a dictionary with a sturdy cover, and the book is slipped into a plastic cover. This dictionary is bulletproof
At the bottom of each page is a little fun fact, e.g., how a word is used or a riddle. Very fun! In addition to all of the preamble material and kanji appendices, there are little snippets here and there with maps, images, or diagrams. It feels like a lot of accessible material that I’ll naturally delve into as I look up words. It’s great for someone at my level (more on that below) that I’m not going to outgrow anytime soon.
I can imagine someone who has front-loaded kanji learning might not be so enthusiastic as everything has furigana. This is also more for the “likes to read books in paper” crowd.
My level for reference - I’ve covered most of N5/N4 grammar, have done weekly speaking practice with a Japanese friend for 4 years. I do most of my reading on Satori, but I can read 2nd grade books, it’s just a lot more work (I get the grammar but I’m missing loads of vocab, 6-10 lookups per page). Shirokuma cafe is accessible (though the puns are obviously a challenge). I study grammar/vocab/kanji etc in parallel, and am at 270ish kanji. So for the way I’ve approached my studies, I need furigana resources and it doesn’t bother me at all. I get kanji reading practice in Satori where I gradually add known kanji so eventually I’ll get over this hump.
For absolute beginners, this dictionary would be too much of a slog. You need basic vocab and grammar to understand it. I’d say I’ve only recently reached the language proficiency to make this dictionary worth it, so perhaps this is easiest to recommend to someone who is just at the level where doing an ABBC or BBC read is challenging but alright. You can read a page and are pretty sure you got it right, but you check on the book club discussion and still learn a bit more, but you aren’t reliant on the club to get you through.
I can’t speak for how useful this dictionary would be at the intermediate book club level or N3+ level, or WK level 60 as I’m not there yet myself!
Monokakido has one called Dictionaries, blue icon with some upright white lines (representing books?), from that app you can buy several different monolingual dictionaries. See this post: Monolingual dictionary corner - #23 by matskje (Obs! The icon shown in the screenshoot is an old one, that isn’t how it looks for me.)
|
OPCFW_CODE
|
Mastering the Art of DevOps for Embedded Systems: Standardizing the Developer Experience
Organizations developing systems for the intelligent edge face enormous costs to develop, test, and deploy these systems. DevOps practices can greatly improve efficiency and collaboration among developers, but its adoption is especially hard in this domain. The sheer number of technologies involved, spanning real-time embedded systems, reliable communication over unpredictable links, and centralized administration in elastic cloud infrastructure requires diverse expertise across many specialized teams and tools. In this blog post, we’ll review some ways teams can improve on developer efficiencies. By standardizing tools, processes, and the developer experience, embedded systems teams can benefit from automation, ease of use, consistency, and the reuse of valuable resources, configurations, components, and code.
Automation Enables Embedded DevOps
Automation plays a vital role in the successful implementation of DevOps practices. By automating repetitive tasks, developers can save time and effort while improving consistency and predictability, allowing them to focus on high-value activities. Automated testing and deployment processes enable continuous delivery and integration, helping teams release software updates faster without compromising quality.
Embedded developers know this, of course, but it can be difficult to find tools that solve embedded-specific challenges while fitting nicely into automated workflows. Embedded compilers may only be available in specific runtime environments, for example, and the broad system requirements may impose multiple, possibly conflicting, constraints in a single project.
Automation tools such as Jenkins are designed to help automate pipelines for building and deploying to streamline this process, but they also require configuration and maintenance that consumes valuable developer bandwidth. This is especially true in projects that need to support tooling in multiple runtime environments, because each environment may need to be maintained separately.
With an automation platform that supports all the diverse tooling required for their project, embedded development teams can achieve continuous integration, automated testing, and automatic deployment to all necessary environments, leading to faster and more reliable releases.
Standardizing Tools and Processes
A well-suited automation platform provides a foundation upon which tools and processes can be standardized, offering several benefits for embedded systems development. It simplifies onboarding for new team members, ensures consistency in development practices, and encourages collaboration by providing a common ground for everyone involved.
Standardizing on a continuous integration and continuous deployment (CI/CD) platform helps to streamline the build, test, and deployment process. By selecting tools that integrate well with the embedded systems development workflow, developers can automate repetitive tasks, such as compiling code, running tests, and deploying software to target platforms.
Given the diversity of intelligent edge systems, simply accessing testing and production systems can be challenging. Development hardware for embedded systems can be scarce, often requires manual configuration within a broader test harness, and may be isolated in specialized labs. Embedded teams need to ensure they have tooling that provides reliable access to testing and production systems from automated pipelines.
For cloud infrastructure and other more accessible environments, adoption of infrastructure as code (IaC) practices enables the automation of environment setups and configurations. By defining the infrastructure requirements in code, developers can easily provision and manage target platforms, reducing setup and configuration time.
Consistency and Ease of Use
Ensuring that tools, processes, and environments are easy to use and consistent for all team members is important for all development teams, and particularly so for embedded teams because they use so many specialized tools. Developers need a seamless, intuitive experience to maximize productivity, which can be achieved by adopting tools that have a user-friendly interface and provide clear documentation.
For developers, “user-friendly” often has multiple meanings. They might want an intuitive GUI for ad hoc and occasional use, a flexible CLI for use from scripts and orchestration tools, and a powerful API for programmatic use.
Standardizing the development environment and creating reproducible builds is also important. By leveraging technologies such as containerization with tools like Docker and Kubernetes, developers can create consistent and isolated environments across different stages of the software development lifecycle. This helps in avoiding configuration discrepancies, reducing the chances of errors, and improving collaboration among team members.
Reusing Resources, Configurations, Components, and Code
With so many specialized teams and tools in embedded development projects, duplicated effort is a real concern. Standardizing on a common DevOps platform provides a central location where valuable resources, configurations, components, and code can be shared and reused when appropriate. Future projects often leverage similar hardware platforms, for example, and business logic can be packaged into shareable components. By creating a repository of reusable assets, developers can leverage existing solutions, reducing development time, and minimizing the risk of errors.
Additionally, implementing modular design principles and developing libraries and frameworks can help create a catalog of reusable components. This reduces redundancy, eases maintenance efforts, and enables faster development cycles.
Achieving DevSecOps with Embedded Systems
Mastering the art of DevOps for embedded systems brings significant benefits to development teams. By standardizing tools, processes, and the developer experience, developers can improve efficiency, collaboration, and overall productivity. The focus on automation, ease of use, consistency, and reuse of resources, configurations, components, and code enables teams to deliver high-quality software more efficiently, meeting the demands of modern development for the intelligent edge.
Wind River Studio Developer is a full-featured DevOps platform for intelligent edge development, purpose-built to solve the challenges that prevent embedded systems teams from adopting DevOps. Contact us today to learn how Wind River Studio Pipelines helps organizations reduce development costs and accelerate time to market by standardizing tools, processes, and the developer experience.
About the author
Jon Jarboe is a Senior Product Marketing Line Manager at Wind River
|
OPCFW_CODE
|
As people return to campus, we are getting lots of questions about hybrid meetings. As there are no specific hybrid meeting rooms on campus (although some departments have set up their own), you may find yourself trying to arrange a hybrid meeting using only your own equipment, or the technology in a teaching space.
This article explains how DTS can help you with practical support and loan equipment, and also what is available to purchase on XMA to help transform your meeting experience.
Hybrid meetings work best where you have equal remote and onsite participants. It could be best that all attendees join the meeting either remotely or in person so that everyone has the same experience and are more likely to participate equally. If this isn’t possible, then here are a few ideas to equalize the experience:
One person meeting in a shared office
- Inform the people around you when you have a meeting
- Use noise cancelling headphones which will a) signal to those around you that you are busy or on a call, and b) cut down on background noise.
- Mute yourself if not speaking
If you end up with several meetings in a shared office, make sure you are all using headphones.
Meeting in a shared office (2-3 people)
- Find a space to sit together so you can share one display/camera if possible
- Use a USB/Bluetooth speaker to avoid issues with echo, feedback and lag (attach speaker to one device, keep the others on mute)
- If someone has a webcam, attach it to the laptop or monitor
Meeting space or room (4 or more)
If you have several people in the same meeting, consider using a meeting room or even a classroom. These will have a monitor display on the wall which will improve the experience.
- Connect a laptop to the monitor (you will need an adapter for a Surface) and turn it to face the room to use it as the camera/speaker
- A laptop camera can only capture two or three people. A separate webcam has a wider field of view.
- You may need to move chairs so that everyone is captured on camera (classrooms are set up for one person in front of several attendees)
- Share the meeting to the screen so that everyone can still be seen.
How to use a classroom tech table: https://blogs.reading.ac.uk/teaching-learning-facilities/user-guides/
How technology can help
Noise cancelling headphones
High-quality headsets using a noise-cancelling microphone will reduce background noise, allowing your caller to hear your voice more clearly. Many people use these to concentrate as well, and they can help indicate to others that you are on a call or busy.
Using a USB or Bluetooth conference speaker can be helpful when a small group of people gather in one room for a hybrid meeting with remote participants. All in-room participants can share a single speaker connected to just one of the laptops to eliminate the feedback from using multiple mics and speakers.
There are several available on the XMA Hub, for example the NeoXeo SPK 140 Bluetooth speaker: https://he.xma.co.uk/Product?pid=X130B13006
There are limitations with a laptop camera as it is only set up to show one person. An additional webcam will have a better picture and show a wider view of the room, for example the Microsoft LifeCam HD-3000 Webcam (https://he.xma.co.uk/Product?pid=T3H-00012), some also have microphone/speaker built in.
If you have a lot of video meetings with several people, you may want to get a portable video camera, such as the Logitech ConferenceCam Connect which is suitable for up to 6 participants in one space. DTS have a few of these we can loan you for one off meetings, or you can buy your own from XMA. https://he.xma.co.uk/Product?pid=960-001034
Support from DTS
Did you know that DTS can provide you with technical support for hybrid and live events? There are three types of event that we offer support for:
- Virtual Live Events and Meetings – Teams (remote support)
- Video Conferencing (remote support)
- Hybrid Live Events (present in room whilst streaming online, excludes teaching)
A technician will help set up and prepare for the live event or meeting and will be on call via Teams Chat for any problems during the event (this service is free for up to three hours, but must be booked in advance).
In addition to this, we also offer AV equipment/demo training and equipment loans.
You can request any of these services by completing a short form on the IT Self Service Portal:
Support for Audio Visual Equipment and Events: https://uor.topdesk.net/tas/public/ssp/content/detail/service?unid=3578b7fc528041f491b77026185fe538
Although the tools we are using to communicate may have changed, office etiquette hasn’t. Working in open plan offices and hot desks means it is even more important to respect and be mindful of your colleagues. You may need to agree rules about whether meetings are allowed in shared space, for example, or all share calendars to find out who else is attending the same meeting so you can arrange a room. *This will be covered by the “Ways of Working” strategy group.
Logitech guides to video meetings: https://www.logitech.com/en-gb/video-collaboration/resources/think-tank/articles/setting-up-video-meeting-space/introduction.html
Microsoft article on hybrid meetings: https://www.microsoft.com/en-us/research/project/the-new-future-of-work/articles/hybrid-meetings-guide/
TEL guides for using Teams for online teaching sessions (also applicable to hybrid meetings): https://sites.reading.ac.uk/tel-support/microsoft-teams-meeting-help-index/
About holding hybrid meetings (includes diagrams of possible set ups): https://u3asites.org.uk/code/u3asite.php?site=1144&page=0
LinkedIn Training courses: Hybrid meetings (linkedin.com)
|
OPCFW_CODE
|
The Qubit fluorometer is a device from Invitrogen for quantifying proteins, RNA, or DNA. It uses various Quant-iT assays, which contain sensitive dyes that fluoresce in proportion to the amount of protein, RNA, or DNA respectively. The dyes themselves can be used with any fluorescent plate reader, but the Qubit device is so easy to use and not too costly that I’d buy another if my current one broken rather than run the assays in our lab’s plate reader.
The Qubit is relatively cheap when compared with other fluorometers, because it uses colored LEDs as light sources. The device contains a Blue LED and a Red LED and detects the fluorescence with a photodiode. Each of the Quant-iT assay kits comes with a buffer, an assay specific dye, and two standards (high and low). Two standards are prepared with a fixed amount of nucleic acid or protein (depending on the kit; and provided with the kit) combined with buffer and dye. The samples to be quantified are prepared similarly using 1-20 ul of each sample combined with buffer and dye. All standards and samples are run in clear 500 ul tubes that are available from Invitrogen (and not too expensive).
To quantify your samples, the Qubit will first ask for your low concentration standard and then for your high concentration standard. It uses these two measurements to fit a standard curve. The parameters from this standard curve are then used to estimate the quantity of your samples using the fluorescence of your sample and linear regression.
To see how the Qubit works in action, Invitrogen has created a Qubit Virtual Demo.
Since each Quant-iT assay requires a particular dye and tube there is a fixed cost per sample; however this fixed cost is quite low (currently less than a dollar per sample). One extremely useful aspect of using this dye-based quantification is that the dyes are quite specific. For example if you extract RNA from cells, there is often some residual genomic DNA in your sample. The genomic DNA contributes very little fluorescence to your overall signal if you use the RNA specific Qubit dye, so you can quantify only the RNA (which presumably is what you’re interested in); similarly, if you run a 1st strand cDNA synthesis, you likely have RNA in your sample, but you can estimate the amount of DNA resulting from your cDNA synthesis using a DNA specific Qubit dye. This specificity is a big advantage over standard spectrometer based quantification of nucleic acids which can only lump the total amount of RNA and DNA into one quantity.
The dyes, particularly the dsDNA HS (HS = high sensitivity) dye, are very sensitive and allow quantification of DNA at concentrations far below the limits of a spectrophotometer (from 10 pg/ul to 100 ng/ul, whereas I find the lower limit for the Nanodrop spectrophotometer is around 20 ng/ul). The RNA dye isn’t quite as sensitive as the dsDNA HS dye, but still enables quantification of RNA far below the limits of a Nanodrop spectrophotometer (from 250 pg/ul to 100 ng/ul, again the practical limit for the Nanodrop is around 20 ng/ul).
The Quant-iT dyes are also quite insensitive to the presence of salts and other contaminants. I was never really able to accurately measure the amount of DNA remaining from my gel purifications with a spectrophotometer, but the Qubit can measure them just fine. With the Qubit, I’ve even been able to quantify extremely low concentration DNA samples like ChIP DNA for the first time.
Setting up a Qubit based quantification is slightly slower than using a spectrophotometer like the Nanodrop, because you have to prepare the dye and buffer for each sample, you have to prepare two standards for each set of samples, and you have to place each sample and standard in its own tube for quantification.
In addition, the Quant-iT dyes come as a 200x solution in DMSO which is stored frozen at 4C (DMSO melting point is 18.5C) and takes quite a while to melt at room temperature; so it takes a little extra planning to make sure the dye is ready for your experiment. Because of the extra time and cost for running the Qubit when compared with the Nanodrop spectrophotometer, I use the Nanodrop for pure samples (i.e. containing only RNA or DNA and not a mix of the two) above 30 ng/ul.
Another negative aspect of the Qubit is that it requires a lot of sample if the sample has an extremely low concentration. You prepare the quantification reaction with 1-20 ul of sample. For high concentration samples 1-2 ul is fine, but for extremely low concentration samples it is necessary to use 10-20 ul to have enough nucleic acid or protein to be detected by the fluorometer. Unfortunately, samples are typically extremely low concentrated when you don’t have much of them to begin with, so to quantify them with the Qubit you often need to use almost the whole sample. For example, to quantify my ChIP DNA I must quantify 10 ul of a 30 ul sample to estimate the total amount of DNA in the sample. But at least it is possible to quantify it.
My least favorite aspect of the Qubit is that you need to have at least a crude guess of your sample concentration to know which dye to use and to know what amount of sample to use in the quantification . otherwise you’ll go outside the bounds of the machine. For example, if 1 ul of a high concentrated sample contains more DNA per microliter than the high standard for calibration, you’ll get an out-of-range error. When your sample is above the range, you can either dilute it or quantify the sample with a spectrophotometer.
It turns out my earlier statement that you need to thaw the dye each time you use it was not correct. I received the following info in an email from Jill Hendrickson, the Qubit Project Manager at Invitrogen: >”I also wanted you to know that we did design the kits so that you can store the dye and buffer at room temperature. That’s because we knew it would be a pain to thaw the dye out. You just need to store the dye in the dark, like in a drawer.” After looking at my qubit reagent tubes a bit, I did see that although the large container the kit comes in says 4C the individual tubes have their own storage condition labels, and the dye label says to store at less than or equal to 25C (room temperature). This prevents having to plan your experiments around the melting time of the dye and makes the Qubit much easier to use.
The most important aspect of wetlab experiments in general and novel experimental technique development in particular is knowing what you have in your sample and how much you have of it. Since purchasing the Qubit, I’ve found multiple occasions where I previously had to guesstimate the amount of RNA or DNA in my sample but with the Qubit I could accurately and quickly quantify my sample. For the bulk of my wetlab work which involves high concentration RNA or DNA samples that are relatively pure, I prefer to use the Nanodrop spectrophotometer which is faster to set up and has no per-sample reagent cost. But for those cases where a sample isn’t pure enough or concentrated enough, the Qubit has become an essential part of my molecular biology toolkit.
|
OPCFW_CODE
|
Using virtual environments, you could handle these two software version dependencies separately, something that is not possible using just the system install of Python. But the only question I have is, the output from the cmake did not indicated the Python3 Interpreter: and numpy: as referencing the virtual environment. . Hi Adrian , I read this tutorial and thanks for sharing. You need the entire section filled out correctly, not just the interpreter. You can use any name you want for the virtual environment.
Also check that numpy points to our NumPy package which is installed inside the virtual environment. For instance, has the support for Tensorflow models improved at all? Please copy the following code into the newly created file and follow the instructions accordingly. You have to execute the following commands to install all the dependencies one by one. Looking for installation script for Ubuntu 16. It can visualize the relations between the various elements by means of include dependency graphs, inheritance diagrams and collaboration diagrams which are generated automatically. However, Release Mode is used to optimize the code so that the final project can be deployed into production. After compilation, you will be able to install the package using your distribution package management system dpkg, rpm 3- cmake: It is cross-platform build system generator.
If your compile chokes and hangs, it may be due to a threading race condition. I just googled the answer and this worked. To run Python 3 on Ubuntu 18. Compiling from source allows you to have full control over the install process, including adding any additional optimizations that you may wish to use. Installing Python 3 and venv Ubuntu 18.
The reason of modifying the specified line is the new path is changed from opencv to opencv2. Not only will you get a. It sounds like it may be specific to Anaconda. I have no idea what I am doing. Any help would be amazing! The closest example I would have is this somewhat related tutorial. Configure Anaconda to point to your Python libraries which is something I only recommend advanced users do 2.
Cmake complained about not finding them 2 It found the python interpreter I gave but at the end I cannot find the. I have a question about the new Ubuntu release. The method you choose depends on your requirements and preferences. We will be back with installation script for Windows. To find out how this object tracking example works, be sure to.
Step 3: Install Python Libraries sudo apt -y install python3-dev python3-pip sudo -H pip3 install -U pip numpy sudo apt -y install python3-testresources We are also going to install virtualenv and virtualenvwrapper modules to create Python virtual environments. Hi Adrian, Thank you so much for this great blog. But you will encounter few problems which all fall into the same category: the documentation refers to older libariries that are now renamed and upgraded. Maybe it helps for somebody. Thanks for another great tut. Within the virtual environment, you can use the command pip instead of pip3 and python instead of python3.
This will install Python 2. If you encounter compilation failures, you could try compiling with 1 core to eliminate race conditions by skipping the optional argument altogether. Update 2018-12-20: The following paths have been updated. I have a good nvidia graphics card so I prefer to use cuda also. When I open a new terminal, logout, or reboot my Ubuntu system, I cannot execute the mkvirtualenv or workon commands. Installed there the Ubuntu 18.
These two Python packages facilitate creating independent Python environments for your projects. You will need to do the same for Anaconda. Filed Under: , Tagged With: , ,. It has plugin-based architecture which means that new processing capabilities can be added simply. To find the number of threads compatible in your machine run the following command.
I often need a bib when I eat, though, so that might be a pattern. We will also discuss the dependencies along with their installation. In case of any queries, feel free to comment below and we will get back to you as soon as possible. If you need Python 2. When I open a new terminal, logout, or reboot my Ubuntu system, I cannot execute the mkvirtualenv or workon commands. You can verify that Python 3 is installed on your system by typing: sudo apt install python3-venv Once the module is installed we are ready to create a virtual environment for our TensorFlow project. On my Linux Mint effectively Ubuntu 18.
Qt is not a programming language on its own. This installation is well suited for beginners and those who want to complete their job at the earliest. This tutorial describes how to install TensorFlow on Ubuntu 18. It gives you the ability to define flags in the source file. Is there anything special about Linux Mint and using Anaconda? There can be many types of mode present in a project. Verify by running pip freeze , and ensure that you see both virtualenv and virtualenvwrapper in the list of installed packages. Step 10: Relocate the opencv4.
|
OPCFW_CODE
|
I enjoy watching baseball. I’d call myself a casual fan, not because I only watch when they’re good, but because I’m not one of those folks who can tell you all the reasons why they are good or what they should do to get even better. I just like watching baseball. It’s a reliable presence with an even tempo and also there’s a lot of statistics.
As the Twins have been leading their division, league and even all of baseball quite frequently this season, my cousin started tweeting whenever they had the best record in baseball. Of course that got tedious so he inquired about automating it. I thought that sounded like a fun project. And if something is worth doing, it’s worth overdoing, so I made a ruby gem that includes the leader executable that’s useful for determining whether or not your favorite team is leading their division, league or all of baseball. It can also report to you which team is leading baseball, a league, or a division as well as print out a nice leaderboard with sort and filtering options.
$ leader is minnesota-twins -l && t update "Today is the 27th of June and the Minnesota Twins have the best record in the American League."
Today, the 27th day of June, 2019:
The Minnesota Twins are the leaders of the AL C division.
The Minnesota Twins are the leaders of the AL.
The Minnesota Twins are not the best team in baseball. They are #2.
Tweet posted by @schlazor.
Run `t delete status 1144439948352692224` to delete.
Using it isn’t quite as straightforward as installing an app on your phone, but keep reading and I’ll walk you through it if you’re interested.
First, you’ll need to sign up for the xmlstats API. Click the request link on that page and do the things it asks you to do.
You’ll also need a Twitter account if you want to post updates to Twitter. You’ll also need to create a Twitter application so you can use the command line twitter client, t, but we’re getting ahead of ourselves a bit.
You’re going to need to install ruby if you don’t already have it. If you have a Mac, you might already have it. Try the following in a terminal:
$ ruby -v
ruby 2.3.7p456 (2018-03-28 revision 63024) [universal.x86_64-darwin18]
Okay, now that you’ve got a ruby, let’s install the gems we need:
$ gem install leaderbrag t
There’s a good chance that won’t work, though, likely due to you not having write access to wherever your ruby is configured to install gems. Mac/Linux folk have the option of resolving that issue with
sudo but a better idea is to change your
GEM_HOME environment variable to something in your home directory, like
~/.ruby/gems. Doing that in bash on Mac/Linux means editing
~/.bash_profile and adding a line like:
That’s on a Mac. Most Linux distros use
/home rather than
/Users, so take note of that if that’s what you’re using. Next, actually make that directory, then source that file to apply it:
$ mkdir -p ~/.ruby/gems
$ source ~/.bash_profile
Mac and Linux users, you should now be able to get those gems installed. Windows users, you’ll have to edit your environment variable by right clicking on your computer and going to properties and editing a thing with a thing and of course this is how you did it in Windows XP and I have no idea if it’s changed but also I bet RubyInstaller has defaults that make it so you can install gems without issue.
Once the gems are installed, we need to set some more environment variables, so Linux/Mac users will need to edit
~/.bash_profile again (or the appropriate config file for your chosen shell) and add something like this:
The first line adds executables provided with gems to your PATH, so adjust it to point to the bin directory inside wherever you decided to install gems to. The second is the API key you got when signing up for xmlstats. The third is your email address and is used so Erik, the guy who runs xmlstats, knows who to contact if you break something. Source your
~/.bash_profile file again to apply the changes to your environment.
Windows users, you’ll need to set environment variables too. That document is for Java and mostly talks about changing the
PATH variable but I bet you can figure it out.
OK! Now you can actually use the leader command!
$ leader find
Today, the 27th day of June, 2019:
The Los Angeles Dodgers are the leaders of the NL W division.
The Los Angeles Dodgers are the leaders of the NL.
The Los Angeles Dodgers are the best team in baseball.
What is this nonsense? Why aren’t the Twins leading all of baseball? Well sadly we haven’t been leading all of baseball since the 19th of June. Anyway, check out leaderbrag’s README for more examples and/or consult the extremely adequate help provided with the tool by invoking the -h argument.
Next we have to configure the t application so we can post to Twitter. This is pretty well documented over in their README, except that you’ll have to edit one of the files in the twitter gem until #878 is fixed.
Assuming you get that working, you’re all set. Write yourself a script or just modify my prior example, then call it from
at or Task Scheduler or whatever it’s called on Windows and you too can automatically tweet whenever your favorite baseball team is better than some or all of the other teams.
Oh also if you run into rate limit errors, as suggested in the README you can install and run redis and set the
XMLSTATS_CACHER environment variable to ‘redis’ to cut down on the number of requests made to xmlstats.
|
OPCFW_CODE
|
Jakenovel Divine Emperor of Death webnovel – Chapter 1301 – Minded My Own Business? opposite crayon quote-p2
Novel–Divine Emperor of Death–Divine Emperor of Death
Chapter 1301 – Minded My Own Business? pause ubiquitous
‘You will not likely enable a lot more faults to take place…!’
“Sure…” Tina Roxley uttered, feeling ominous another 2nd as she could feel him suddenly turn into p.i.s.sed off for an unfamiliar reason.
Whether it weren’t for Aurelius defending her, Tina Roxley would’ve been pained because of the reduction or perhaps endangered. Nonetheless, each will believed it was connected with a lot of people planning to make Tina Roxley their woman.
“Aren’t you gazing for too much time?” Tina Roxley expected as she cast a gaze back again at him while jogging, her phrase icy.
However, his iced look faded as his concept has become baffled as he observed Tina Roxley tremble while using her increased hands and fingers back in her bosoms like keeping a thing precious, her eyes quickly being moist before tears begun to hastily leap decrease her cheeks.
Eventually, he saved the distressed talisman last his spatial ring, listening to his brother’s thinking.
Guardsmen Of Tomorrow
Davis implemented Tina Roxley across lots of winding tracks. He calmly adhered to her, but his view couldn’t support but drop on her b.you.t.t that had been awfully the same as Evelynn’s. He suddenly experienced nostalgic about the 1st time he fulfilled this female. In those days, he has also been subconsciously viewing her b.u.t.t.
“Who declared that this has been to suit your needs?”
Brandis Mercer, who has been none the more intelligent, has also been immensely anxious when he echoed, “I’m planning to attentive the Thousand Product Palace. They should be able to take care of this unidentified cultivator!”
Hence, whether it have been just a couple questions, so whether it is!
“Minded your own company, you say?” Davis’s term has become frozen beneath the face mask because he reduce her short.
He instantly required out a problems talisman, hunting like he was approximately to grind it.
It was like keeping an occasion bomb without having display, not being totally sure actually if the timer would established off to produce a ma.s.sive explosion!
“So who was it that ‘minded their own personal business’ if they decided to cast an dreadful headache to that youngsters, even moving with regards to to seduce that youngsters by using the Mystic Diviner as part of his desires…?”
Davis’s speech was icy because he stretched her phrases while unamusingly smiling at Tina Roxley.
“Oh, you are doing fully grasp…” Tina Roxley giggled as her bosoms shook underneath her crimson robe. Her amethyst eyeballs performed a harmful objective she would remove themselves if he even did such as getting a step forward. It provided for a caution that she would perish before he got to know the solution to the questions he got under consideration.
Her cherry lips quivered as being a crazy smile emerged on the encounter.
“Sure…” Tina Roxley uttered, experience ominous the subsequent next as she could good sense him suddenly grow to be p.i.s.sed off for any not known explanation.
Hence, whether or not this ended up only a couple of inquiries, so whether it is!
Davis’s amused concept left his experience because he seen that the formation’s energy had not been redirected at him but her alternatively!
“Who claimed that that was for you personally?”
Davis could instantly odor fragrant scent the moment he inserted the room, and also the primary believed that inserted his intellect was the sorts of poison which could possibly fit this perfume and found that there had been several of these based on the expertise.
However, an instant after, he saw that he was overthinking because the perfume neglected to a single thing to him. It was subsequently merely the perfume of any woman’s bedroom, particularly…
Davis’s speech was icy while he stretched her words while unamusingly smiling at Tina Roxley.
|
OPCFW_CODE
|
Disabled "next" button still cycles slides, even though end is reached
When slidesToShow is greater than 1, and infinite is false, clicking on the Next arrow continues to set the slick-current class on each successive slide, even if the Next arrow has class slick-disabled
====================================================================
http://jsfiddle.net/jonscottclark/etqegmxa/1/
====================================================================
Steps to reproduce the problem
Set slidesToShow to a value greater than 1
Set infinite to false
Open up your debugger and inspect the .slick-track element, so that you can see each slide.
Click the Next arrow and notice how the .slick-active class gets placed on the left-most slide.
When the last slide is in the right position, and the Next arrow is .slick-disabled, click it again, and notice how the .slick-current class gets applied to each slide, until the last slide is reached.
Now, click the Previous button. Notice that the slides do not move, but the .slick-current class traverses backwards through the .slick-slide elements.
====================================================================
What is the expected behaviour?
When the Next arrow is disabled, no click behaviour should be executed.
====================================================================
What is observed behaviour?
See steps above.
The negative side effect of this is that you need to click the "Previous" button multiple times (depending on how many times you click the .slick-disabled Next button) before the slider moves backwards again, because slick has moved the .slick-current further ahead than it needs to be.
====================================================================
More Details
Which browsers/versions does it happen on?
latest Chrome
Which jQuery/Slick version are you using?
Forked jsfiddle version
Did this work before?
I initially noticed this issue on my fork of slick that was untouched since early January 2016, so the issue has likely existed since then.
Hey @jonscottclark the slick-disabled class is just for styling--it doesn't programmatically prevent sliding. To do that you have to hack the JS a bit. I have an example that I made a while back. No promises on it being compatible with the latest release though. There's an interesting discussion about this matter on #1138 Hope that helps!
Hey @leggomuhgreggo,
Thanks for getting back so fast, and for your proposed solution.
However, I still think this is a bug, because it doesn't satisfy the expected behaviour of this plugin.
In my fiddle, http://jsfiddle.net/jonscottclark/etqegmxa/1/, click Next until the slides showing are Slide 5 and Slide 6.
Then click the Next arrow again. (Expected behaviour is that slick should not do anything under the hood. The user doesn't see any behaviour).
Now click the Previous arrow. (Expected behaviour is that the slider should move the slides backwards on the first click). The slides do not scroll backwards on the first click. This is where I believe this is a legitimate bug.
This problem gets worse if you set slidesToShow to a higher value, like 4 or 5. You can then potentially click the disabled Next button many times, and it would take 3 or 4 clicks, respectively, on the Previous button before it actually shifted the slides backwards again.
The same mechanism that's preventing the set of slides from continuing to change their position after all of the slides are visible, and placing the "current" slide in the left-most position, should also ensure that the .slick-current class doesn't cycle.
Ahh okay my mistake, this is a duplicate of #2146 annnnnd here's the PR for it #2152
Great! Sorry for the dupe.
|
GITHUB_ARCHIVE
|
VPython Inheritance
I'm currently trying to make a class for the sole purpose of quickly creating a VPython object and appending additional values to the object. VPython automatically creates an object with such values like position and dimensions. However, I also want to add variables such as physical properties of the material and momentum. So here's my solution:
class Bsphere(physicsobject):
def build(self):
sphere(pos=ObjPosition, radius=Rad,color=color.red)
With physicsobject looking something like this:
class physicsobject:
def __init__(self):
self.momentum=Momentum
Essentially, I want this to still retain the original properties of the VPython sphere() object while adding new variables. This actually works initially, the object renders and the variables are added. But now, I have no way of changing the VPython object. If I type in:
Sphereobj.pos=(1,2,3)
The position will update as a variable, however, VPython will not update the rendered object. There is now a disconnect between the object and the rendered object. Is there any way to inherit the rendering aspects of a VPython object while creating a new object? I can't simply use
class Bsphere(sphere(pos=ObjPosition, radius=Rad,color=color.red)):
self.momentum=Momentum
and there isn't much documentation on VPython.
I don't use VPython. However, from the look of it you are inheriting the property of physicsobject and not sphere. My recommendation is try this:
# Inherit from sphere instead
class Bsphere(sphere):
# If you want to inherit init, don't overwrite init here
# Hence, you can create by using
# Bpshere(pos=ObjPosition, radius=Rad,color=color.red)
def build(self, material, momentum):
self.momentum = momentum
self.material = material
You can then use:
myobj = Bsphere(pos=(0,0,0), radius=Rad,color=color.red)
myobj.pos(1,2,3)
However, I recommend overwrite the __init__ method in your child class, providing you know all arguments to declare in the original sphere construct.
I would recommend you go through the tutorial from learnpythonthehardway on inheritance, in particular overriding. The concepts are super important to understand and will help you avoid bugs later on.
from visual import *
class Physikobject(sphere):
def __init__(self):
sphere.__init__(self, pos = (0,0,0), color=(1,1,1))
self.otherProperties = 0
I think this one helps - the question might be old be people might still think about it.
I am a big vpython user and I have never used stuff like this but I do know that vpython already has the feature you are trying to implement.
===============================Example====================================
from visual import *
myball = sphere()
myball.weight = 50
print (myball.weight)
This code creates a ball then initializes a variable called weight then displays it.
The wonderful thing about VPython is that you don't need to do that.
VPython does it for you!
This is all you need to do:
variable_name = sphere()#you can add pos and radius and other things to this if you want
variable_name.momentum = something
You can easily insert this into a function:
objectstuffs = []
def create_object(pos,radius,color,momentum):
global objectstuffs
objectstuffs.append(sphere(pos=pos,radius=radius,color=color))
objectstuffs[len(objectstuffs)-1].momentum = momentum
That function is definitely not the best thing to use in every case, but you can edit the function easily, it was just for sake of example.
Have fun with VPython!
|
STACK_EXCHANGE
|
New in version 2.4.2c
January 5th, 2015
- This release fixes a bug with the registration CAPTCHA, and a bug changing the user pending status.
New in version 2.4.1 (February 17th, 2014)
- This release updates the Twitter bootstrap to version 3.1.0.
- A new admin menu option shortcut for viewing all recent tickets.
- A few minor bugs were fixed.
New in version 2.4 (February 3rd, 2014)
- This release introduces the ability for a file to be uploaded with a ticket submission.
- The navbar has been changed to an even more mobile friendly version.
- There are many interface tweaks. Updated CAPTCHA and ezsql libraries.
- Please note: the fhd_config.php file has changed to allow for control of file uploading.
New in version 2.3.2a (January 30th, 2014)
- A bug where changing a call status to deleted would cause an error was fixed.
- Some no-longer-needed style sheet includes were removed.
- Some minor style and format issues were fixed.
New in version 2.3.2 (January 6th, 2014)
- This version updates the user interface to Twitter Bootstrap to 3.03 (newest version), and also adds the required license to re-distribute bootstrap within the project.
New in version 2.3.15 (July 15th, 2013)
- This version adds a minor update to the ezSQLPHP/MySQL db class.
- You only have to update includes/ez_sql_core.php and ez_sql_mysqli.php for a minor bugfix in the library.
New in version 2.3.1 (June 10th, 2013)
- This version upgrades ezSQL to MySQLi.
- The CAPTCHA has been updated.
- Live validation has been added on register and user call add.
New in version 2.3 (June 3rd, 2013)
- A major interface update optimizes the software for mobile devices.
- Many small bugfixes, including correcting the admin email notification.
- Passwords are now case sensitive for increased security.
- No database changes.
New in version 2.21 (May 10th, 2013)
- This version checks for the fhd_config.php file to avoid throwing an error to the user if it does not exist.
- If the configuration file is not found, it provides instructions on how to proceed.
- If you already have the software up and running, there is no benefit to this minor upgrade.
- The only changed file is index.php.
|
OPCFW_CODE
|
"Bad Request: USER_IS_BOT",
i tried to create a new sticker pack its giving me
{
description = "Bad Request: USER_IS_BOT";
"error_code" = 400;
ok = 0;
}
and other api have no issue regrading that user id
You can't pass a bot's user_id to createNewStickerSet. You must specify a user_id of a user, using the bot, on behalf of which the sticker set will be created.
You can't pass a bot's user_id to createNewStickerSet. You must specify a user_id of a user, using the bot, on behalf of which the sticker set will be created.
but for uploadStickerFile its working fine and i am using bot id as user id
and how we get user_id if a user as you commented
but for uploadStickerFile its working fine and i am using bot id as user id
and how we get user_id if a user as you commented
Bots can create sticker sets only on behalf of a user. The user will be able to manage the sticker set through @stickers bot. This is up to you to know, for which user you are creating the sticker set.
Bots can create sticker sets only on behalf of a user. The user will be able to manage the sticker set through @stickers bot. This is up to you to know, for which user you are creating the sticker set.
how do we get user_id to which we can create sticker set via bot
https://core.telegram.org/bots/api#createnewstickerset by using this api
how do we get user_id to which we can create sticker set via bot
https://core.telegram.org/bots/api#createnewstickerset by using this api
The bot must know on behalf of which user it wants to create a sticker set.
The bot must know on behalf of which user it wants to create a sticker set.
Your question is like "How do we get chat_id to which we can send a message via bot by using https://core.telegram.org/bots/api#sendmessage?" And the answer would be "Use chat_id of the chat to which you want to send the message."
Your question is like "How do we get chat_id to which we can send a message via bot by using https://core.telegram.org/bots/api#sendmessage?" And the answer would be "Use chat_id of the chat to which you want to send the message."
my requirement is to send stickers from my ios native app to telegram app
so i will use createNewStickerSet api but for that user id is required
but if i use that bot id it will showing error
so how can i achieve this can you explain in brief?
my requirement is to send stickers from my ios native app to telegram app
so i will use createNewStickerSet api but for that user id is required
but if i use that bot id it will showing error
so how can i achieve this can you explain in brief?
You don't need to create a sticker set to send a sticker. Any webp file inscribed in 512x512 square is shown as a sticker.
You don't need to create a sticker set to send a sticker. Any webp file inscribed in 512x512 square is shown as a sticker.
6677be47862043e
6677be47862043e
|
GITHUB_ARCHIVE
|
I realize I'm jumping ahead a bit, but has anyone attempted to run CoronaCards through Virtual Studio on the Windows 10 Technical Preview?
Jump to content
Agreed Larry. I will be using VS 2013 (Community; not trying to shell out right now), so we'll see how it goes. The move to Win10 was error-free to Microsoft's credit. I did not expect that.
I'll report back with any hiccups or updates as I move through the process. If anyone has any questions or would like me to test something out with CoronaCards on WP development with Win10, post here and I'll get it added to the list.
@Alex, the Corona University videos we did all used Windows 10 Technical Preview.
Not sure how long the Preview will last, but it works great as a way to set up a "no cost" development environment.
Well, I hit the first hiccup: Hyper-V Virtual Machine Management service seems to be either be missing or disabled on my Win10 box, as I am not able to debug on an emulated device. Virtual Studio keeps telling me that I need Windows 8 Professional.
I can bring up my services window, but I can't click on anything within it. @Charles, did you encounter this issue?
Alex, Microsoft document's the minimum system requirements for the WP8 emulator here...
1) You need to run a 64-bit Windows operating system.
2) Your CPU must support a hardware feature known as SLAT. (aka: "Extended Page Tables" on Intel; aka: "Nested Page Tables" on AMD.)
3) You should also have at least 4 GB of RAM.
I suspect #2 up above is the problem. Your CPU might not support SLAT. Microsoft provides instructions via the link below on how to enable SLAT in the BIOS. They also provide a downloadable command line tool to identify if your hardware supports it too.
You can also purchase a WP8 device without a contract for cheap on Amazon. For example, I've found a Nokia Lumia 520 (no contract) for $40 on Amazon here...
Oh and based on the statistics that I've seen, that Lumia 520 is currently the most popular WP8 device. Low-end, low-resolution, low-memory WP8 devices are the most popular WP8 devices, so, it would be good to test on them. Especially to make sure that you're not exceeding the max memory limit for you app (worst case is 150 MB) and to ensure that your app performs well on low-end CPUs.
And in case you're interested, I usually go to AdDuplex's blog (link below) to see what the current WP8 model and OS version distribution is.
I hope this helps!
@Joshua, definitely helps a bunch! I found some useful resources regarding enabling Hyper-V on my particular laptop model, and I'm running through those now. I'll report back on my success.
Thanks also for the cheap WP phone tip. I was looking at Newegg; I should have known to price-check on Amazon!
Issue #2: The service "Windows Phone IP over USB Transport (IpOverUsbSvc)" on my Win10 box isn't started. I did find it in the services dialog, but as I said before, I can't click on anything in this window so I can't start this service.
Has anyone run into this issue? At this point, I think it's closer to a Win10 problem (can't start services because I can't click on them) rather than a hardware problem.
You need admin permissions to start/stop Windows services. Perhaps you are not running as an admin then?
Go to the following folder in Windows Explorer:
Control Panel\System and Security\Administrative Tools
And then right click on "Services" and click on "Run as Administrator" from the popup menu.
Note: If you press "WindowsKey+S" and type in "Administrative Tools" in the search popup, it'll display a shortcut that'll open the Administrative Tools folder in Windows Explorer.
Joshua, thanks for the help on this. It's becoming apparent that my Windows 10 build doesn't behave like other folks. Here are the screens I have from my "settings" panel:
And here's the "system" entry:
I can't right click on anything in either window, and if I search for the "services" entry (which is how I found the window to begin with) I can't right-click on that either. As a matter of fact, I can't re-size the window, I can't minimize or maximize the window. I can only close it. I know my right-click button works, so that's out.
Am I missing something stupid? Does the fact that I can't right-click inside the "settings" window mean that I'm not logged in as an administrator? I'll try that now and report back...
I swear that I am not an idiot, but ALL OF A SUDDEN I can click inside of my "services" window. I'm thinking about packing it in today and calling it a bad job, because I'm swinging and missing all over the place.
I was able to get the service to start, and I'll report back once I confirm I can build to an .xap file. Thanks all for the help!
Great! Happy to help!
Also, I know that Charles was able to run the WP8 emulator on his Windows 10 preview machine. So, as long as your CPU supports SLAT, there is most definitely hope.
Perhaps Microsoft's Visual Studio installer doesn't automatically set up everything for you on Windows 10 like it normally does on Windows 8... because I don't remember having to manually enable the Windows service you mentioned. It just worked after installing it. I guess this is just part of the pain on jumping on the bleeding edge.
I guess so, Joshua. I've never been impressed with the early builds of any Microsoft OSes, and this one isn't changing my mind anytime soon.
One last question for the day: I'm finally trying to build my app, and it keeps failing because I don't have a Windows Phone connected to my machine. Do either of you know of a way to get around this? Is there a way I can emulate a WP device being connected, or perhaps just disable this pre-requisite?
If you just want to build a *.xap file, then click on "Build\Build Solution" from Visual Studio's menu. That will build the app without deploying it to a device or emulator. The *.xap file is typically outputted under your project's "bin" directory.
Also note that even though clicking the ">" toolbar button will cause a deployment failure due to a missing WP8 device/emulator, it'll still do the "Build Solution" step up above and compile/output a *.xap file under your "bin" directory as well.
And if you look at the documentation in the link below, you need to build for "ARM" when building for a device. Building for "x86" is for testing purposes only via the WP8 emulator.
Happy to help!
And I've got 1 more helpful tip, in case you don't already know it. When you install Visual Studio 2013, it comes with an "Application Deployment" tool that allows you or someone else to deploy a XAP to a WP8 device connected via USB. Just do a Windows desktop search for "Application Deployment" and it should pop right up.
Hit another snag, as my CoronaCards subscription for WP wasn't renewed with my Pro subscription. I just sent an email to support, so it should be resolved shortly.
That "Application Deployment" tip is definitely useful. Regarding the device, I was chatting with another WP developer, and they were saying that the 520 is good to test the bottom of the device curve, as you said, but they said a good amount of folks are using the 820, which has specs that are a bit better, but isn't being actively sold. A little food for thought for those looking to deploy to WP.
Our tech-support group is usually pretty fast in resolving these things during business days. So, you should here something fairly soon. Once they've synced your WP8 CoronaCards subscription with your Pro subscription, then just so you know, you'll be required to download a new license file with your updated authorization data.
If you need to test/debug stuff now, then in the mean time you can simply remove or exclude the license file from your WP8 project. That'll put Corona back into trial mode, which will show a "Trial" watermark onscreen. There is no trial expiration time for WP8 CoronaCards.
And interesting point about the Nokia 820. I would hope that people who are willing to pay extra for the better WP8 device models would make better potential customers too. Like you would expect someone who pays for a "gaming rig" quality PC would intend to do more with it than simply e-mail and Internet browsing.
Subscription issue already handled. Thanks Joshua!
I have to admit, I like the idea of Windows Phone, and the designs and user interface are very nice, but after using an iPhone for a couple of years (and having an S.O. who demands an iPhone) I can't see myself ever using anything but Android. That said, I am strangely looking forward to getting my 520 in the mail, dropping my SIM in and taking it for a spin.
Just Some FYI..
Here are a couple good articles about Windows Phone usages and devices Ram / Power etc..
Community Forum Software by IP.Board
|
OPCFW_CODE
|
Source: Deep Learning on Medium
Understanding metrics behind Jigsaw Unintended Bias in Toxicity Classification challenge
Originally posted on Jash Data Sciences Blog
What do you mean by ‘Unintended Bias’ ?
Kaggle held a competition that aimed at classifying the online comments on the basis of their toxicity scores. But somehow, the models ended up classifying the non-toxic comments as toxic. Comments mentioning frequently targeted minority communities/identities like ‘blacks’, ‘gays’, ‘muslim’, etc., were being classified as toxic, even when they were not of toxic nature.
For an instance the comment, “A muslim, is a muslim, is a muslim.” is not toxic, yet it is classified as toxic. All the comments related to one of these identities are grouped together as a subgroup. This gives us a set of subgroups, each related to one identity. Now, let us learn more about these Identity Subgroups.
‘Identity Subgroups’ refer to the frequently targeted words/groups (e.g. words like “black”, “muslim”, “feminist”, “woman”, “gay” etc).
Why does the bias exist?
Many comments that mentioned the identities that are targeted frequently are toxic. Hence, Deep Learning models learn to associate these identity words as toxic, essentially classifying comments that merely mention them as toxic comments.
The table attached below, is an example posted by Jessamyn West, on her twitter account. It is seen that the identity subgroups: man, woman, lesbian, gay, dyke, black and white have been interpreted as toxic. The sentence which has the combination of the three subgroups ‘woman’, ‘gay’, and ‘black’ has the highest toxicity rate.
The table reflects the error of the models that classified the subgroups as toxic, even when they were not.
Examples of mis-classified comments
Measuring the Unintended Bias
An Identity group can be defined as a bunch of comments that have some mention of a particular ‘identity’ in it. Everything that doesn’t belong to Identity Group goes to the Background group.
To obtain better results and reduce the bias, the dataset can be divided into two major groups — Background and Identity groups. Each group can be divided into two groups which contain positive and negative examples each. Therefore there are 4 subsets.
Next step is the calculation of Area Under Curve — Receiver Operating Curve (AUC-ROC). AUC — ROC curve is a performance measurement for classification problem at various thresholds settings.
Three AUCs to measure the negative/positive mis-orderings between the subsets are defined as follows:
a. Subgroup AUC — This calculates AUC on only the examples from the subgroup. It represents model understanding and performance within the group itself.
A low value in this metric means the model does a poor job of distinguishing between toxic and non-toxic comments that mention the identity.
b. BNSP AUC — This calculates AUC on the positive examples from the background and the negative examples from the subgroup.
A low value here means that the model confuses toxic examples that mention the identity with non-toxic examples that do not.
c. BPSN AUC — This calculates AUC on the negative examples from the background and the positive examples from the subgroup.
A low value in this metric means that the model confuses non-toxic examples that mention the identity with toxic examples that do not.
NOTE: Looking at these three metrics together for any identity subgroup will reveal how the model fails to correctly order examples in the test data, and whether these mis-orderings are likely to result in false positives or false negatives when a threshold is selected.
With this understanding, now we can calculate the final metric that measures the bias in the dataset.
Final Metric Calculation
These three AUC scores need to be combined in order to arrive at a final metric, to measure the bias. The final metric is calculated in the following manner:
final = (x * overall_auc) + ((1- x) * bias_score)
where, x = Overall Model Weight (which is take as 0.25 here)
Mathematically, the final metric is calculated with the below formula:
Following are the AUC scores calculated for each identity subgroups:
The final score metric calculation has 2 variables, namely:
a. Overall AUC — It is calculate by taking the ROC-AUC for the full evaluation set.
b. Bias Score — It is calculated by taking the average of the power means of all 3 submetric (Subgroup AUC, BNSP AUC and BPSN AUC).
Following is the code for calculating the power mean of each submetrics:
def power_mean(series, p):
total = sum(np.power(series, p))
return np.power(total / len(series), 1 / p)
With the value of p = -5, the power means of all submetrics are:
What is the difference between simple mean and power mean?
As seen above, the power mean is being calculated for all three submetrics. What if, instead of taking the power mean, we calculated the simple mean?
Simple mean is the power mean with p value = 1. Following is the table that contains 4 different data sets, their means and their standard deviations.
The aim is to get a higher accuracy for all the identity subgroups. Increasing the accuracy for few identity subgroups at the cost of others is not the motive. By doing this, the mean might remain same but the accuracy for few identity subgroups will be lower.
Therefore, taking the power means of all groups will help recognize the high range of scores. Additionally, a higher power value will punish the low scoring metrics severely. Hence, taking -5 as the power value, punishes the least scoring subgroup until it becomes better.
What if we see the above toxicity classification problem in a different domain, i.e., a different country/region? The identity subgroups that are often referred to as toxic, will change. For an instance, India will have casteism, religions, financial status, etc., as the identity subgroups. On the other hand, U.S. will have racist identity subgroups.
The overall ROC-AUC calculation wouldn’t help classify the text. It often ends up creating ‘false positives’ and ‘false negatives’. Therefore, we need the division of the data set in three submetrics as given above. This will help reduce the unintended bias.
Hope this article enriched your knowledge about the unintended bias!
|
OPCFW_CODE
|
The Asus EEE is without doubt an excellent machine, it may also hold some wider lessons for the future. It is lightweight, a convenient size (about the size of a book), lightweight (about 900g) and extremely versatile (see my previous posts). It feels very solid and would be perfect for someone who might have to travel a lot or attend a lot of meetings.
Working on the machine for a long period of time could prove a little uncomfortable, but you can easy hook it up to a standard PC monitor and keyboard for extended use. Practically everybody who saw it really liked it, and many of them said they would really like to own one. The price tag of around £220 puts it into traditional PDA territory, but this device can do so much more
The most interesting point about the machine for me was the scalability of the user experience, by this I mean that it can meet the demands of someone who has only basic computer skills, but can scale up to more demanding users. You don't have to coach anybody in how to use the machine, thanks to the fantastic interface which can switch between easy and traditional desktop modes, everybody can work with the machine straight away. This includes people used to MS Windows(tm) machines. It was very interesting watching people use a Linux machine with OpenOffice for the first time, nobody struggled, people could get on and use the machine straight away. I'm not going to be popular for saying this, but I think it is a lot easier to use than an Apple Mac(tm), particularly as it has two mouse buttons.
The EEE manages to be simple to use without dumbing down, you can still use a shell, you can install standard desk top software. You don't have to use cut down "mobile" versions of programs, there is no syncing with "desktop" PCs. Unlike a lot of PDA type devices I have used in the past I found it difficult to reach the limits of the machine.
The reason for this is not just good hardware design, but also the use of Linux. This is a very important key component in this machine, Linux scales well, you can run it on a set top box in your living room and then on everything up to a supercomputer. In a recent interview with Information Week, Linus Torvalds the founder of Linux said "regardless of where you want to put it, not only
has somebody else probably looked at something related before but you
don't have to go through license hassles to get permission to do a
pilot project". Linux can be seen in action in the One Laptop Per Child project too. Proprietary operating systems struggle to cope with this sort of hardware. Linux in the desktop is nothing to be afraid of, it very usable and is getting better all the time, projects such as Kubuntu and Ubuntu deliver people a real choice in their computing experience. Having choices is sometimes criticised, but choice is a good thing that can lead you to an operating system and software set that is most suited to you and your computer. If you don't want to make choices then accept the defaults, you can always change your mind later.
This is how computing should be, easy to use but without artificial constraints. For me the machine proves that Linux is ready for the desktop and ready for the masses.
|
OPCFW_CODE
|
[BUG]: Billing Plan override is not allowed due to insufficient permissions
Is there an existing issue for this?
[X] I have searched the existing issues.
🐞 Describe the Bug
I'm developing a NextJS app with the possibility of subscriptions via paypal, using this package.
I had a custom logic to apply coupons and promo codes that consisted on having coupons stored in a database and an input where the client inserted the promo code value. If the value inserted corresponded to any of the coupons available, it would override the price value in billing_cycles in the actions.subscription.create function (example below).
createSubscription={(data, actions) => {
return actions.subscription.create({
plan_id: planId,
plan: {
billing_cycles: [
{
...billing_cycles[0],
pricing_scheme: {
fixed_price: {
value:
!!appliedCoupon && !appliedCoupon.isExpired && appliedCoupon.isRedeemable
? formatPriceForPromoCode(
billing_cycles[0].pricing_scheme.fixed_price.value,
appliedCoupon.data.percent_off / 100
)
: billing_cycles[0].pricing_scheme.fixed_price.value,
currency_code: billing_cycles[0].pricing_scheme.fixed_price.currency_code
}
}
}
]
}
});
}}
This logic worked fine for a long time, and all of a sudden it stoped working throwing the following error: Billing plan Override is not allowed due to insufficient permissions.
Debug id: f910189d25f90
Does anyone know what the problem is and how to fixe it? Also if you know a better way to implement coupons in paypak I'd like to know.
Thank you.
😕 Current Behavior
No response
🤔 Expected Behavior
No response
🔬 Minimal Reproduction
No response
🌍 Environment
| Software | Version(s) |
| ---------------- | ---------- |
| react-paypal-js | |
| Browser | |
| Operating System | |
Relevant log output
No response
Code of Conduct
[X] I agree to follow this project's Code of Conduct
➕ Anything else?
No response
@bob-the-coder @aPortejoie I contacted paypal support and they told me the same thing they answered in this thread on stack overflow.
In the meantime, I've managed to solve the problem in my project, using the endpoint to create the subscription that we see in the docs (subscription create) directly in the createSubscription function of PaypalButtons as shown in the example below.
createSubscription={async () => {
const res = await paypalFetcher.post("/billing/subscriptions", {
plan_id: planId,
plan: {
billing_cycles: [
{
...billing_cycles[0],
pricing_scheme: {
fixed_price: {
value:
!!appliedCoupon && !appliedCoupon.isExpired && appliedCoupon.isRedeemable
? formatPriceForPromoCode(
billing_cycles[0].pricing_scheme.fixed_price.value,
appliedCoupon.data.percent_off / 100
)
: billing_cycles[0].pricing_scheme.fixed_price.value,
currency_code: billing_cycles[0].pricing_scheme.fixed_price.currency_code
}
}
}
]
}
});
return res.id;
}}
@lfernandes00 Am I correct to assume the payPalFetcher is sending the same payload to the endpoint before returning res.id?
i.e. There's no extra data you're sending them like payer information or such?
Thank you for this posts! I get error paypalFetcher is not defined , could you please explain, help to fix ? Thank you!
@estafaa This paypalFetcher it's just a middleware where I define the request headers and handle the response. You don't need to use it.
Check what it says in the paypal documentation about creating a subcription. There is enough info to make this work as we expected.
Docs: https://developer.paypal.com/docs/api/subscriptions/v1/#subscriptions_create
Closing issue - solution is provided above. If this is insufficient, please re-open.
|
GITHUB_ARCHIVE
|
package com.orlandovald.tree;
import com.orlandovald.tree.pojo.*;
import javax.inject.Singleton;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import static java.util.stream.Collectors.toList;
/**
* Service to handle Tree Nodes logic
*/
@Singleton
public class TreeService {
public static int MAX_CHILD = 15;
private final TreeRepository repo;
public TreeService(TreeRepository repo) {
this.repo = repo;
}
/**
* Retrieves all nodes
* @return
*/
List<Node> getFullTree() {
return repo.getFullTree();
}
/**
* Main request processor. It will route the request based on its type
* @param req
* @return
*/
TreeResponse process(TreeRequest req) {
switch (req.getType()) {
case NODE_CREATE:
return nodeCreate(req.getNode(), req.getCount());
case NODE_DELETE:
return nodeDelete(req.getNode());
case CHILD_DELETE:
return childDelete(req.getNode());
case CHILD_UPDATE:
return updateChilds(req.getNode(), req.getCount());
case NODE_UPDATE:
return updateNode(req.getNode());
case TREE_CLEAR:
return deleteAllNodes();
}
throw new TreeException("Operation not supported");
}
/**
* Deletes all nodes
* @return
*/
private TreeResponse deleteAllNodes() {
repo.clearTree();
return new TreeResponse(ResponseType.REFRESH_ALL_NODES);
}
/**
* Creates a node with the requested number of childs
* @param node
* @param count
* @return
*/
private TreeResponse nodeCreate(Node node, int count) {
final List<Integer> nums = generateRandomNumbers(count, node.getLowerBound(), node.getUpperBound());
node.setChilds(nums.toArray(new Integer[nums.size()]));
Node newNode = repo.create(node);
TreeResponse resp = new TreeResponse(ResponseType.NODE_CREATED);
resp.getNodes().add(newNode);
return resp;
}
/**
* Deletes a Node. Throws an exception if the node id is not found
* @param node
* @return
*/
private TreeResponse nodeDelete(Node node) {
Node deletedNode = repo.deleteNode(node.getId());
if(deletedNode != null && deletedNode.getId() > 0) {
TreeResponse resp = new TreeResponse(ResponseType.NODE_DELETED);
resp.getNodes().add(deletedNode);
return resp;
} else {
throw new TreeException(String.format("Unable to find Node with id of [%d]", node.getId()));
}
}
/**
* Deletes a child. Throws an exception if the node id is not found
* @param node
* @return
*/
private TreeResponse childDelete(Node node) {
if(node.getChilds().length != 1 || node.getChilds()[0] == null) {
throw new TreeException("Invalid delete number request");
}
int num = node.getChilds()[0].intValue();
Node updatedNode = repo.deleteChild(node.getId(), num);
if(updatedNode != null && updatedNode.getId() > 0) {
TreeResponse resp = new TreeResponse(ResponseType.CHILD_DELETED);
resp.getNodes().add(updatedNode);
return resp;
} else {
throw new TreeException(String.format("Unable to find Node with id of [%d]", node.getId()));
}
}
/**
* Generate a new set of child numbers for the given Node
* @param node
* @param count
* @return
*/
private TreeResponse updateChilds(Node node, int count) {
Node nodeToBe = repo.findById(node.getId());
final List<Integer> nums = generateRandomNumbers(count, nodeToBe.getLowerBound(), nodeToBe.getUpperBound());
nodeToBe.setChilds(nums.toArray(new Integer[nums.size()]));
Node updatedNode = repo.updateChilds(nodeToBe);
if(updatedNode != null && updatedNode.getId() > 0) {
TreeResponse resp = new TreeResponse(ResponseType.CHILD_UPDATED);
resp.getNodes().add(updatedNode);
return resp;
} else {
throw new TreeException(String.format("Unable to find Node with id of [%d]", node.getId()));
}
}
private TreeResponse updateNode(Node node) {
Node updatedNode = repo.updateNode(node);
if(updatedNode != null && updatedNode.getId() > 0) {
TreeResponse resp = new TreeResponse(ResponseType.NODE_UPDATED);
resp.getNodes().add(updatedNode);
return resp;
} else {
throw new TreeException(String.format("Unable to find Node with id of [%d]", node.getId()));
}
}
/**
* Generate {@code count} random numbers from min (inclusive) to max (inclusive)
* @param count Number of numbers to generate
* @param min Minimum value (inclusive)
* @param max Maximum value (inclusive)
* @return List of Integers of size {@code count}
*/
public List<Integer> generateRandomNumbers(int count, int min, int max) {
if(count == 0) {
return new ArrayList<Integer>();
} else if (count < 0 || count > MAX_CHILD) {
throw new TreeException(String.format("Child count should be between 0 and %d", MAX_CHILD));
} else if(max <= min) {
throw new TreeException("Upper bound should be greater than lower bound");
}
Random r = new Random();
return r.ints(count, min, max + 1).boxed().collect(toList());
}
}
|
STACK_EDU
|
Fix 260 character file name length limitation
The 260 character limit on file paths really gets in the way of having a deeply-nested project hierarchy. It's only there as backwards compatibility with the old school APIs, and has no place in any sort of modern development environment.
We should be able to work with file paths of whatever size we want.
Hello everyone and thank you for the feedback and for voting on this issue. We understand that this can be a frustrating issue, however, fixing it requires a large and complicated architectural change across different products and features including Visual Studio, TFS, MSBuild and the .NET Framework. Dedicating resources to this work item would come at the expense of many other features and innovation. Additionally, if we removed this limitation from our first party tools it will still likely exist elsewhere in Visual Studio’s ecosystem of extensions and tools. For these reasons, we are declining this suggestion and returning return everyone’s votes so they can be applied to other items. In the interest of delivering the most value to our customers we sometimes have to make very difficult cuts, and this is one of them.
Visual Studio – Project and Build Team
Geez... What year is this already? Feels like 1990s and tearing my hairs out even after windows 10
I think this limit is a usefull feature.
Try using Long Path Tool program. This is very useful.
I am using a software called Long Path Tool and it is working like charm, i have no problems in copying or extracting anything anywhere.
Bryan Rayner commented
Two days lost because I effectively can't use npm in a .Net MVC project. Microsoft has been getting so much better - Typescript is amazing. Why can't this be improved?
Mark Ward commented
Here is an example of Microsoft's own having to work around the limitation
"Avoid path too long errors when performing BuildV2 builds in MVC repo
- do not glob to the ends of the earth when looking for `project.json` files"
Johnny Willemsen commented
This is blocking our migration to msbuild, looks custom build steps add several log files which are now put in a directory that is longer then 260 characters forcing us to stay with nmake
Our company won't be renewing our Microsoft licenses. We won't be buying new versions of Visual Studio. We won't be hosting our web applications and web services on Microsoft technologies. We won't be developing in Microsoft Windows.
Get it yet, Microsoft? By being lazy, you are going to push away the customers who pay the big bucks for your software.
Unfortunately, this renders all of your effort invested into NPM integration in Node Tools for Visual Studio unusable, and therefore effectively worthless. Sorry.
This is a big pain **********, sorry! But we are in 2015. And the Explorer of Windows 10 dosen't support this and also Visual Studio 2015 does not support this.
NTFS etc. support lenght approximately 32.000 chars. Why will Micrososft not Work on that feautre? You have more then 100.000 employees and the price for Visual Enterprice is over 5.000 $
Micrososft will close this Issue, due the high investment cost? Not really!
This is not a cosmetic Bug this a major problem, also for the future life of Microsoft.
Please change your mind.
I hit this issue again today. I can't believe Micro$oft isn't fixing it.
Goran Obradovic commented
What is the point of voting for something if votes are not considered when estimating what is really important to your users and what is of most value for your customers? Obviously you have other tools to estimate what is most value for your customers and to me it is not very transparent how you do it.
Steve Sheldon commented
This problem is all throughout Windows, and I do have to agree with other comments here that eventually this is going to lead many companies to migrate off the platform.
I've hit this issue before with Visual Studio and TFS, surprisingly the project that seems to cause the most paint is a Platform & Practices assembly from several years ago which has like 80 characters in it's name.
We deal with a lot of digital content transmitted to us from vendors where filenames are highly descriptive and may be 100+ characters in length. Copying or moving these files around frequently results in hitting this limit. It's not a problem for our vendors as most are on Linux.
You know what's valuable to me? Fixing the broken system.
"This is something that makes you tear your hair out and buy a Mac." <- Exactly!
Basically, Microsoft has been too busy doing great things like:
-Removing Aero Glass (which most of their customers would actually prefer to have), and producing the ugliest popular (as in "forced on a bunch of unsuspecting users") UI to date.
-Making ugly/gaudy icons (and then receiving backlash and producing yet another set of slightly better icons) when the existing icons were perfectly fine/better than what we have now. (Those of you who didn't participate in the Preview missed out on this fiasco!)
-Mass rewriting core UI functions to slow, memory hogging, and inefficient XAML code when the native Win32 versions we had previously were fine. Most of us Insiders preferred the pre-build 9926 Start Menu, before it was converted to XAML.
-Adding spyware to their system (which many of their users don't want).
...to even care about actually important stuff like this, new technology support, better performance and security in their core OS. It's about time Microsoft should try to keep up with their competitors!
In the meantime, everyone running Windows 10, please open the Windows Feedback app, search for "max_path", and vote up the appropriate suggestion(s). It's the best we can do at this point.
Jack Mott commented
This is something that makes you tear your hair out and buy a Mac.
Igor Varfolomeev commented
This limitation is one of the main things, that make me think that it might be easier to migrate to another OS...
This issue MUST be fixed. It's absolutely unacceptable.
For me it's more important that all other things M$ developed since Windows 7 (e.g. more than Windows 8-10, all recent MS Office versions and all recent versions of MSVS all together). So, for me, "Dedicating resources to this work item would come at the expense of many other features and innovation" is not an answer.
Eventhough I appreciate your answer from October 2013, I'd like to give this is issue an up-vote.
Today is July 2015. Please keep communicating with your customers. Thank you.
As much as I understand this is a major architectural change, it is a fundamental issue that we don't expect to encounter on a post-1990's computer. Not allowing file path strings that are allowed by the operating system demonstrates a failure to accomplish the most basic aspect of file IO. The idea that other features would not come out as a consequence is unfortunately necessary. People would rather have something that works reliably than something with tons of broken features. Heaven forbid the new model of toaster lacks a laser light show because they had to go back fix the issue of it not actually making toast.
Bumping this up again , this issue comes up too often in the context of developing in vstudio especially for web/mobile development. Instead of blanket must fix everywhere , please take a single usecase and figure out how to make it work. The customer below working in node is an excellent place to start. I used to PM at msft and am well aware of the top down planning done there, its time to change or lose more developers to the droid / ios world.
|
OPCFW_CODE
|
Long time no see. First, I'm happy that now I'm working on Pronovix. My new colleagues are awesome. They are really lifehackers:) This company works with Drupal and made several cool stuffs. Until a month ago I'd never participated in any Drupal project. When I began learning from Pro Drupal Development I felt myself strong and energetic and agile. But when I got my first mission, it changed a little bit:) Thinking in a Drupal way is REALLY hard at the first time. You know. Everybody can make a full fledged Ajax site with a strong PHP back-end. But when your hands are tight, it could be a huge pain. And the point. In English. Yeah. That's cool. But I love challenges. Learning is fun. Drupal fun. English fun. Working fun. Thats I guess equal to FUN^4. Not bad.
The other interesting happenings was the installing war. God knows how many times I sad malicious words. Anyway. Its all because my old Asus laptop. There are 4 types of Linux distributions:
1 - not well supported, pain (60%)
2 - deprecated kernel, softwares (10%)
3 - cant detect properly my SiS videocard (10%)
4 - cant detect my Broadcom wifi card (20%)
I know. Every geek is saying that you can always configure your unix based distro. Yes. But I can't. My favorite screenplay:
1 - I need network access
2 - To enable the network I had to fire up my wifi
3 - To fire up my wifi i had to download the firmwares and stuffs from the network
Yes, thats recursive. Maybe some parts of my brain still on running. That cause me a big headache. But I couldn't have a rest because on windows the web development really slow. I tried the follow distros:
Ubuntu (4 version), Mint, SuSE, Debian, Arch, PCLinuxOS, Fedora, Slackware, FreeBSD, PcBSD.
But 2 days ago miracle happened. Just browsed www.distrowatch.com and stumbled upon of Mepis Linux. I've never heard about it. But found quite a lot update, so thought it worth a try. And then ... First it asked me what resolution I want during installation. WOW! Kick ass. Next, it discovered my wifi card and made enable networking. Crazy. Then it can connected to the Debian Lenny repository, so really up to date. Thanks God it uses KDE 3.5.10. Only one defect it has, cant render monochrome fonts. But the default way isn't so bad. I can handle it.
So, I suggest you trying it. At least once. Oh, I forgot the candy:) My webcam is working under Mepis. (Even XP is incapable to do this.) Run and download - http://www.mepis.org/
Last I want to share with you, I found a very delicious gummy candy:) And If I mentioned delicious, I registered on http://delicious.com/. I don't know why I did't take that before. To be honest, I feel shame. You know my bookmarking site idea in my former post. You can follow me there: http://delicious.com/itarato
And one thing at the end. I watched anime. This is a huge step to me. And I get used to drinking coffee in the weekends. Maybe once I'll become to a real geek:)
So. I'm hungry thus this post is over. I hope I could give you at least a small information (use Mepis).
Tomorrow I plan writing a blogpost about 'Top 10 thing which makes you more productive'. Because I'm always striving to be the most productive I can. Now is really bad.
And I wrote a small article about unit/functional testing (Selenium, PHPUnit...), but I need a little time to correct it. Maybe on Sunday.
|
OPCFW_CODE
|
Problem with socket-cluster client in excel custom function add-in
I'm trying to develop a excel add-in that will have mainly custom-functions that will read data from a socket-server and publish then Real time to excel cells.
The add-in requires authentication and it is implemented using a OfficeDialog and auth0 service..
The problem is that my add-in will use the socketcluster-client and when I instantiate the client in my functions.js, like this:
const SocketClusterClient = require("socketcluster-client");
let socket = SocketClusterClient.create({
hostname: "localhost",
port: 443,
path: "/excel/"
});
The add-in stops to work on excel desktop, but still works on excel web.
I can see the excel-web logging in to my socket-cluster server. so the problem is with the desktop version of the excel.
Can someone help me with this?
My first socket-cluster client uses async/wait my first thought was that, as the custom function runs in a different runtime than the rest of the office-js, this runtime might not support this feature, but i tried to make everything tun on the shared runtime with no success.
Any advices are very appreciated, as this is all new to me and i'm really having a tuff time trying to implement this.
Thanks
The site for the socket-cluster is https://socketcluster.io/
It is probably better not to use “localhost” but to instead use the actual server domain name. This might also require updating the manifest to include the domain in the list of allowed domains.
For debug purpose, you may can try followings:
Find HKEY_CURRENT_USER\SOFTWARE\Microsoft\Office\16.0\WEF\Developer\<solutionId>,
Add below value
Thanks for your repsonse. I'm going to change from localhost to the final domain as soon as my addin is working. The issue is not with the localhost now because i can use it on the web version...Also, all my sollutions alread have this key:value on the register... Thanks again
Sorry to hear that the solution does not work for you. Is there any error you may notice when you operate the websocket, such as open the websocket, send data to the websocket, etc?
No errors what so ever! It's been very hard for me to debug the add in while running it on the desktop, and the problem right now happen only on the desktop!
Have you tried enable runtime logging? For desktop please refer to the section "Runtime logging on Windows". Runtime logging delivers console.log statements to a separate log file you create to help you uncover issues.
I have used it before but to handle manifest issues.. it will log to theis file if i issue a console.log() on my js? Thanks for your help on this issue!!
Ya. I think so. You can use console.log() to uncover the issue.
|
STACK_EXCHANGE
|
My website has a nav section that is fixed to the top of the home page, as you scroll. The second section of the home page has several buttons that link to various other sections, which I’ve given anchor points. When I click on a button it scrolls to the top of the intended section, but the fixed nav section is covering up part of the top.
This is a common issue, and I’ve found several threads with seemingly simple solutions but I can’t get any of them to work on my site. I’m trying to use the top solution in this thread, which calls for adding the below code to the page, customizing the class and height/margin numbers, as well as “adding a new class with a ::before style to each element.”
The latter part is confusing me, and I’ve tried every iteration of what I believe that to mean, looked at the page source of the example project and tried to set up my code the same, and I think I just don’t know enough about inserting code like this to catch the bugs I’m creating.
Would anyone be willing to take a look at my site and try out this solution? Or, explain how I would go about “adding a new class with a ::before style to each element?” Below is my read only link. I’d greatly appreciate any help.
Hey @Stan - thanks so much for your response and taking the time to record this video. I understand your solution, but it requires me to add unwanted padding or margins to the top of my sections, changing the look and spacing of my site as someone scrolls throughout. That’s why I’m seeking to implement some sort of code, so I can maintain my current design.
Any insight on a solution that tells the nav to stop before it reaches the overlapping rest point?
hi @dankanvis Sorry that you didn’t find provided solution useful as it is IMO how things are mainly done.
You have mentioned change in current design. Yes standard behaviour of elements stacking order (flow) should be taken in consideration when you build (design) one-page website with links to section while nav is fixed.
But even after website is already build as in your case it can be changed to have perquisite design just by using padding and/or margin on element above and/or element below. When you combine values on both elements you will get exact result as before and your element stops under your navigation as expected.
As in programming is many ways how to achieve same result there is another approach by using Intersection Observer if you are comfortable with JS but this is IMO unnecessary overkill for this simple task.
Hope that you will find explanation and possible option helpful but feel free to use your favourite browser to search other possible solutions.
@Stan I hear you, I just don’t want to add extra padding, since my nav section is larger than the current padding (above and below combined) between my sections. I’d rather not shorten my nav either.
It seems like the leanest solution is still that bit of code I mentioned. Since my site is still very much a work-in-progress, I’m going to wait it out on that. Will look into Intersection Observer though, too. And will consider the padding solution if all else fails.
Hi @dankanvis It is totally fine with me. If you are convinced that using standard methods will have bad impact on user experience (UX) when visiting your website that user will notice that you have used CSS padding then use intersection observer. As I have mentioned there is many ways to achieve same goal.
hi @ Neva you been able to create your design with help of provided demo to study ? Do you need further explanation to help you understand how thing are done? If not feel free to close your request as solved.
Hey @Stan, I was able to get it working without code. I actually had my nav set to sticky before. Once I switched it to fixed, all I had to do was add a margin that is the height of my nav to the top my hero section. With the nav fixed, it recognizes that padding when reaching all of the anchor points throughout my page.
|
OPCFW_CODE
|
Hi! I’m Wolf Paulus. I’m a photographer, hiker, hacker, technologist, based in Ramona, California.
This is my journal, where I share quick thoughts and ideas on technology.
My photography portfolio can be found at https://wolfpaulus.photography
I’m appointed to the advisory committee at the University of California, Irvine, and occasionally speak at conferences and user groups on topics ranging from Embedded Technology to Emotional Prosody, and everything Voice and Conversational User Interface related.
Take a look at some slides from my most recent talks.
January 30-31, 2017 Conversational Interaction Conference
The CI Conference taking place at The Westin in San Jose, will discuss interaction with digital devices and applications by text or speech in natural language.
On January 30, I will be speaking at the conference on creating effective Conversational User Interfaces.
April 24-26, 2017 SpeechTEK
SpeechTEK 2017 is taking place in Washington DC and discussing “Speech as the Innovative Interface”. On April 24, I will be speaking at the conference on “The Conversational User Interface is a Minefield”
Many of the new concepts that I implementing, are communicated best through video clips or short films.
Take a look at some high quality short HD films that I have created over the last few months and years.
“Amateur Professionalism”, a concept used since 2004, describes an emerging sociological and economic trend of people pursuing amateur activities to professional standards. That pretty much describes how I look at my photography work today.
If you like, take a look at some of my photos and the stories behind them, at https://wolfpaulus.photography
The Servlet 4.0 specification is out and Tomcat 9.0.x will support it. However, at this point Tomcat 8.5.x is the best Tomcat version and it is supporting the 3.1 Servlet Spec.
Since OS X 10.7 Java is not (pre-)installed anymore, let’s fix that first.
The path to an acceptable Conversational User Interface is heavily mined and booby-trapped. Let’s not fall into the IVR trap, but create a humanized and personalized user experience. [IVR – Interactive voice response is a technology that allows a computer to interact with humans through the use of voice and DTMF tones input via keypad. – Think about calling your insurance company … ‘To file a claim, press 2’]
While they can benefit greatly from each other, there is no need to create a dependency between a Conversational User Interface (CUI) and Machine Learning (ML). It is not hard to imagine how a CUI can be put to good use with a currently existing service infrastructure. In fact, I think it is a good idea to apply the principal of “separation of concerns” and not merge the already difficult task of creating a CUI with new or untested ideas for services and solutions. read more…
Here are the steps to update your Nexus 6P to Android 7 (final) release.
Step 1. Install Android Studio
Download and install Android Studio from here. If you think you already have everything you need, at least verify your adb version like so:
.. Check adb version
Android Debug Bridge version 1.0.36
|
OPCFW_CODE
|
I forgot I had the app installed as I haven't received notifications for quite some time. Issue on my end or others experiencing this?
> ...Reinstalled app, had
> to sign out post reinstall (not sure how/why it kept login credentials after removal)
That's surprising! I wonder if it's a clue. As far as I know, uninstalling should cause it to forget who you are and make you sign on again. If it didn't do that, I wonder if something funky is going on. But, I don't know what it might be either. MrBean wrote:
> @bill did you change something? I just got the first notification in forever via
> a pm, but it was 48 minutes behind from when the message was originally received.
I don't think I changed something for that. I did fix the lack of notifications for a mention, that's all.
48 minute delay is also strange and may be a clue. From my experience, notifications are usually immediate. It may indicate your network connection is bad or something. I don't think notifications vary much if you're on wifi vs. cell. The amount of data sent for one if very small.
I'm not sure what I can do to debug this. I can tell you how it works roughly. When you sign on to the app, my server records the device ID of your phone. Then, when an event happens on my server (like someone sends you a pm, offer, or mentions you), it checks if it has a phone device ID. If it does, it sends a notification message to the Google Cloud Messaging service (Google's servers). Google then send the message to your phone (I don't know much about this or have much control over it -- but it usually works well and fast from what I've experienced -- I'm usually on wifi, though).
There can be cases where my server loses your device ID. For example, if you sign off from the app or if I get an error back from Google, I will delete it, assuming the person uninstalled the app or stopped using that phone. I think there's something similar if I just don't get activity from that device for a month or more. But, signing in, should refresh that and put the device ID back. You can also have multiple devices and mix iOS/Android.
The version of Android you're using may matter too. Both of my test devices are Android 6.0.1. Other version may handle notifications differently (but should work as far as I know).
It surprises me that it was already logged in. I did put in stuff that would try very hard to remember login info, but I didn't think it would work this well. I'm fairly sure it's tied to the browser, likely Chrome on Android. So, it's possible that Chrome carried over the saved login info somehow.
You might want to log off and log back in again to see if that gets notifications working.
Notifications are per-device. I keep a table of them on the server. Then, if something happens (like a new pm), it will send the notification to each device that person has.
I just checked and it's showing you have 1 android device registered for notifications at the moment. The last time registration happened was 9:05pm last night. So, that seems different from what you're telling me. If you have 2 phones signed on with the GTZ app, I'd expect 2 registrations. Signing off the app or uninstalling it, should remove the notification registration. Signing on, should add it--but, perhaps the odd way that happened automatically on your new phone caused it not to happen. So, I'd suggest signing off then on again to reset it.
|
OPCFW_CODE
|
using System;
using System.Collections.Generic;
using HtmlTags;
using StoryTeller.Engine;
namespace StoryTeller.UserInterface.Editing.HTML
{
public class TextboxBuilder : ICellBuilder
{
private readonly List<Action<Cell, HtmlTag>> _alterations = new List<Action<Cell, HtmlTag>>();
public TextboxBuilder()
{
If(t => t.IsFloatingPoint()).Then(
x => x.AddClass(GrammarConstants.NUMBER).Attr(GrammarConstants.MAX_LENGTH, "19"));
If(t => t == typeof (Int16)).Then(
x => x.Attr(GrammarConstants.MAX, Int16.MaxValue).AddClass(GrammarConstants.INTEGER));
If(t => t == typeof (Int32)).Then(
x => x.Attr(GrammarConstants.MAX, Int32.MaxValue).AddClass(GrammarConstants.INTEGER));
If(t => t == typeof (Int64)).Then(
x => x.Attr(GrammarConstants.MAX, Int64.MaxValue).AddClass(GrammarConstants.INTEGER));
}
#region ICellBuilder Members
public bool CanBuild(Cell cell)
{
return true;
}
public void Configure(Cell cell, CellTag tag)
{
tag.Attr("type", "text").AddClass(GrammarConstants.REQUIRED);
_alterations.Each(x => x(cell, tag));
}
public string TagName { get { return "input"; } }
#endregion
private IfExpression If(Func<Type, bool> filter)
{
return new IfExpression(_alterations, filter);
}
#region Nested type: IfExpression
internal class IfExpression
{
private readonly List<Action<Cell, HtmlTag>> _alterations;
private readonly Func<Type, bool> _filter;
public IfExpression(List<Action<Cell, HtmlTag>> alterations, Func<Type, bool> filter)
{
_alterations = alterations;
_filter = filter;
}
public void Then(Action<HtmlTag> action)
{
Action<Cell, HtmlTag> alteration = (cell, tag) => { if (_filter(cell.Type)) action(tag); };
_alterations.Add(alteration);
}
}
#endregion
}
}
|
STACK_EDU
|
By doing so, Microsoft is illustrating the potential of Artificial Intelligence to better the common good! Taking part in the program, I had the pleasure of assisting the social enterprise “Social Builder” in the Diversity & Inclusion category. I’d like to tell you about this enriching experience.
Perfecting the chatbot user experience
Social Builder is a social enterprise championing gender equality in the digital world. Its aim is to help women change careers and find employment, through guidance, training and integration actions. The tool used by Social Builder to pursue this ambition is Adabot, its virtual assistant.
Launched in October 2018 on Facebook Messenger, it connects female jobseekers and women looking to change careers to the digital ecosystem in their area. Adabot is a genuine virtual orientation coach and guides users throughout their career. However, the way the bot worked was not enabling Social Builder to effectively achieve its aims, which include making Adabot a decisive tool to guide women towards careers in the digital sector.
Several obstacles were identified:
- Mandatory authentication via an external account at the start of the process
- Lack - or even absence - of customized answers
- Lack of data collection - even though this would help to enrich the user experience
The whole idea of the Share AI project is to implement artificial intelligence solutions that will significantly improve the user experience using Adabot.
Technological solutions deployed
One of the primary aims is to convert Adabot into a proprietary intelligent bot; in other words, a bot that is native to the Social Builder website. Then solving the issues identified upstream could already make the bot much more intelligent, accurate, compliant and upgradeable. Finally, to become a genuine artificial intelligence aide, Adabot needs to understand the user’s intention, irrespective of sentence complexity, and be able to ask follow-up questions to eliminate any ambiguity or simply find out more about the user. It needs memory to reuse key information throughout the conversation, for context or customization purposes, such as getting the conversation back on track if the user asks irrelevant questions. To achieve a certain linguistic level, Adabot must be able to learn from its users.
With this in mind, we decided that NLP (Natural Language Processing) would allow both analysis of users’ feelings, based on their answers, and better understanding of their intentions. The aim is to determine their experience and offer them options allowing them to get on board more easily with the recommended career. The key issue is making the bot more intelligent by continuously improving the quality of its answers, and to give it the ability to offer users custom coaching. NLP is the obvious choice to improve Adabot!
Our objectives are to improve the bot’s retention rate and provide users with more reliable career assistance by improving performance in terms of:
Access to and use of the system:
- Relevance of the answers provided
- Better understanding of user intentions
- Custom assistance
NLP: Usage context, benefits and limits
With the advent of deep learning, NLP has been used for various tasks, such as understanding, ranking, translating and predicting (or generating) text, among many others. It enables bots to understand the semantics of the language used, text structures and spoken sentences.
This makes it possible to extract useful information from a large volume of text data. Among other things, it can help identify recurrent issues with a product or service based on user reviews and then deduce performance indicators, such as the customer satisfaction rate and experience.
We first identified several technology obstacles to eliminate:
- Access to the bot requires authentication via an off-platform account
- The existing bot’s decision tree is static
- The data is not easily actionable to improve the bot
- There is no interface between the bot and Social Builder’s CRM
To eliminate the first obstacle, Social Builder transferred the bot to another platform, called Vizir. In order to ensure GDPR compliance, users can now choose whether to consent to the history of their conversations with the bot being saved. Using the saved history, we analyzed conversations in order to measure the bot’s performance and determine the causes of cancellation or any issues encountered by users.
Performance is measured by producing indicators. Adabot already had several KPIs, including the number of new users, duration of conversations and a satisfaction survey. These indicators are still not enough for effective measurement. To take account of users who do not answer satisfaction surveys, we chose to focus our analysis on users’ words. By analyzing feelings based on conversations, we can now measure:
- Bounce rate, which reveals the percentage of users who visited the website without consulting the bot
- User experience, which detects whether users are interested in or indifferent to digital professions after their discussion with the bot, based on the words they used
- Interest in events or training courses thanks to the bot, to find out whether a user was convinced to sign up after their conversation
- Interest in digital professions thanks to the bot, to see whether the conversation led the user to consider working in the digital sector
We therefore used the AI Builder in Microsoft’s Power Platform solution to train a feeling analysis model, with the aim of classifying user opinions into three categories: Positive, Neutral, and Negative.
- The first group indicates that the user was satisfied with her conversation with the bot and got all the information she needed.
- The second group includes all users who were hesitant or doubtful following their conversation with the bot. These users still need to be convinced via a follow-up phone call, with one of Social Builder’s advisors for example.
- The final category gives us an overview of users disappointed by the bot, either because they did not get the information they needed or because they do not identify with the situations set out by the bot. These users’ conversations are further analyzed to pinpoint any possible bottlenecks that made their conversation sterile. This will then enable Social Builder to improve the bot’s conversations and/or recommend new careers tailored to these wide-ranging situations.
Our work with Social Builder on Adabot is still ongoing and new ideas are likely to emerge with the integration of NLP, such as translation, in order to include non-French-speaking users. NLP should enable Adabot to have a real conversation with users, while an individual’s conversational data could provide invaluable information, by understanding trends and better interpreting users’ feelings. The natural language processing model proved essential to analyzing conversations and identifying a set of indicators capable of shedding light on how the conversation went.
|
OPCFW_CODE
|
There are 6 repositories under add-on topic.
Web Extension for Firefox/Chrome/MS Edge and CLI tool to save a faithful copy of an entire web page in a single HTML file
:sunglasses: A curated list of add-ons that extend/enhance the git CLI.
PDF translation add-on for Zotero 6
A kubernetes operator for creating and managing a cache of container images directly on the cluster worker nodes, so application pods start almost instantly
Add-on for video editing in Blender 3D: edit videos faster! Included in Blender 2.81+
It makes your existing server live. This is a browser extension that helps you to live reload feature for dynamic content (PHP, Node.js, ASP.NET -- Whatever, it doesn't matter)
A storybook addon that lets your users toggle between dark and light mode.
The Ancillary Guide to Dark Mode and Bootstrap 5 - A continuation of the v4 Dark Mode POC. >>> Bootstrap 5.2 is in the `dev/v1.2.0` branch <<<
📱 Browser add-on allowing you to quickly generate a QR code offline with the URL of the open tab or other text!
One add-on to rule Tags all. Manage all your Tags in one Zotero add-on.
Never doubt how to pronounce a word. Double-click it and your browser will say it out loud for you!
Puts an RSS/Atom subscribe button back in URL bar
The Definitive Guide to Dark Mode and Bootstrap 4 - A proof of concept
GameRig is an auto rigging for games addon for Blender. Built on top of Rigify, it adds rigs, metarigs and additional functionality that enable game engine friendly rig creation. Open source and can be used for personal and commercial projects.
Because you have a weak spot for fonts
Shot Manager is a Blender add-on that introduces a true shot entity in Blender scenes, as well as a wide and powerful set of tools to build and edit sequences in real-time directly in the 3D context.
Next generation of bridge., the Minecraft Add-On editor
A panel with extra options for shape key management in Blender.
✅ Chrome extension to check tasks directly from your Trello boards
Securely collect browsing history over browsers.
This is a (Firefox) add-on (WebExtension) that lets you invert the website's color scheme by inverting/changing the prefers-color-scheme media feature of CSS.
Web extension that automatically likes videos from your subscribed channels.
Free and open source consulting-style Powerpoint toolbar
A Git client that can be installed as an add-on in Oxygen XML Editor.
Yara Based Detection Engine for web browsers
Blender add-on to export multiple glTFs at once
Firefox add-on providing syntax highlighting for raw code, based on the highlight.js project.
This provider add-on adds Google synchronization capabilities to TbSync. Only contacts and contact groups are currently managed, using Google's People API.
This repository contains all Mikroe Click Board™ libraries and appropriate examples. Libraries are developed for each add-on board separately and cover a variety of features. The libraries can be easily included into any existing project.
|
OPCFW_CODE
|
<?php
namespace Printed\Bundle\Queue\ValueObject;
use Printed\Bundle\Queue\EntityInterface\QueueTaskInterface;
use Printed\Bundle\Queue\Queue\AbstractQueuePayload;
/**
* A class, that holds a queue payload, that is dispatched at later point of time. Think of it
* as of a Promise<QueueTaskInterface>.
*
* When exactly the actual task is dispatched, is dependant on the feature, that created the
* instance of this class. See usages to get an idea.
*/
class ScheduledQueueTask
{
/**
* When payload is not defined then the $payloadCreatorFn is used to construct it just before the queue task is
* dispatched (i.e. after the entity manager flush).
*
* That's impossible for both the payload and the payload creator function not to be set.
*
* @var AbstractQueuePayload|null
*/
private $payload;
/** @var callable|null */
private $payloadCreatorFn;
/** @var callable|null See QueueTaskDispatcher::dispatch() */
private $preQueueTaskDispatchFn;
/** @var QueueTaskInterface|null Defined, when the task is dispatched */
private $queueTask;
public function __construct(
AbstractQueuePayload $payload = null,
callable $payloadCreatorFn = null,
QueueTaskInterface $queueTask = null
) {
if (!$payload && !$payloadCreatorFn) {
throw new \InvalidArgumentException(sprintf(
"Can't construct `%s` without providing either the queue payload or the queue payload creator function",
get_class()
));
}
$this->payload = $payload;
$this->payloadCreatorFn = $payloadCreatorFn;
$this->queueTask = $queueTask;
}
/**
* @return AbstractQueuePayload|null
*/
public function getPayload()
{
return $this->payload;
}
public function getPayloadOrThrow(): AbstractQueuePayload
{
if (!$this->payload) {
throw new \RuntimeException("The queue payload isn't constructed yet. It will be after the final EntityManager flush");
}
return $this->payload;
}
/**
* @return callable|null
*/
public function getPreQueueTaskDispatchFn()
{
return $this->preQueueTaskDispatchFn;
}
public function setPreQueueTaskDispatchFn(callable $preQueueTaskDispatchFn = null)
{
$this->preQueueTaskDispatchFn = $preQueueTaskDispatchFn;
}
/**
* @return QueueTaskInterface|null
*/
public function getQueueTask()
{
return $this->queueTask;
}
/**
* @return QueueTaskInterface
*/
public function getQueueTaskOrThrow(): QueueTaskInterface
{
if (!$this->queueTask) {
throw new \RuntimeException("Can't retrieve scheduled queue task, because it's not been dispatched yet.");
}
return $this->queueTask;
}
/**
* @param QueueTaskInterface|null $queueTask
*/
public function setQueueTask(QueueTaskInterface $queueTask = null)
{
$this->queueTask = $queueTask;
}
/**
* @internal Do not call this function.
*/
public function constructAndGetPayload(): AbstractQueuePayload
{
if ($this->payload) {
throw new \RuntimeException('Queue payload is already constructed');
}
if (!$this->payloadCreatorFn) {
throw new \RuntimeException("Can't construct the queue payload because the payload creator function wasn't provided");
}
$this->payload = call_user_func($this->payloadCreatorFn);
if (!$this->payload instanceof AbstractQueuePayload) {
throw new \RuntimeException("Queue payload creator function didn't construct an instance of a queue payload");
}
return $this->payload;
}
}
|
STACK_EDU
|
Central Florida sees rise in coral snake bites - News - Daily ... Nov 5, 2018 ... Central Florida sees rise in coral snake bites .... red: You're dead,” “Red against yellow can kill a fellow”, or “Red touching black: Safe for Jack.”. How to Tell the Difference Between a King Snake and a ... The venomous coral snake's tail has only black and yellow bands with no red. The non-venomous scarlet king snake's band pattern remains the same throughout the length of his body. The non-venomous scarlet king snake's band pattern remains the same throughout the length of his body. Rhyme for Coral Snakes - Colors to Tell if a Snake is ... The coral snake rhyme varies from person to person, but the general premise is the same: Red touch black, safe for Jack. Red touches yellow, kills a fellow. The coral snake will have bands of red touching smaller bands of yellow. It is very uncommon to find a coral snake. These animals like to hunt in the early and late hours of the day. They are very reclusive, even among their own kind. When ...
Melbourne woman uses machete to save venomous coral snake from a cat Michelle Redfern, owner of the Kona Ice shaved ice truck, got a surprise in her garage -- a venomous coral snake Check out this ...
“Red next to yellow, kills a fellow; red next to black, friend of jack”. Coral Snakes can be identified by these features: Slim head: Unlike the other snakes above, Coral Snakes have slim bodies and heads. Just another reason to use more than triangular heads as identification! Unique coloring: Their bodies... Here's 11 Animals That Most People Are Terrified Of But Are... |… Next. 1. Milk snakes get a bad rep mainly because they closely resemble the very poisonous coral snake. They are actually completely venom free. To tell the difference between the two, just remember, "Red next to black is a friend of Jack; red next to yellow will kill a fellow." Monster Manual Monday - N Naga | The Princess Planet I've used a sort of Naga in The Princess Planet, but I gave her arms and a torso from a human, not just a head. In D&D Nagas are supposed to not be that strong but are clever and use their magic to best their opponents. I figure some of her jewelry must be enchanted and that potion's gotta give her... Rockagator в Instagram: «Yellow on black is a friend of … 145 отметок «Нравится», 5 комментариев — Rockagator (@rockagator) в Instagram: « Yellow on black is a friend of Jack? Or was it black on yellow will kill a fellow? How does that…»
Gay Black Cock - Tube Videos and Movies
How to Identify Red & Black Striped Snakes | Sciencing Mar 13, 2018 ... Look for red, black and yellow or white banding around the snake's body to identify a coral snake, a highly venomous snake in North America. Venomous Snakes & Local Wildlife | Jupiter Farms Residents - JFR Rule #1: Red touches black, you're OK Jack; Red touches yellow, you're a dead ... Like this snake, the red and the yellow are next to each other and if you see ... Scarlet Kingsnake - Snake Facts
Red Yellow Black friend Of Jack? | Forum
Red next to Black, friends with Jack.... by… These harmless, rat eating snakes are an example of Batesian Mimicry as a form of s... Red next to Black, friends with Jack....There is a way to tell these non-venomous snakes from the venomous counterpart. Hint: its in the stripe pattern: -Red next to yellow- kill a fellow (Coral Snakes). Red next to black is a friend of Jack, Red next to … Red next to black is a friend of Jack, Red next to yellow will kill a fellow-- or so they say.
Jun 03, 2012 · We learn about mimics and that when red touches yellow it can kill you, meaning it is a coral snake and NOT a scarlet kingsnake. Red touching black means a venom lack or a …
01.21.2007 - Here is a photo of me with a nice-sized Ringneck snake. I did not take this photo, as if often the case with photos in which I appear. No, this photo was taken by the customer, who happens to be a professional photographer. Professionals shoot in black and white you know, which is much cooler than boring old color. Southeastern Outdoors - Was it a Venomous Snake? Was it Venomous? With nearly three million turkey hunters taking to the woods and 6.8 million wild turkeys across North America, this spring will be filled with numerous opportunities to enjoy the outdoors. More than a few hunters will also have a close encounter with another wild creature - the snake! Yellow Jack (1938) - IMDb Yellow Jack is a film that should be seen more often, if for no other reason than that people should know and appreciate who Walter Reed was and why the United States Army named its medical facility after him. Sidney Howard had written a play about Reed and his efforts to find a cure for yellow fever, popularly called yellow jack. Yellow Jacket | Big Ass Fans Big Ass Fans® donated two overhead Haiku® and two portable Yellow Jacket® fans to the center to keep chimps cool. After a period of initial suspicion, chimps in outdoor enclosures now take daily naps in front of Yellow Jacket’s cooling airflow – particularly the one with a misting attachment.
friend. coral. jack. remember.coed. jacked. black friends. Ax+Apple | “Red on black is a friend of Jack. Red on … Red on yellow could kill a fellow.” New ups in the Pins+Needles section. 🐍 www.axandapple.com.19th Feb 2016 | 5 notes. “Red on black is a friend of Jack.Next. Snake Species - Lampropeltis triangulum - Milksnake "Red next to black, you can pat him on the back; red next to yellow, he can kill a fellow.""Red touches yellow, Not a nice fellow; if red touches black, good friend of jack."
|
OPCFW_CODE
|
In this lab, you will extend your work from the previous lab into the JSP architecture. In particular, you'll provide an interface which has two text fields in which the user enters the actor's first and last names, a radio button with which the user selects a store, and a submit button. When the user clicks submit, the two text fields are cleared, the radio button is left unchanged, and a table appears showing a list of titles of movies on the selected store's shelves which contain the selected actor. In addition to allowing the user to query your database, you can also provide you the ability to easily see more information about the movies, courtesy of The Internet Movie Database.
When I grade your labs, I will not just check that you have a correctly working application; I will also be looking to see that your application is well designed and documented. For this, I recommend that you follow the design process for the gened example I described in class. You also need to provide adequate documentation for the project, including in particular javadocs for your package.
If you wish, you may work in groups of two on the lab. (One group of three will be allowed, since there are an odd number of students in the class; however, if one, three, of five students want to work alone, then only groups of two will be allowed.) You should write the application from scratch, testing as you go. You should never be writing more than 10 or 20 lines of code before testing the code. Be sure you have a strategy for testing any code you write before you write it. If you need help with this, please ask.
As I said in class, you will need to put the
mysql.jar in the appropriate
directory. Assuming that you set up the
properly, issue the following command:
cp ~karl/public/270/mysql.jar ~/tomcat/common/libHaving done that, you should then look at that directory
As I said above, part of your grade will depend on how you organize of your project. I highly recommend that you borrow the structure that I used for the gened project I described in class. You can get that project directory by issuing the following command from the directory where you want the directory copied:
cp -r ~karl/public/270/jsp/gened .After doing this, look through these directories/files, make the application (described more completely below), test it out, and look at the documentation it creates. Before doing
make, you will need to have created the following directories:
~/webapps/gened ~/www-docs/gened ~/www-docs/gened/docs ~/www-docs/gened/docs/javadocsHaving done this, you should then set up a similar structure for application you will be writing. If you copied over and played with the files as described above, you will understand what I mean. I will refer to this directory as your JSP work directory.
As discussed in class, there is a GNUmakefile in the
gened/ directory. You carefully should look at this
file, since you will be using/modifying it in your project. To
install the application, go to the
gened/ directory and
issue the following command:
makeTo remove all backup files in the current directory, type:
make cleanTo generate the documentation, type:
.java files will be compiled into
.class files in a subdirectory of the root directory
~/webapps/WEB-INF/classes, where the path name is determined by
the package name you use. Your
therefore include this root directory if you want to test your Java classes
in isolation. If you use a
csh shell, you can put the
following in your
setenv KARL ~karl setenv CLASSPATH $HOME/webapps/WEB-INF/classes:$KARL/public/270/mysql.jar:.If you use
bash, put the following in your
export CLASSPATH=~/webapps/WEB-INF/classes:~karl/public/270/mysql.jar:.You'll need to either open a new shell or type
genedapplication you have copied over. Here is a brief description of the directories/files you should include:
GNUmakefile: The makefile which installs your application.
README: This includes brief documentation on your application. This is the first file I will look at and should paint a broad overview of your submission. In particular, it should contain a clear statement about what you have successfully tested and any bug reports. I should be able to replicate your tests.
docsdirectory: This should include a file
index.htmlas well as other html files that describe your application. Be creative. As one link, I would include a copy of the README file.
mockupdirectory: Include here the html mockups (prototypes) you write, which should be the basis for the jsp files you program.
JSPfile(s) containing your interface. You can probably get away with one jsp file, called, say,
gened, so you could, for example, call it
movies. This directory should include the following java files:
*Bean.javafiles: Java Bean(s) which holds the name of an actor and the store selected. You'll need at least three setters, which set the first name, last name, and store, three getters.
If you choose separate Beans to store the actor and the store, provide
toString() method which returns the full name of the
actor or the store name. If you combine them into one bean, while you
may want to make getters which return strings, you may also want a
toString() method to use for debugging.
Storeclass might be appropriate. You decide what you need.
QueryHandler.java: A Java class file which composes the SQL query and returns a list of movies. Your SQL query should be performed using a
PreparedStatement, since your interface will have text fields. The
PreparedStatementshould only be made once by the QueryHandler object rather than once for each query the object handles.
sqldirectory: include here the table definitions, the data files, and SQL queries you used.
jspfile, and what methods belong in auxiliary classes?
Lab4, now type:
submit program prints
out to confirm it seems to have submitted properly.
|
OPCFW_CODE
|
A quota template defines a space limit, the type of quota (hard or soft), and (optionally) a set of notifications that will be generated automatically when quota usage reaches defined threshold levels.
By creating quotas exclusively from templates, you can centrally manage your quotas by updating the templates instead of replicating changes in each quota. This feature simplifies the implementation of storage policy changes by providing one central point where you can make all updates.
|To create a quota template|
In Quota Management, click the Quota Templates node.
Right-click Quota Templates, and then click Create Quota Template (or select Create Quota Template from the Actions pane). This opens the Create Quota Template dialog box.
If you want to copy the properties of an existing template to use as a base for your new template, select a template from the Copy properties from quota template drop-down list. Then click Copy.
Whether you have chosen to use the properties of an existing template or you are creating a new template, modify or set the following values on the Settings tab:
In the Template Name text box, enter a name for the new template.
In the Label text box, enter an optional descriptive label that will appear next to any quotas derived from the template.
Under Space Limit:
- In the Limit text box, enter a number
and choose a unit (KB, MB, GB, or TB) to specify the space limit
for the quota.
- Click the Hard quota or Soft
quota option. (A hard quota prevents users from saving files
after the space limit is reached and generates notifications when
the volume of data reaches each configured threshold. A soft quota
does not enforce the quota limit, but it generates all configured
- In the Limit text box, enter a number and choose a unit (KB, MB, GB, or TB) to specify the space limit for the quota.
You can configure one or more optional threshold notifications for your quota template, as described in the procedure that follows. After you have selected all the quota template properties that you want to use, click OK to save the template.
Setting optional notification thresholds
When storage in a volume or folder reaches a threshold level that you define, File Server Resource Manager can send e-mail messages to administrators or specific users, log an event, execute a command or a script, or generate reports. You can configure more than one type of notification for each threshold, and you can define multiple thresholds for any quota (or quota template). By default, no notifications are generated.
For example, you could configure thresholds to send an e-mail message to the administrator and the users who would be interested to know when a folder reaches 85 percent of its quota limit, and then send another notification when the quota limit is reached. Additionally, you might want to run a script that uses the dirquota.exe command to raise the quota limit automatically when a threshold is reached.
To send e-mail notifications and configure the storage reports with parameters that are appropriate for your server environment, you must first set the general File Server Resource Manager options (for more information, see Setting File Server Resource Manager Options).
|To configure notifications that File Server Resource Manager will generate at a quota threshold|
In the Create Quota Template dialog box, under Notification thresholds, click Add. The Add Threshold dialog box appears.
To set a quota limit percentage that will generate a notification:
In the Generate notifications when usage reaches (%) text box, enter a percentage of the quota limit for the notification threshold. (The default percentage for the first notification threshold is 85 percent.)
To configure e-mail notifications:
On the E-mail Message tab, set the following options:
- To notify administrators when a threshold is
reached, select the Send e-mail to the following
administrators check box, and then enter the names of the
administrative accounts that will receive the notifications. Use
the format account@domain, and use semicolons to separate
- To send e-mail to the person who saved the
file that reached the quota threshold, select the Send e-mail to
the user who exceeded the threshold check box.
- To configure the message, edit the default
subject line and message body that are provided. The text that is
in brackets inserts variable information about the quota event that
caused the notification. For example, the [Source Io Owner]
variable inserts the name of the user who saved the file that
reached the quota threshold. To insert additional variables in the
text, click Insert Variable.
- To configure additional headers (including
From, Cc, Bcc, and Reply-to), click Additional E-mail
- To notify administrators when a threshold is reached, select the Send e-mail to the following administrators check box, and then enter the names of the administrative accounts that will receive the notifications. Use the format account@domain, and use semicolons to separate multiple accounts.
To log an event:
On the Event Log tab, select the Send warning to event log check box, and edit the default log entry.
To run a command or script:
On the Command tab, select the Run this command or script check box. Then type the command, or click Browse to search for the location where the script is stored. You can also enter command arguments, select a working directory for the command or script, or modify the command security setting.
To generate one or more storage reports:
On the Report tab, select the Generate reports check box, and then select which reports to generate. (You can choose one or more administrative e-mail recipients for the report or e-mail the report to the user who reached the threshold.)
The report is saved in the default location for incident reports, which you can modify in the File Server Resource Manager Options dialog box.
Click OK to save your notification threshold.
Repeat these steps if you want to configure additional notification thresholds for the quota template.
|
OPCFW_CODE
|
How to set up network file sharing
What is network file sharing and why do I need it?
Every person that users computer or smartphone very often needs to view, copy or move files between the
desktop computers, laptops, netbooks and smartphones. The files may be images, music, video files or documents.
The easiest way to transfer files from one computer to another (PC to PC, Mac to Mac, PC to Mac, and vice versa) is by using
a home network (also known as LAN - Local Area Network). LANs can be wired (with network cables), wireless (Wi-Fi), or a combination of the two.
LANs are also widely used in most businesses.
Even if you have only one computer (or only one laptop or netbook) you still can benefit greatly from a home network if you have
a smartphone or network attached storage device (NAS, basically a hard disk with network connection). Once you have a working
network, you have to set up the network file sharing: the ability to share all the files in given disks or folders
with the other computers on the network.
Within a network, every computer, laptop, nettop or NAS device must have a unique name (network computer name),
which is used to find and connect to this computer.
(see how to find out or change the network name of your computer)
Every computer or NAS device may share one or more folders that will be visible from the other computers (or smartphones).
Such folder is called network share or shared folder and must have a unique name within the computer.
For example, your desktop PC may have the name Atlas and to provide few network shares: Downloads,
Music and Photos. If you want to connect to one or more of these shares, you will have to use
the following addresses: \\Atlas\Downloads, \\Atlas\Music and \\Atlas\Photos.
Until recently the only option to transfer files between your computer and your smartphone was the physical USB
cable connection, which is relatively simple but not very convenient. As many smartphones are Wi-Fi capable it is very
convenient to view, open, copy or move files from your smarthone without any cables and without the need to sit
in front of your desktop computer. myExplorer is an advanced smartphone application, which allows
you to see all network shared folders from your smartphone.
Please read the rest of this guide to see how to set up the network file sharing in your own home network (or between your
laptop and your smartphone, the principles are the same). If you already have a working home network with file sharing but
you do not know how to use it from your smartphone, please go directly here.
How to set up a home network (LAN)
Besides the sharing of files, the other two major uses of a home network are the internet connection sharing and the printer sharing.
The first one allows all computers (and smartphones, if you have a Wi-Fi router or access point) to use your ADSL or cable internet connection and
the second one allows you to print documents from one computer to a printer that is attached to another computer (or directly to the network). If you
already have these services, then you have a working home network and you just need to set up the network file sharing. In such case,
please go directly here.
First, you need to build the physical (hardware) part of your network. Basically, you need to connect together all computers that will participate
in the network. Then you may need to change the settings of the computers depending on their operating system. The following guides are good source
of information about the needed hardware and configuration settings:
How to set up the network file sharing
After you have created you home network (up to the point that all computers are connected either with LAN cables or Wi-Fi wireless connection and have
Internet access), you need to set up the network file sharing. This will allow you to access the files and folders on one computer from another computer
on the network (or from your smartphone).
The steps involved are different depending on the operating system on the computer(s). Note that you only need to do this on
computers that you want to share files with the other computers on the network (i.e. act like file servers). As you may have
computers with different operating systems on your network, make sure that you are following the right guide for the
operating system of the computer that you are configuring:
For Windows 7:
Windows 7 supports few different ways to share files over network but they all are just varios ways for creating a network shared folder.
The easiest way is the homegroup feature of Windows 7,
which automatically creates a shared folder when you use the Share with right-click menu in Windows Explorer.
The second way is to place the files that you want to share in the some of the Public folders, which are located in the /Users/Public folder
on your system hard disk. Note that you may have to
turn on the public folders sharing as it is off by default.
The third way is the most powerful but somewhat harder to set up
Advanced file sharing (scroll down to the Advanced sharing section).
This is basically the same way that is available
under Windows XP and Vista and althrough more complex, it is also more logical and gives higher degree of control over what is shared and who has access to it.
For Windows Vista:
Windows Vista supports two ways of sharing. The first one is the Public folder, which allows you to share all files in one public folder. It is very easy
but not very flexible. The more powerful way is to share specific folders on your computer. Both ways are explained in details in
Share files with someone article for Vista.
For Mac OS X:
If you are using Mac OS X, please follow the tutorial Mac 101: File sharing to set up
the network shared folders. Make sure to select the SMB protocol in the sharing options in order to make the shared folder
accessible from Windows computers and from your smartphone with myExplorer application.
Accessing your network shares from your smartphone with myExplorer
At this point you have a working Wi-Fi network and some network shared folders on one
or more computers (or network storage devices - NAS). All you need to do is to add
the network shares that you want to access to your myExplorer application on the
Start the myExplorer application and you will see its main screen. On Samsung Wave phone, the Add
button on the lower left corner of the screen. On a Nokia phone, tap Options menu and
select the Add network folder option. This will open the Add network folder screen.
Here, you have to enter the following things:
- Computer name. This is the network name of the computer that
you want to access (see how to find out
or change the network name of your computer). In the (very rare) cases when the NetBIOS
name service is not enabled on your network, you can enter the exact IP address of the
computer in the field Computer IP address. Even in such case, you can still
fill the Computer name field to see more familiar name instead of IP address in
myExplorer main screen.
- Folder (share) name. This is the exact name of the network
share that you have created on this computer.
- Computer IP address. Optional field, use only if there is no NetBIOS
name service on your network (i.e. almost never).
- WINS server IP address. Optional field, use only if you have a WINS
server (this is the case only in some business networks).
- User name. The user name of the account that have the right
to access the network share. You can leave this field empty if you are using anonymous
- Password. The password of the account that have the right to
access the network share. You can leave this field empty if you are using anonymous
After you fill the required fields, tap the Save or Done button and you are done! You
will see the new network share on the myExplorer main screen (it will appear in the form
\\Atlas\Photos - Atlas is the name of the computer and Photos is the name of the network
share). In order to enter the network folder, you just need to tap the network share.
If you receive an error message when trying to access your network share from myExplorer
application, please check your error message (you will need to long tap the
network share name and select the Edit command to correct the error in the
network share settings):
- Network error. myExplorer can not find or connect to the
computer. Check if you the computer network name is entered correctly, if the computer
is turned on or your Wi-Fi network is working properly (i.e. if you have Internet
access on your smartphone).
- Invalid user name and/or password. This computer requires
a user name and a password. If you have entered user name and password, enter them
again as they are not correct.
- Share name does not exist on this server. The folder (network share)
name that you have entered is not correct (or this network share has been removed
from the computer and is no longer available).
|
OPCFW_CODE
|
I have several sits in development running into a similar issue. It seems like the ACF data is not pulling into the front end of the site. So I get the structure of the HTML surrounding it but not all the data I’ve input in the fields.
There is little rhyme or reason. Often times it loads just fine, some times I get only the HTML but not the ACF results and have to refresh the page for it to kick in. Sometimes I actually have to navigate to another page and then return to the page I want to see.
I am not an expert by any means in DB’s but it seems to be that the information is getting caught up some where and not being returned always by the database. Any help would be greatly appreciated.
I first I just thought it was my hosting account, but when I moved the site to a production environment the problem persisted.
Live site = http://dexahaitirelief.com
This sounds like a major server issue. I would contact your webhost and get them to check the logs. It seems that on some page loads, your server is unable to load data correctly from the Database.
It is also possible that you have some caching going on which could prevent ACF from correctly loading the data. Please make sure your site and server do not use any caching systems.
Hope that helps.
There are no site/server caching systems in place. Unless something has changed dramatically without my knowledge which is highly unlikely.
I really do not think it is a server issue, which was my first thought. However when I recently moved the site to a new server to go live (with hope of problem being fixed on it’s own.) nothing changed. I am still experiencing the issues.
If the issue is consistent across servers, then perhaps the problem could be due to the code you use.
Maybe you are looping over code and performing some kind of break, or rand operation? This could allow for a ‘sporadic’ appearance if not coded perfectly.
I would definitely setup a local version of your site using MAMP or similar to test on perfect server situations.
Also, try disabling all plugins on the local site. This may fix the issue, and you can then add them back in one by one until you hit the problem.
Maybe also do some debugging with ACF’s function
get_field and WP’s functions
get_postmeta to compare the differences
This is happening on multiple servers, lcoally and with various hosts I have come to discover. It seems that plugin experiment is pointing to ACF. As far as debugging goes. I have no idea how to execute what you are suggesting.
Frankly, I am getting a little nervous about using this product knowing that there now seems to be widespread issue across a dozen or so client sites I am using it on. Any additional help getting this resolved would be extremely helpful…
If ACF isn’t working for you, that’s fine. There are quite a few other custom field plugins out there which you may enjoy more.
Please however, take some time to do some simple debugging with ACF’s function
get_field and WP’s functions
get_postmeta to compare the differences.
You may find that the issue is not with the ACF plugin, but with the code you have written in your theme.
If you are not familiar with debugging, it is not as scary as it seems. It’s just comparing visual data by printing it out to look at. Please jump on google to see some basic examples of PHP debugging.
I can’t offer much help other than the above, and I hope that you do debug and find the issue soon.
I actually had a small break through yesterday by changing the way jQuery is loaded on my sites. It fixed a lot of my issues my it is still sometime quirky.
You know, your custom fields plugin is by far the best on the market. And at this point I am so deeply invested in that it is deployed on so many clients sites that I kind of got to get to root cause.
It seems the issue is also isolated to just Chrome, IE/FF work like a charm. Since the sites have been loading better I have noticed that the quirk sometimes appears when I am using HTML tags within a textarea, simple stuff like and .
Let e know if you have any more ideas. In the meanwhile I will try to debug but it’ll be a learning curve.
You know I am interesting in this suggestion…
“Maybe also do some debugging with ACF’s function get_field and WP’s functions get_postmeta to compare the differences”
However, after searching around and experimenting I am not able find any information that instructs clearly how to accomplish this.
If there any way you can show me the steps necessary to do this?
You also mentioned something about a rand function. I have the following on a few sites but not every site I am having the problem with is using this function. Either way can you take a look and let me know if there is anything mal-formed that I need to be concerned about?
<div class="praise"> <?php $rows = get_field('book_praise', 'option' ); $rand_row = $rows[ array_rand( $rows ) ]; $rand_row_comment = $rand_row['praise_comment' ]; $rand_row_name = $rand_row['praise_name' ]; $rand_row_org = $rand_row['praise_org' ]; ?> <div class="review"> <span class="quote_top">“</span><span class="comment"><?php echo $rand_row_comment; ?></span><span class="quote_bottom">”</span> <div class="clear"></div> </div> </div> <div class="attribute"> <span class="quote_bar"><img src="<?php echo(THESIS_USER_SKINS_URL); ?>/mobile-first/images/quote_bar.png" alt="Publications" /></span> <span class="name">~ <?php echo $rand_row_name; ?></span> <span class="org"><?php echo $rand_row_org; ?></span> </div>
The topic ‘All ACF Data Not Loading / Sporadic’ is closed to new replies.
Welcome to the Advanced Custom Fields community forum.
Browse through ideas, snippets of code, questions and answers between fellow ACF users
Helping others is a great way to earn karma, gain badges and help ACF development!
© 2022 Advanced Custom Fields.
|
OPCFW_CODE
|