Document
stringlengths 395
24.5k
| Source
stringclasses 6
values |
|---|---|
Selecting Your Computer
Selecting a computer should be an educated decision. You should be aware of the many options. Computers are grouped into 2 main categories, open architecture and proprietary. Sometimes, the open architecture computers are called clones.
There are two styles of open architechure computers. AT is the original, including the 286, 386, 486, and first series of Pentiums. ATX is the latest for Pentium II, III, and 4. This also means a Pentium can not be upgraded to a Pentium II or Pentium III, even with the open architecture. This redesign has to do mostly with the layout of the motherboard and the power supply fan being located over the processor on a new location on the motherboard. It also has to do with the "soft power switch" used on newer computers. Soft power switching includes the ability of software, when Windows shuts down, to be able to turn off the computer power.
Some major manufacturers claim to have ATX standard motherboards when in fact there are enough differences that the open architecture motherboards still won't work in them. There are several versions of ATX including NLX, micro, and others.
IBM, Compaq, Sony, HP, Packard Bell, Dell, Gateway, and other well know manufacturers use, for the most part, proprietary motherboards, power supplies, cases, sound cards and modems, and sometimes even memory modules. This means that when the computer needs a replacement motherboard or power supply or other proprietary part, that part can only come from that manufacturer, and usually at a very high cost of replacement out of warranty.
These computers usually can NOT be upgraded (in most respects) to newer technology requiring a change of motherboard. Many of these manufacturers also put the display interface on the motherboard rather than an expansion card. This is a serious mistake. Many owners want a special display interface with a certain chipset or other features. Open architecture computers have a much lower maintenance cost.
The are, at last count, over 200 manufacturers of computer motherboards that make motherboards of a standard design that are interchangeable, because size, mounting, and connections are standard. Since this type of computer uses a standard motherboard, it is easily upgraded to newer technology.
All motherboards (main system boards) starting with Pentiums now have hard and floppy controllers as well as serial and parallel ports integrated onto the motherboard in today's versions because "Plug and Play" requires close control of these. This is good for lower manufacturing costs of computers in general, but bad for the consumer requiring a motherboard replacement.
Consumers are good at destroying their computers in one form or another. Most of this destruction comes from not knowing what does destroy a computer. Static electricity is one cause, wrong connections are another, and connecting anything, particularly the keyboard, with the power turned is the third reason. Surges and power failures also do a lot of damage.
Static electricity is usually thought to do its destruction when handling individual components out of the computer. In fact, static electricity does damage when connecting a computer for the first time, or any time you connect a computer. This happens usually when the keyboard, monitor, modem, printer, scanner, or other device is plugged into the computer before the computer is grounded. Plugging in the power to the computer and monitor, then making sure the power is off to the computer, and then making the rest of the connections usually prevents static electricity from causing damage. If you forget to turn the power off, you can cause other damage in most cases.
Wrong connections include mixing the keyboard and mouse connections, using an adapter on a printer when none should be used, accidently connecting it to the serial port. For those who go inside the computer, reversing the ribbon cables on drives and controllers is quite common. This sometimes happens when a cable is accidently knocked off and you try to replace it. It also happens when you add or change a drive.
Surges take out more computers than customers realize. It can happen on a clear day with no storm around. Cheap worthless surge protectors can give a false sense of protection. Not protecting the modem is a back door into the computer for surges. Having the printer unprotected while all other devices are on the best surge protector made can and will lead a surge into the computer through "the back door". Now you see how easily a computer is destroyed.
A motherboard replacement for a major manufacturers computer can cost $300 to $600 exchange while a motherboard for an open architecture computer costs typically $125 to $175.
Purchasing an open architecture computer is the best way to go for many reasons. First, they are usually a better quality computer in every way. Major manufacturers are heavily competing with each other and are putting out computers at the lowest possible price to beat out their competition. Second, out of warranty repairs are less expensive with open architecture computers.
Don't be fooled by the so called better warranty of 3 years by the "majors". This warranty is over the minute you open the case and add any equipment that is not original. Check the seal on the case. In many cases, owners have found that upgrading to the next version of Windows has voided the free support.
Open architecture computers can be fixed by any good technician, and usually very fast. Reports indicate that when the "major's" computers fail, they usually get sent back and we have heard many times that the computer is gone for 6-8 weeks.
On site support is another matter. Read the small print in the agreement. If you have a software problem unrelated to the computer under warranty that causes what might appear as a computer problem, you are billed at standard rates of $60 to $90 per hour or more, because it is not included in the warranty
Do your homework and find a local dealer who can build you a good computer. We do this. We have been doing it now for 15s years.
Gene's Computer Outlet, Frazer, Pa
|
OPCFW_CODE
|
The introduction of the OpenAI Assistants API at Developer Day is a big step forward in giving developers the ability to build experiences that feel more like interacting with a real agent in their apps. This API lets OpenAI’s users create their own customized “assistant” with specific instructions, tapping into a wide range of knowledge.
What’s more, it equips these assistants with access to OpenAI’s suite of generative AI models and tools for tackling various tasks. The potential uses for this API are wide-ranging, from providing a natural language interface for data analysis to aiding in coding, or even offering an AI-driven vacation planning service.
How exactly does OpenAI Assistants API work?
The heart of the OpenAI Assistants API is the Code Interpreter, a powerful tool by OpenAI built to write and run Python code in a safe and controlled setting. Introduced back in March for ChatGPT, the Code Interpreter boasts a wide range of capabilities—it’s not just about generating visual graphs and charts, but also adept at managing file operations. This upgrade allows the assistants created using the OpenAI Assistants API to run code in an iterative manner, offering solutions for coding and mathematical challenges.
The OpenAI Assistants API is crafted to be a flexible tool for developers, with the ability to seamlessly incorporate external sources of information like product specifications or exclusive documents into the assistants they create.
The OpenAI Assistants API achieves this by incorporating a retrieval component that enhances the assistants with information beyond what’s available in OpenAI’s own models. Additionally, the API enables function calling, allowing these assistants to execute pre-defined programming functions and seamlessly integrate the results into their interactions.
Currently in its beta phase, the Assistants API is now open to all developers. Usage is calculated and billed based on the per-token rates of the selected model, where a “token” is defined as a segment of text, like breaking down the word “fantastic” into “fan,” “tas,” and “tic.”
OpenAI’s Assistants API has become a game-changer for developers, making it easier to integrate GPT-like functionalities into applications and services. This leap forward is exemplified by the recent introduction of the Code Interpreter API. It’s designed to streamline the development process, which previously could take months and require extensive teams. The Assistants API equips developers with powerful capabilities like code interpretation, data retrieval, and function calling.
Advancing AI development with long threads and data safety
The Assistants API takes AI development a step further with the introduction of persistent and infinitely long threads. This makes it easier for developers by handling thread states, allowing them to focus on creating applications that are nuanced and context-aware. OpenAI places a strong emphasis on data safety, assuring that data processed by the API is not used to train their models, giving developers the confidence to manage their data independently.
While the API is still in beta, it’s open to all developers eager to explore its potential. OpenAI’s dedication to flexibility and developer control is clear as they look towards the future, with plans to enable the integration of custom tools that can work alongside its existing features.
In the upcoming developments, OpenAI aims to enhance the customization capabilities of its platform. This means customers will have the ability to incorporate their own tools into the framework provided by the Assistants API, complementing existing features like the Code Interpreter, the retrieval component, and function calling abilities. Apparently, OpenAI is going to be paving the way for even more versatile and tailored applications in the foreseeable future.
Meanwhile, if you are interested in the introductions that OpenAI made during their Developer Day, make sure to check out our articles on GPT-4 Turbo, better GPT for a lower price and Custom GPTs, GPTstore, and GPT builder.
Featured image credit: OpenAI
|
OPCFW_CODE
|
<?php
/**
* WebEngine CMS
* https://webenginecms.org/
*
* @version 2.0.0
* @author Lautaro Angelico <https://lautaroangelico.com/>
* @copyright (c) 2013-2018 Lautaro Angelico, All Rights Reserved
*
* Licensed under the MIT license
* https://opensource.org/licenses/MIT
*/
class PayPal {
private $_titleMinLen = 1;
private $_titleMaxLen = 50;
protected $_id;
protected $_title;
protected $_config;
protected $_credits;
protected $_cost;
protected $_cfg;
function __construct() {
// offline mode
if(config('offline_mode')) throw new Exception(lang('offline_mode_error'));
// database object
$this->we = Handler::loadDB('WebEngine');
// configs
$this->_cfg = loadConfig('paypal');
if(!is_array($this->_cfg)) throw new Exception(lang('error_66'));
}
/**
* setId
*
*/
public function setId($id) {
if(!Validator::UnsignedNumber($id)) throw new Exception(lang('error_239'));
$this->_id = $id;
}
/**
* setTitle
*
*/
public function setTitle($title) {
if(!Validator::Length($title, $this->_titleMaxLen, $this->_titleMinLen)) throw new Exception(lang('error_240'));
$this->_title = $title;
}
/**
* setConfig
*
*/
public function setConfig($id) {
$creditSystem = new CreditSystem();
$creditSystem->setConfigId($id);
$this->_config = $id;
}
/**
* setCredits
*
*/
public function setCredits($credits) {
if(!Validator::UnsignedNumber($credits)) throw new Exception(lang('error_241'));
$this->_credits = $credits;
}
/**
* setCost
*
*/
public function setCost($cost) {
if(!Validator::Float($cost)) throw new Exception(lang('error_242'));
$this->_cost = number_format($cost, 2);
}
/**
* addPackage
*
*/
public function addPackage() {
if(!check($this->_title)) throw new Exception(lang('error_243'));
if(!check($this->_config)) throw new Exception(lang('error_243'));
if(!check($this->_credits)) throw new Exception(lang('error_243'));
if(!check($this->_cost)) throw new Exception(lang('error_243'));
$data = array(
'title' => $this->_title,
'config' => $this->_config,
'credits' => $this->_credits,
'cost' => $this->_cost
);
$query = "INSERT INTO `"._WE_PAYPALPACKAGES_."` (`title`, `config`, `credits`, `cost`) VALUES (:title, :config, :credits, :cost)";
$addPackage = $this->we->query($query, $data);
if(!$addPackage) throw new Exception(lang('error_244'));
}
/**
* updatePackage
*
*/
public function updatePackage() {
if(!check($this->_id)) throw new Exception(lang('error_245'));
if(!check($this->_title)) throw new Exception(lang('error_245'));
if(!check($this->_config)) throw new Exception(lang('error_245'));
if(!check($this->_credits)) throw new Exception(lang('error_245'));
if(!check($this->_cost)) throw new Exception(lang('error_245'));
$packageInfo = $this->getPackageInfo();
if(!$packageInfo) throw new Exception(lang('error_246'));
$data = array(
'id' => $this->_id,
'title' => $this->_title,
'config' => $this->_config,
'credits' => $this->_credits,
'cost' => $this->_cost
);
$query = "UPDATE `"._WE_PAYPALPACKAGES_."` SET `title` = :title, `config` = :config, `credits` = :credits, `cost` = :cost WHERE `id` = :id";
$updatePackage = $this->we->query($query, $data);
if(!$updatePackage) throw new Exception(lang('error_247'));
}
/**
* deletePackage
*
*/
public function deletePackage() {
if(!check($this->_id)) throw new Exception(lang('error_248'));
$deletePackage = $this->we->query("DELETE FROM `"._WE_PAYPALPACKAGES_."` WHERE `id` = ?", array($this->_id));
if(!$deletePackage) throw new Exception(lang('error_249'));
}
/**
* getPackageInfo
*
*/
public function getPackageInfo() {
if(!check($this->_id)) return;
$packageInfo = $this->we->queryFetchSingle("SELECT * FROM `"._WE_PAYPALPACKAGES_."` WHERE `id` = ?", array($this->_id));
if(!is_array($packageInfo)) return;
return $packageInfo;
}
/**
* getPackagesList
*
*/
public function getPackagesList() {
$packagesList = $this->we->queryFetch("SELECT * FROM `"._WE_PAYPALPACKAGES_."` ORDER BY `id` ASC");
if(!is_array($packagesList)) return;
return $packagesList;
}
/**
* processPayment
*
*/
public function processPayment($data) {
if(!is_array($data)) return;
$custom = explode(',', $data['custom']);
if(!is_array($custom)) throw new Exception('PayPal: invalid custom data.');
$packageid = $custom[0];
if(!Validator::UnsignedNumber($packageid)) throw new Exception('PayPal: invalid package id.');
$userid = $custom[1];
if(!Validator::UnsignedNumber($userid)) throw new Exception('PayPal: invalid user id.');
// payment status
if($data['payment_status'] != 'Completed') {
$this->_processRefund($data, $userid);
return;
}
// check package
$this->setId($packageid);
$packageInfo = $this->getPackageInfo();
if(!is_array($packageInfo)) throw new Exception('PayPal: invalid package id.');
$packageCost = number_format($packageInfo['cost'], 2);
$paymentGross = number_format($data['payment_gross'], 2);
// package cost
if($packageCost != $paymentGross) throw new Exception('PayPal: payment gross and package cost don\'t match.');
// account data
$Account = new Account();
$Account->setUserid($userid);
$accountData = $Account->getAccountData();
if(!is_array($accountData)) throw new Exception(lang('error_12'));
// send credits
try {
$creditSystem = new CreditSystem();
$creditSystem->setConfigId($packageInfo['config']);
$configSettings = $creditSystem->showConfigs(true);
switch($configSettings['config_user_col_id']) {
case 'userid':
$creditSystem->setIdentifier($accountData[_CLMN_MEMBID_]);
break;
case 'username':
$creditSystem->setIdentifier($accountData[_CLMN_USERNM_]);
break;
case 'email':
$creditSystem->setIdentifier($accountData[_CLMN_EMAIL_]);
break;
default:
throw new Exception(lang('error_127'));
}
$creditSystem->addCredits($packageInfo['credits']);
} catch(Exception $ex) {
// TODO: log system
throw new Exception($ex->getMessage());
}
// save log
$this->_saveLog($data, $userid, $packageid);
}
/**
* getLogs
*
*/
public function getLogs() {
$logs = $this->we->queryFetch("SELECT * FROM `"._WE_PAYPALLOGS_."` ORDER BY `id` DESC");
if(!is_array($logs)) return;
return $logs;
}
/**
* _processRefund
*
*/
private function _processRefund($data, $userid, $packageid) {
if(!check($data)) return;
if(!check($userid)) return;
// ban account
if($this->_cfg['ban_on_refund']) {
$Account = new Account();
$Account->setUserid($userid);
$Account->blockAccount();
}
// update log
$this->_updateLog($data);
}
/**
* _saveLog
*
*/
private function _saveLog($data, $userid, $packageid) {
$logData = array(
'txnid' => $data['txn_id'],
'payeremail' => $data['payer_email'],
'userid' => $userid,
'packageid' => $packageid,
'paymentgross' => $data['payment_gross'],
'paymentdate' => $data['payment_date'],
'itemname' => $data['item_name'],
'paymentstatus' => $data['payment_status']
);
$query = "INSERT INTO `"._WE_PAYPALLOGS_."` (`txn_id`, `payer_email`, `userid`, `packageid`, `payment_gross`, `payment_date`, `item_name`, `payment_status`) VALUES (:txnid, :payeremail, :userid, :packageid, :paymentgross, :paymentdate, :itemname, :paymentstatus)";
$log = $this->we->query($query, $logData);
if(!$log) return;
}
/**
* _updateLog
*
*/
private function _updateLog($data) {
if(!check($data['parent_txn_id'])) return;
$logData = array(
'txnid' => $data['parent_txn_id'],
'paymentstatus' => $data['payment_status']
);
$query = "UPDATE `"._WE_PAYPALLOGS_."` SET `payment_status` = :paymentstatus WHERE `txn_id` = :txnid";
$log = $this->we->query($query, $logData);
if(!$log) return;
}
}
|
STACK_EDU
|
ALL >> Education >> View Article
Java Definition And Features In Detail
Here we will see in detail about the Java definition and features. In the distributed environment of the internet, the Java programming language is widely used. It is the most famous programming language used in Android Smartphone Applications and is used mostly in the edge device and internet of things development.
The look and the feel of the C++ language were designed by Java but it is quite simpler to use when compared to C++ and enforces and enforces an object-oriented programming model. Complete applications can be created by Java on a single computer or be shared among servers and clients in a network. It can be used to construct a small application module or applet for use similar to a webpage.
Elements and features of Java
There are lots of reasons behind Java's omnipresence nature. Moreover, the language's major characteristics have impacted the following components:
Programs developed in Java provide portability in a network: The source code is compiled as the Java calls the bytecode which can run wherever they want in a network on a server or client that has a Java virtual machine (JVM). The JVM provides the meaning of the bytecode that will run on computer hardware. On the contrary, most programming languages like COBOL, Visual, C++, Smalltalk or Basic, compile code into a binary file. They are platform specific therefor a program provided for an Intel-based Windows machine cannot run a Mac, a Linux-based machine or an IBM mainframe. Just-in-time (JIT) compiler is optional for JVM. There are lots of cases where JIT compilation is faster when compared to the virtual machine interpretation.
The code is robust: There are lots of programs that are composed in C++ and other languages, Java objects contain no references when compared to the data external or other similar objects. This makes sure that an instruction will not have the address of data stored in another application or in the operating system itself. The JVM makes lots of checks to make sure about integrity.
Java is an object: An object can fetch an advantage of being a part of class objects and get the code that is common to the class. Objects are considered as nouns where the user might be related to the traditional procedural verbs. A method can be considered as the object's capability or behavior. The ability to develop with language from the object-orientation has made this programming language an explicit platform to program upon.
Applet provides flexibility: Apart from being executed on the client instead of the server, a Java applet has other characteristics to run it in a very fast manner.
Java programming can be learned quickly by the developers: Java concept is very easy to learn, with a syntax similar to C++. The background of both the languages is C.
The Java applications are developed by the programmers in three different key platforms:
1) Java SE: With the help of Java Standard Edition, simple, stand-alone applications are developed. Earlier it was termed as J2SE, Java SE offers all the APIs required to create traditional desktop applications.
2) Java EE: Earlier the Java Enterprise Edition is termed as J2EE and it offers the ability to develop server-side components that can respond to web-based request-response cycle. This arrangement permits the development of Java programs that can talk communicate with the online based clients like web browsers, COBRA based clients and REST and SOAP-based web services.
3) Java ME: It also offers a lightweight platform for mobile development termed as Java Micro Edition and it was earlier called as J2ME. Java ME has proved a very famous platform for embedded device development but it had a tough time to gain attention in the smartphone development arena.
History of Java
In the year 1996, the World Wide Web and internet were emerging and on the contrary, Microsoft's flagship Windows95 operating system was not even packed with an internet browser. Java programming was not actually designed with the concept of the internet in mind.
As a result, the Java programming language showcased lots of attention to the network programming complicated task. It is always a challenge but the Java via java. ent APIs took wonderful strides to reduce the ancient onerous task of programming across a network.
The famous JavaBeans interface was enhanced in Java 1.1 in Feb 1997. The famous JavaBeans was unveiled in Java 1.1 in February 1997.
Java has unveiled lots of versions of JDK 1.2 being referred to as Java 2. Very large improvements of API collections have been done by Java 2 while Java 5 has big changes to Java Syntax with an excellent feature called Generics. The Android software developer's kit (SDK) has been unveiled for mobile device developers to compose applications for Android-based devices with the help of Java APIs.
The platform was acquired by Oracle Corp When it got Sun Microsystems in Jan 2010. The acquisition got over the release of Java 7 and Oracle scaled back few of the ambitious plans available for it.
In the month of March 2014, Java 8 was unveiled and it includes Lambda expressions which have very fewer features in competing for the languages that have been absent in Java. With Lambda expressions, developers can compose applications with the help of a functional approach as opposed to an object-oriented one.
Education Articles1. Mechanical Training In Chennai
Author: CNC Training
2. Top Reasons To Pursue Bba
Author: ASBM University
3. 4 Time-management Tips For Online Students
Author: Online Class Assist
4. 10 Awesome Ways To Engage With Parents: School View
Author: ONNE APP
5. Iso 45001 Training: Fantastic Benefits For Business | Refines The Way You Work
Author: Jones Smith
6. New Era In Marketing
Author: Yogesh Sashi
7. Feeling More Inclined To Compliance By Online Assignment Help
Author: jenny thomas
8. Good Human Resources Platforms
9. How To Become Digitally Stronger In The Indian Education Sector?
Author: Hrishikesh Deshmukh
10. Top 10 Business Schools In India
Author: Kishore sharma
11. How To Learn Digital Marketing - A Beginner’s Guide
Author: Yogesh sashi
12. The Actual Prerequisite For Machine Learning Is Not Math, It's Data Analysis
13. Pmp Certification: Superb Choice For A Career In Project Management
Author: Avan Jack
14. What Is The Scope Of Ipr Exams In India?
Author: Gargi Upadyay
15. Free Classifieds In Pune Encouraged Me To Get Graphic Design Certificate
Author: charmi patel
|
OPCFW_CODE
|
Posted 04 August 2003 - 04:39 PM
I am running 2 NIC Cards one for internet (DCHP) and one for XBOX ((Static)Crossover Cable). I have no problem using either connection, but when I go to connect to my xbox, I lose my internet connection.
Is there a setting I could change to always keep my internet connection on. I hate having to lose the connection when connecting to my xbox.
Im running WIN XP
Does anyone else have this issue...
Posted 04 August 2003 - 04:49 PM
Posted 04 August 2003 - 04:51 PM
You problem is that you are asking your PC to be a router. Without a routing protocol you can't do that. If you want to have both connections working at the same time you will need to install server or some other program that will allow you to route across the NIC interfaces.
XP is not a router by default. You could try to enable the ICS and see if that helps.
Posted 04 August 2003 - 04:52 PM
Posted 04 August 2003 - 09:21 PM
Really, if you don't know what you're talkin about STFU
Posted 04 August 2003 - 10:34 PM
Posted 05 August 2003 - 12:22 AM
scooby_dooby: well he's right but I'll elaborate on what he says in a bit..
lamt: just completly wrong. its not routing any more than any computer. Your problem is because of a routing issue... but it's easily fixed
firstname.lastname@example.org: you don't need a gateway. they are misleading... you only need a gateway when leaving your subnet which isn't happening here.
x30n: what he said works too but it will also enable your xbox to get out to the internet which you may not want to do.
bottom line is start out with what scooby said. you may also want to make the ip of the nic to the internet on a different subnet than the one connected to crossover. For example if ip of card that goes out to internet is 192.168.1.100 then make your crossover 192.168.0.2
being on diff subnets ensures that packets know how to get out to internet.
Posted 05 August 2003 - 12:41 AM
it took me a couple hours to figure out how to get everything working. And ya ICS will work, but the gateway thing is even easier. btw, I know that with my method my XBOX can still connect to the internet, since my streaming radio works.
I can guarantee this will work perfect with windows XP Pro.
Edited by scooby_dooby, 05 August 2003 - 02:35 AM.
Posted 05 August 2003 - 03:04 AM
that's why it works.
Posted 05 August 2003 - 04:20 PM
|QUOTE (mrRobinson @ Aug 4 2003, 09:04 PM)|
| yep and i bet your internet nic doesn't have an ip of 119.0.0.x|
that's why it works.
My ICS is just for someone that wants a quick configuration change with internet access for their xbox.
See I run ICS on my lap top because my wireless router is on the second floor of my home and my xbox is on the first. So it was easy for me to configure my wirless nic to ICS. It took a whole 5 seconds for everything thing to be live. The downside is, if I take my lap top somewhere then needs a hard connection, I have to place my wireless nic in and turn off ICS before I can use my hard connection again.
One last thing, I did suggest ICS because if someone knows about networking, they wouldnt be asking and the answer I feel that are being given are too advance (no matter how easy you make it sound).
0 user(s) are reading this topic
0 members, 0 guests, 0 anonymous users
|
OPCFW_CODE
|
How to deal with name confilts in T-SQL filetables via subdirectories ON INSERT?
I encountered an issue with storing files in filetables, due to them having same filenames and thus violating the unique constraint. I decided to try and work around this, by adding new trigger into database ON INSERT, that would create new randomly named subdirectory, and then set it as a parent path.
I have found some examples of how this can be done by updating the row after writing, effectively moving the file. Is it possible to do this during the insert? I don't want to risk a situation where by any chance two files would attempt to use the same directory before moving into their own (since on files with same names, this causes a crash).
Basically and a BEFORE INSERT trigger that would:
INSERT statement
TRIGGER
create subdirectory
cset new parent_path
END
finish inserting
I know it can't be done by simply adding the parent_path_locator as it is computed value.
This code doesn't work. It was my first attempt (which i now know cant work due to computed property) and describes what i would like to do.
DECLARE @val table (id hierarchyid);
DECLARE @val2 hierarchyid;
INSERT INTO [fulladb].[mock___filetable]
([name]
,[is_directory]
,[is_archive]
)
OUTPUT inserted.path_locator INTO @val
VALUES
('ducken', 1, 0);
SELECT @val2 = id from @val;
INSERT INTO [fulladb].[mock___filetable]
([name]
, [parent_path_locator]
,[is_directory]
,[is_archive]
)
VALUES
('ducken_inner', @val2,1, 0)
GO
FWIW: When I've needed to keep track of files in a database I've used a surrogate key, e.g. an identity column, to generate the name of the stored file while the table keeps the original filename. The files are then distributed into directories with no more than 1000 files each to avoid performance problems. (Easily done if you format Id % 1000 for the directory and Id / 1000 for the name with leading zeros.) Name collisions become an application issue, e.g. handling revisions of documents.
Not entirely clear what your use case is but file system is essentially a database. Simplest of its kind where keys are paths and values are byte strings.So we are talking synchronization issue between two databases. Let alone usability of triggers which may be super hard to test in concurrent world. Your friends are now, one writer/multiple readers or transactions or locking or, the best one, moving this entire logic away from database to server business logic.
|
STACK_EXCHANGE
|
MATLAB Invalid MEX File
So I'm running software I downloaded for analyzing the position of fluorescent proteins in microscopy images. The software is called plusTipTracker, and it runs off MATLAB.
So the first function (detecting 'spots' in the images) works fine, but the second function ("track spots") fails. In particular, it seems to be an error with a MEX file:
??? Invalid MEX-file
'/Users/ethanbuchman/Documents/MATLAB/plusTipTracker_1pt1pt3_2012-07-07/software/createDistanceMatrix.mexmaci64':
dlopen(/Users/ethanbuchman/Documents/MATLAB/plusTipTracker_1pt1pt3_2012-07-07/software/createDistanceMatrix.mexmaci64,
1): no suitable image found. Did find:
/Users/ethanbuchman/Documents/MATLAB/plusTipTracker_1pt1pt3_2012-07-07/software/createDistanceMatrix.mexmaci64:
unknown required load command 0x80000022.
I can locate this file in the software folder i downloaded. There are actually multiple versions, each with a different extension (eg. .mexa64, .mexmaci, .mexmaci64, etc.). There's also a .dll file. While there are other mex files in the folder, each with multiple extensions, none of the others have an associated dll file. Not sure if that's relevant.
But I have no idea what to do about this.
Im on MacOSX 10.5.8 using Matlab R2010b.
Any insight would be greatly appreciated. Thanks.
I assume you're using 64bit matlab and Mac OSX?
From what I'm reading here, it seems like the MEX file was compiled for a different version of Mac OS. Can you recompile the MEX files and DLL on your own system?
that sounds right. how can i go about that? i dont have the original source code, only the mex file.
You could try moving all the .mex*64 files out of your path so that matlab will run the 32-bit versions... Some of what I'm reading says that the 10.6 specific features should only be used by 64-bit binaries.
ok. i tried movin the file (.mexmaci64) out of the path. but now I just get:??? Undefined function or method 'createDistanceMatrix' for input arguments of type 'double'.
Unless you feel like upgrading your OS, the only other thing I can think of is to email the authors and request either the source or a version for 10.5. Sorry I can't be more help.
mmmm ok. i was afraid that might be the case. thanks for your help.
|
STACK_EXCHANGE
|
I am currently trying to finish up a DIY project I've been working on for a while: Building an electric potters wheel. I acquired a treadmill and salvaged quite a few good parts from it, and thought this would be a great way to put it to use. Unfortunately when I got everything together, and after a few tests I committed a terrible crime: I dropped a washer on my MC-60 motor controller board. As you guessed, on my next start-up there was magic smoke everywhere.
So I have all the physical assembly completed, and a good condition treadmill motor without power supply. So I'm venturing into salvaging and building my own, not spending $50-$100 on a new board if I can help it.
I have been doing quite a bit of reading over the past few weeks and came to the conclusion that PWM supply offers the most consistent torque, and this was what I was concerned about. I don't want the thing to be choppy and low torque, so I'm going to do some physical reductions with pulleys/gears to be able to maintain a decent amount of torque on the motor.
I've completed my PWM circuit with the quick help of Netduino isolated with Opto to drive mosfets. Everything seems to be fine, yes I'll have to adjust components to varying voltages as time moves forward.
I know I don't have to drive this thing with full 90vdc to get the small amount of toque/rpm I need correct? I mean it's a pottery wheel, not a lathe. I figured I might need 30-40vdc max, or is this a bad assumption? Will this drastically lower my torque to an unusable level? I really would like to avoid dealing with 90vdc PWM, that seems like way overkill.
My theory on the power supply is I could simply use a transformer to reduce a 110vac voltage to say ~50vac (or whatever necessary) and then rectify and smooth this resulting DC voltage to a usable limit. Then drive this through the MOSFET, to the "clutch" (which helps further smooth any ripple), to the motor. Am I headed in the right direction or am I under-thinking something here? I just really don't want to waste anymore money on components until I can be certain it's the correct way to go. I also don't want to waste money on a control board that is complete overkill for my needs. All I need is simple on/off with a little bit of speed control, nothing really specific.
Thanks for any help.
Motor Specs: Permanent Magnet DC Motor Electrical Rating: @130vdc 2.5hp 6700rpm 18amps Continuous Duty @95vdc 1.5 hp
Just realized it would probably be better to maintain a higher voltage to keep as much torque as I can, and adjust the duty cycle. Rather than adjusting the voltage to say 60vdc and then PWM that..
|
OPCFW_CODE
|
It looks like you're using an Ad Blocker.
Please white-list or disable AboveTopSecret.com in your ad-blocking tool.
Some features of ATS will be disabled while you continue to use an ad-blocker.
Originally posted by frazzle
That would depend entirely on the other cultures in the cosmos. And of course there's no saying whether any of those other cultures would welcome mine with open arms, either.
Originally posted by Christosterone
Originally posted by N3v3rmor3
i would come on ats, and say hello, my name is liz nice to meet you. if you need more info look at my bio!!!!!! Lmao....
though i cannot prove i am an alien... the earth sure proves not to agree with my kind.
that or i would become a cat. if you notice by petting their heads just right and folding back the ears they have the ultimate alien appearance, the only difference is they are not green.... they are little, can... when they want to walk on two legs and are smart enough to look down upon humans. lol
My avatar gives away my status as a cat person...
------->One of my cats has green eyes so she may be part alien..although she is pretty fat these days so is no longer 'little'...
I can see this is the beginning of a beautiful friendship between two people who believe their cats are better than humans
Originally posted by Unity_99
Time is a program in our heads and the universe is a multiverse, extremely compartmentalized, different schools on different levels, and the stars are the clocks, and each body has its own time realm, so we don't see alot unless its a close match for time and frequency, depending on who is there.
So if you program a bot to experience your day in 15 minutes, you would disappear and he would be in his own channel. Though you could see.
Higher ups would see us.
But to them, we would be standing still, or very slow mode, depending on their level.
Time is not consistent, and from a Higher Perspective, which is also a measurement in infinity and therefore a perception as well, but we use them, they would be reading this anime, or magazine story. They could code and make changes as well, (metaphor metaphor metaphor), even take the old dusty dvd off the shelf and select the right clip to join. (metaphor metaphor metaphor). Since there really is no time for its also a measurement of infinity, they could do outreach to help someone, say be right back, and spend ages with their loved ones, then locate the clip and portal in.
Originally posted by Rathyas
I would send vast fleets of highly advanced spacecraft around the universe.
Originally posted by N3v3rmor3
reply to post by Christosterone
sweet! glad to be your friend! i love having more friends
i dont have many either... i kinda keep away from peeple
Originally posted by Scramjet76
I would use the power of the mind since mind is inherent in every electron and electrons exist everywhere.
Originally posted by Eaglecall
I would just befriend the other species on facebook to let them know about my existence.
On a serious note, I wouldn't do any campaign to let others know about me. What would be the purpose for it? Instead I would just travel colonizing other systems, observing primitive races without interfering in their own home planet.
Originally posted by holton0289
you would have to be able to think like one to answer this question.
But a guess, and a loose theory, would be to manipulate gravity somehow. Once your civ is advanced enough to work with gravity, it opens up a lot of possibilities.
If you could somehow build a craft that has an engine that creates a point of high gravity ahead of the craft, you could ride the gravity wave or disturbance and travel massive distances. Time and space would matter little if at all.
|
OPCFW_CODE
|
- Azure Sentinel—A real-world example - Tue, Oct 12 2021
- Deploying Windows Hello for Business - Wed, Aug 4 2021
- Azure Purview: Data governance for on-premises, multicloud, and SaaS data - Wed, Feb 17 2021
Of these three, DPM has some interesting improvements in this release, whereas the others received very little TLC. The basics are covered. They can all now run on Windows Server 2012 R2, with SQL 2012 as the backend, and they support Windows 8.1 or Windows 2012 R2 as clients where applicable.
Data Protection Manager
In the move to make sure all products in the System Center suite can be virtualized, the fact that DPM can run as a VM is probably the biggest new feature in R2. You can run DPM in a VM in production (which hasn’t been supported until now), and you can store backup data on VHD storage pool disks through the VMM library.
If you want to virtualize DPM, be aware that if you back up to tape today, the only drive type that can be connected to a VM is an iSCSI drive (and you need a dedicated NIC for that connection). Also, if you’re planning to store backup data on VHD files, be aware of all the limitations: in short, no VHD on storage spaces, no disk deduplication, and no BitLocker or NTFS compression. The sentence indicating that “performance can suffer in scaled up environments using VHDX files compared to SAN” in the same TechNet article really doesn’t create confidence.
A genuinely useful new feature is Linux VM backups, which can now be done while the VM is running, whereas previous versions would pause the VM (briefly) while creating the snapshot. Be aware that this is a file-consistent—not an application-consistent—backup, as there are no VSS writers for Linux applications.
Planning your Protection Groups and schedules is a key part to maximize the benefit of DPM.
Another thing to take into account is your domain/forest structure. If you have a separate forest for hosts versus VMs, setting up DPM to back up both environments becomes a tricky proposition. Mostly, you’ll want to back up from the host side, since it’s cheaper to have a DPM agent on the host compared to having one in each VM, but there are cases such as SQL Server and Exchange where only an agent in the VM gives you the full experience for restores.
The backend of DPM has received some attention. Now, there’s a SQL database for each DPM server, making it easier to spread the load across multiple servers. DPM also supports SQL clusters, for the first time, which should improve reliability. However, it only stores metadata and indexes in the database; the actual backed-up data is stored on disks or tapes.
The extra flexibility around the backend database for DPM is welcome.
Not strictly new in R2 (as it was added in 2012 SP1) is the ability to back up to Azure. Unlike a vanilla Windows Server backup, this lets DPM back up Hyper-V VMs and SQL databases (but still not System state or Exchange).
I really think Microsoft hasn’t figured out their “self-service console” story. There has been a new console in almost every version of Virtual Machine Manager, followed by the System Center 2012 release where App Controller was poised to take over. The unique point to App Controller is that it can connect to multiple Virtual Machine manager private, on-premises, clouds as well as one or more Azure subscriptions. Furthermore, it can connect to third-party hosted clouds that use the free Service Provider Foundation (SPF). So, you can manage your on-premises VMs, third-party clouds, and Azure resources in one web-based console.
But with the 2012 R2 release, Microsoft provided Windows Azure Pack, which is another console for self-service provisioning. Built on the old (since the new Azure DevOps preview portal was announced at Build 2014) console that Microsoft uses in Azure, this allows you to offer self-service provisioning of VMs, networks, and databases on your on-premises infrastructure only.
So where does that leave App Controller in 2012 R2? Well, the only new feature is that it can connect to Virtual Machine Manager 2012 R2 (and that’s the only version it can connect to). That’s it.
My suggestion for Microsoft? Two options. One is to expand App Controller to do the same as the Azure Pack for on-premises as well as connect to Amazon Web Services, so ALL public and private cloud resources can be managed from one console. Or, ditch App Controller and use the Azure Pack to manage both on-premises and Azure; after all, the console was originally built for Azure, so it shouldn’t be hard.
Whichever you select, please stick with one or the other, not another self-service console in SC vNext.
This is another product that seems to have fallen between the chairs a bit. Introduced in SC 2012 as a way to visually automate IT tasks, and built on the Opalis acquisition, it offered a great way to build runbooks. But then Service Management Automation (SMA) was released, which is the backend for the Azure Pack—not Orchestrator (although the two work together).
That seems to be reflected in the lack of new features in the R2 release. The only new features are that you can create runbook workers for the Azure Pack, a new Integration Pack (IP) for SharePoint and you will see some updates to the Orchestrator IP and the Virtual Machine Manager IP.
If you have Windows 2012 R2/8.1 in production, upgrading to System Center 2012 R2 makes sense; just don’t expect any big surprises in the three products covered here. I really like DPM, but, to be taken seriously in the Enterprise space, it needs built-in data deduplication (not just the ability to back up deduped data but to dedupe the data it’s storing).
And Orchestrator is still an excellent product that’s fun to play with. I just wish Microsoft had added some great new features.
Want to write for 4sysops? We are looking for new authors.
hie, i have a client thats using DPM 2012 R2 and they are having issues maybe you can be able to help me with steps to archieve this, their storage has run out of space and recovery points cannot be created anymore.
I manually deleted old recovery points and tried pruning shadow copies, after that i manually modified disk allocations and restarted the server but there is no change whatsoever. i used a Microsoft script that calls out diskpart later on but still it didn’t help.
changed short term recovery points by halved, and more than 3 days later we are still not able to see free space on the drive. please help
Unfortunately I’m not sure I can be much help here, the last DPM server I had in production was running 2010 and when we ran out of disk space we added another disk. You can look here on TechNet http://technet.microsoft.com/en-us/library/jj642912.aspx. I would also try the forums at http://www.systemcentercentral.com.
|
OPCFW_CODE
|
import openml
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
def openmlwrapper(data_id=31, random_state=1, n_samples = 2000, verbose=True, scale=True, test_size=0.25):
"""
Wrapper for preprocessing OpenML datasets. Train/test split (75/25) and fill missing values with median of
training set.
Optional: scale data through normalization (subtract mean, divide by standard deviation).
Parameters
----------
data_id : int
openml dataset id
random_state : int
random state of the train test split
n_samples : int
number of samples from the data that will be returned
Returns
-------
data_dict : dict
Dictionary with data, including: X_train, X_test, y_train, y_test, X_train_decoded (original feature values),
X_test_decoded (original feature values)
"""
dataset = openml.datasets.get_dataset(data_id)
X, y, cat, att = dataset.get_data(target = dataset.default_target_attribute,
return_categorical_indicator=True,
return_attribute_names=True)
print('Start preprocessing...')
# Sample at most n_samples samples
if len(X) > n_samples:
prng = np.random.RandomState(seed=1)
rows = prng.randint(0, high=len(X), size=n_samples)
X = X[rows, :]
y = y[rows]
if verbose:
print("...Sampled %s samples from dataset %s." % (n_samples, data_id))
else:
if verbose:
print("...Used all %s samples from dataset %s." % (len(X), data_id))
# Split data in train and test
X_train, X_test, y_train, y_test = train_test_split(pd.DataFrame(X, columns=att),
pd.DataFrame(y, columns=['class']),
random_state = random_state,
test_size=test_size)
# Fill missing values with median of X_train
X_train = X_train.fillna(X_train.median())
X_test = X_test.fillna(X_train.median())
if verbose:
print('...Filled missing values.')
# Create decoded version with original feature values for visualizations
X_train_decoded = X_train.copy()
X_test_decoded = X_test.copy()
for f in att:
labels = dataset.retrieve_class_labels(target_name=f)
if labels != 'NUMERIC':
labels_dict = {i : l for i,l in zip(range(len(labels)), labels)}
else:
labels_dict = {}
X_test_decoded[f] = X_test_decoded[f].replace(labels_dict)
X_train_decoded[f] = X_train_decoded[f].replace(labels_dict)
if verbose:
print('...Decoded to original feature values.')
# Scale data
if scale:
scaler = StandardScaler()
scaler.fit(X_train)
X_train = pd.DataFrame(scaler.transform(X_train), columns=list(X_train))
X_test = pd.DataFrame(scaler.transform(X_test), columns=list(X_test))
if verbose:
print('...Scaled data.')
print('Preprocessing done.')
return {'X_train' : X_train,
'X_test' : X_test,
'y_train' : y_train,
'y_test' : y_test,
'X_train_decoded' : X_train_decoded,
'X_test_decoded' : X_test_decoded}
def plot_roc(y, y_score, label, max_fpr, xlim, mln = True):
"""
Plot de ROC curve up to a particular maximum false positive rate.
Parameters
----------
y : array like [n_observations]
true classes
y_score : array like [n_observations]
classification probabilities
label : string
dataset name
max_fpr : numerical
maximum false positive rate
xlim : numerical
limit of plot on x axis
mln : Boolean
display FPR per million
Returns
-------
fpr : array
fp rates
tpr : array
tp rates
thresholds : array
prediction thresholds
"""
ax = plt.axes()
fpr, tpr, thresholds = roc_curve(y, y_score, drop_intermediate=False)
plt.plot([0, 1], [0, 1], '--', linewidth=1, color='0.25')
plt.plot(fpr, tpr, label = 'Classifier')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC curve %s data' % label)
if xlim:
if mln:
plt.plot([max_fpr,max_fpr], [0, 1], 'r--', linewidth=1, label = 'FPR $\leq %.f*10^{-6}$'%(max_fpr*10**6))
labels = plt.xticks()
ax.set_xticklabels(['%.0f' %(i*10**6) for i in labels[0]])
plt.xlabel('False Positive Rate (per 1 mln)')
else:
plt.plot([max_fpr, max_fpr], [0, 1], 'r--', linewidth=1, label = 'FPR $\leq %.2f$'%(max_fpr))
plt.xlabel('False Positive Rate')
plt.xlim(-.000001,xlim+.000001)
plt.legend()
plt.show()
return fpr, tpr, thresholds
|
STACK_EDU
|
How can I set up a configuration file for .NET console applications?
Is it possible to use a ".net configuration" file for a .NET console application?
I'm looking for an equivalent to web.config, but specifically for console applications...
I can certainly roll my own, but If I can use .NET's built in configuration reader then I would like to do that...I really just need to store a connection string...
Thanks
Yes - use app.config.
Exactly the same syntax, options, etc. as web.config, but for console and WinForms applications.
To add one to your project, right-click the project in Solution Explorer, Add..., New Item... and pick "Application Configuration File" from the Templates box.
At least for me in VS2008 it doesn't deploy it (as @Greg Ogle says in his answer) if I name it anything but exactly "app.config". Thought it might be worth mentioning, since it bit me.
app.config... If you have an App.config in your project, it will get copied as executableName.exe.config in the case of a console application.
+1. Don't know why somebody voted this down - it's absolutely right.
This might help to some people dealing with Settings.settings and App.config: Watch out for GenerateDefaultValueInCode attribute in the Properties pane while editing any of the value (rows) in the Settings.settings grid in Visual Studio (VS2008 in my case). If you set GenerateDefaultValueInCode to True (True is the default here!), the default value is compiled into the exe (or dll), you can find it embeded in the file when you open it in a plain text editor. I was working on a console application and if I had defaults in the exe, the application always ignored the config file placed in the same directory! Quite a nightmare and no information about this on the whole internet.
Yes. Look up "application configuration file" in the documentation.
Yes, it's possible. You just need to make an app.config file.
SInce I haven't fully made the leap to TDD yet (though I hope to on some upcoming project) I use a console app to test my library code that I produce for another web developer in our company to use.
I use app.config for all of those settings, and as @Dylan says above the syntax is exactly the same between that and web.config, which means I can also just hand the content of my app.config over to the other dev and he can put them directly in his web.config. Very handy.
I asked almost the same question some days ago and got really good answers, take a look:
Simplest way to have a configuration file in a Windows Forms C# Application
|
STACK_EXCHANGE
|
Someone mentioned somewhere recently, but I can’t remember where. Ideally, I need somewhere that will support SVG’s, but PNG would be an acceptable alternative. I would use Flickr, but I believe it converts things to JPG, and I need the transparency. Can anyone suggest a suitable host?
Personally, for CodePen I would either grab the raw SVG code, or for a PNG, I’d use a data URI. That way you can keep everything in one place.
Of course, for a fairly low price you can have a Pro account at CodePen, and thus host your images right there.
I must admit, I didn’t know you could do that to be honest. I’ve only ever looked at them as if they were another image format. How do I access the code?
There are various ways, but in my experience, the simplest way is to drag the SVG image onto a new tab in Chrome. That loads the image in the browser. Then go to Inspect element, and you’ll see the raw code in the Inspector. Right-click on the
svg element in the Inspector, choose Edit as HTML, and copy the entire SVG code. (You can also just view source to get the code.)
From there, presumably, it’s just a case of pasting it into the relevant position within your HTML document and style, if required, in the CSS.
Once you have that code, you can paste it into your HTML, and it will appear as is.
If you want to use it in your CSS, or inside an
img element, for example, then you need a “data URI”, which is different. This also works for other image formats, too.
Again, there are many tools, but I like to use Chrome. So—
drag the image (be it an SVG file or PNG or whatever) into a new tab in Chrome, and it should display on screen.
right click on the image, and choose Inspect Element.
Right click on the image’s URL in the Inspector, and choose Open Link in Resources Panel:
in the Resources panel, right-click on the image and choose Copy Image as Data URL:
The copied code is a long string of gibberish, starting with something like this:
data:image/png;base64,. You can then use that code in several ways:
in your HTML, as the
srcof an image: <img src=“data:image/png;base64,iVBO…ltkg”>
in your CSS, like so: background-image: url(data:image/png;base64,iVBO…ltkg);
Hope that helps!
Yep, I’m good with all that and I’ve now dropped what I need into a new Codepen. You can anticipate an entirely new question in ‘HTML & CSS’ very shortly.
As an aside, my usual work provided IE11 wouldn’t let me do the right-click, edit HTML thing. Had to jump assorted hoops before I got there, but I did in the end.
I guess it’s easier to use online tools like these:
I prefer to do without extra tools if possible, which is why I like the Chrome option. But it is a bit awkward.
This topic was automatically closed 91 days after the last reply. New replies are no longer allowed.
I’ve struggled with this issue a bit myself until I figured you can host SVG as linkable Gists on Github. You’ll need to use RawGit.com to get the mimetype served correctly, but they’ll even serve your SVG via a production-capacity CDN for free.
It’s fairly straight forward to work out, but I’ve written up the detail here:
|
OPCFW_CODE
|
/* See LICENSE file for copyright and license details. */
#include "graphicsbezieredge.hpp"
#include <QPoint>
#include <utility>
#include <algorithm>
#include <QPainter>
#include <QGraphicsSceneMouseEvent>
#include <QGraphicsDropShadowEffect>
#include <iostream>
#include "graphicsnode.hpp"
#include "graphicsnodesocket.hpp"
GraphicsDirectedEdge::
GraphicsDirectedEdge(QPoint start, QPoint stop, qreal factor)
: _pen(QColor("#00FF00"))
, _effect(new QGraphicsDropShadowEffect())
, _start(start)
, _stop(stop)
, _factor(factor)
{
_pen.setWidth(2);
setZValue(-1);
_effect->setBlurRadius(15.0);
_effect->setColor(QColor("#99050505"));
setGraphicsEffect(_effect);
}
GraphicsDirectedEdge::
GraphicsDirectedEdge(QPointF start, QPointF stop, qreal factor)
: GraphicsDirectedEdge(start.toPoint(), stop.toPoint(), factor) {}
GraphicsDirectedEdge::
GraphicsDirectedEdge(int x0, int y0, int x1, int y1, qreal factor)
: GraphicsDirectedEdge(QPoint(x0, y0), QPoint(x1, y1), factor) {}
GraphicsDirectedEdge::
GraphicsDirectedEdge(qreal factor)
: GraphicsDirectedEdge(0, 0, 0, 0, factor) {}
GraphicsDirectedEdge::
GraphicsDirectedEdge(GraphicsNode *n1, int sourceid, GraphicsNode *n2, int sinkid, qreal factor)
: GraphicsDirectedEdge(0, 0, 0, 0, factor)
{
connect(n1, sourceid, n2, sinkid);
}
GraphicsDirectedEdge::
GraphicsDirectedEdge(GraphicsNodeSocket *source, GraphicsNodeSocket *sink, qreal factor)
: GraphicsDirectedEdge(0, 0, 0, 0, factor)
{
connect(source, sink);
}
GraphicsDirectedEdge::
~GraphicsDirectedEdge()
{
delete _effect;
}
void GraphicsDirectedEdge::
mousePressEvent(QGraphicsSceneMouseEvent *event) {
QGraphicsPathItem::mousePressEvent(event);
}
void GraphicsDirectedEdge::
set_start(int x0, int y0)
{
set_start(QPoint(x0, y0));
}
void GraphicsDirectedEdge::
set_stop(int x1, int y1)
{
set_stop(QPoint(x1, y1));
}
void GraphicsDirectedEdge::
set_start(QPointF p)
{
set_start(p.toPoint());
}
void GraphicsDirectedEdge::
set_stop(QPointF p)
{
set_stop(p.toPoint());
}
void GraphicsDirectedEdge::
set_start(QPoint p)
{
_start = p;
this->update_path();
}
void GraphicsDirectedEdge::
set_stop(QPoint p)
{
_stop = p;
update_path();
}
void GraphicsDirectedEdge::
connect(GraphicsNode *n1, int sourceid, GraphicsNode *n2, int sinkid)
{
n1->connect_source(sourceid, this);
n2->connect_sink(sinkid, this);
_source = n1->get_source_socket(sourceid);
_sink = n2->get_sink_socket(sinkid);
}
void GraphicsDirectedEdge::
connect(GraphicsNodeSocket *source, GraphicsNodeSocket *sink)
{
source->set_edge(this);
sink->set_edge(this);
_source = source;
_sink = sink;
}
void GraphicsDirectedEdge::
disconnect()
{
if (_source) _source->set_edge(nullptr);
if (_sink) _sink->set_edge(nullptr);
}
void GraphicsDirectedEdge::
disconnect_sink()
{
if (_sink) _sink->set_edge(nullptr);
}
void GraphicsDirectedEdge::
disconnect_source()
{
if (_source) _source->set_edge(nullptr);
}
void GraphicsDirectedEdge::
connect_sink(GraphicsNodeSocket *sink)
{
if (_sink) _sink->set_edge(nullptr);
_sink = sink;
if (_sink) _sink->set_edge(this);
}
void GraphicsDirectedEdge::
connect_source(GraphicsNodeSocket *source)
{
if (_source) _source->set_edge(nullptr);
_source = source;
if (_source) _source->set_edge(this);
}
void GraphicsBezierEdge::
update_path() {
QPoint c1, c2;
QPainterPath path(_start);
// compute anchor point offsets
const qreal min_dist = 0.f;
// const qreal max_dist = 250.f;
qreal dist = 0;
if (_start.x() <= _stop.x()) {
dist = std::max(min_dist, (_stop.x() - _start.x()) * _factor);
} else {
dist = std::max(min_dist, (_start.x() - _stop.x()) * _factor);
}
// dist = std::min(dist, max_dist);
c1.setX(_start.x() + dist);
c1.setY(_start.y());
c2.setX(_stop.x() - dist);
c2.setY(_stop.y());
path.cubicTo(c1, c2, _stop);
setPath(path);
}
void GraphicsBezierEdge::
paint(QPainter * painter, const QStyleOptionGraphicsItem * /*option*/, QWidget * /*widget*/) {
painter->setPen(_pen);
painter->drawPath(path());
}
|
STACK_EDU
|
The Statistical Genetics and Genetic Epidemiology Lab has created an array of innovative software for the analysis of complex genetic mechanisms and genetic epidemiology.
Each software package is publicly available for use in biomedical research. The software incorporates Mayo Clinic's quantitative methods and includes well-documented procedures and examples for use.
The armitage R package performs the Armitage trend test to evaluate the association of a trait with SNP genotype predictors given a dose vector of length 3. (Software updated October 2015.)
CAVIARBF is a fine-mapping tool for identifying potential causal variants in a region where it's assumed that causal variants exist in the data.
CAVIARBF can be used to prioritize potential causal variants for follow-up functional analysis after performing genome-wide association studies. It uses an approximate Bayesian method and can deal with multiple causal variants.
One output is the marginal posterior probability of each variant being causal. The input requires only the marginal test statistics and correlations among variants, so it's also useful for analyzing meta-analysis results.
CAVIARBF is implemented in C++. (Software updated October 2015.)
See: Chen W, Larrabbe BR, Ovsyannikova IG, Kennedy RB, Haralambieva IH, Poland GA, Schaid DJ. Fine mapping causal variants with an approximate Bayesian method using marginal test statistics. Genetics. 2015;200:719.
The GeneSetScan software offers a general approach to scan genome-wide SNP data for gene-set association analyses.
The test statistic for a gene set is based on score statistics for generalized linear models and takes advantage of the directed acyclic graph structure of the gene ontology to create gene sets. The method can use other gene-set structures, such as the Kyoto Encyclopedia of Genes and Genomes (KEGG), or even user-defined sets.
The approach of Dr. Schaid's Statistical Genetics and Genetic Epidemiology Lab combines SNPs into genes, and genes into gene sets, but ensures that positive and negative effects on a trait do not cancel. To control for multiple testing of many gene sets, the lab uses an efficient computational strategy that accounts for linkage disequilibrium and correlations among genes and gene sets, and provides accurate step-down adjusted p values for each gene set. (Software updated October 2014.)
See: Schaid DJ, Sinnwell JP, Jenkins GD, McDonnell SK, Ingle JN, Kubo M, Goss PE, Costantino JP, Wickerham DL, Weinshilboum RM. Using the gene ontology to scan multilevel gene sets for associations in genome wide association studies. Genetic Epidemiology. 2012;36:3.
The haplo.stats package is a suite of R routines for the analysis of indirectly measured haplotypes.
The statistical methods assume that all subjects are unrelated and that haplotypes are ambiguous (because of unknown linkage phase of the genetic markers). The genetic markers are assumed to be codominant (that is, 1-to-1 correspondence between their genotypes and their phenotypes). (Software updated August 2015.)
The hwe R package allows users to test the fit of genotype frequencies to Hardy-Weinberg equilibrium proportions for autosomes and the X chromosome.
Different statistical tests are provided, along with an option to evaluate statistical significance by either exact methods or simulations. README and R source package are provided. (Software updated February 2011.)
The hweStrata program calculates an exact stratified test for HWE for diallelic markers, such as single nucleotide polymorphisms (SNPs), exact tests for HWE within each stratum, and an exact test for homogeneity of Hardy-Weinberg disequilibrium. An update for version 1.0 verifies if the exact test for homogeneity can be computed; if not, the program calculates the p value using an asymptotic test.
The hweStrata software is written in the C programming language and is available as executable for Linux x_86_64 and Solaris, in addition to the source code. (Software updated May 2011.)
See: Schaid DJ, Batzler AJ, Jenkins GD, Hildebrandt MAT. Exact tests of Hardy-Weinberg equilibrium and homogeneity of disequilibrium across strata. American Journal of Human Genetics. 2006;79:1071.
The ibdreg package is written for S-PLUS and R to test genetic linkage with covariates by regression methods with response IBD sharing for relative pairs. It accounts for correlations of IBD statistics and covariates for relative pairs within the same pedigree.
See: Schaid DJ, Sinnwell JP, Thibodeau SN. Robust multipoint identity-by-descent mapping for affected relative pairs. American Journal of Human Genetics. 2005;76:128.
README and package sources are provided. (Software updated December 2006.)
The kinship2 R package contains routines to handle family data with a pedigree object. The primary methods include the creation of pedigrees, plotting, trimming and the calculation of kinship matrices. (Software updated July 2015.)
See: Sinnwell JP, Therneau TM, Schaid DJ. The kinship2 R package for pedigree data. Human Heredity. 2014;78:91.
The ld.pairs R package contains a method to compute composite measures of linkage disequilibrium, their variances and covariances, and statistical tests for all pairs of alleles from two loci when linkage phase is unknown. It is an extension of Weir and Cockerham (1989) to apply to multiallelic loci. README and package source are provided. (Software updated October 2015.)
See: Schaid DJ. Linkage disequilibrium testing when linkage phase is unknown. Genetics. 2004;166:505.
The mend.err R package checks pedigrees for mendelian errors and, when errors are found, systematically jackknifes every typed pedigree member to determine if eliminating this member will remove all mendelian errors from the pedigree. (Software updated February 2011.)
PedBLIMP is a tool for genotype imputation of individuals with pedigree information.
PedBLIMP uses both relatedness information between family members and correlations among genotypes from an input reference panel. Family members can have completely missing genotypes or very different density of genotype markers. It is implemented in R but runs as a command line tool with Rscript. (Software updated October 2015.)
See: Chen W, Schaid DJ. PedBLIMP: Extending linear predictors to impute genotypes in pedigrees. Genetic Epidemiology. 2014;38:531.
The pedgene R package performs gene-level kernel and burden association tests for genetic variants with disease status and continuous traits for pedigree data and unrelated subjects. (Software updated July 2015.)
See: Schaid DJ, McDonnell SK, Sinnwell JP, Thibodeau SN. Multiple genetic variant association testing by collapsing and kernel methods with pedigree or population structured data. Genetic Epidemiology. 2013;37:409.
regmed: Regularized Mediation Analysis
Mediation analysis for multiple mediators by penalized structural equation models with different types of penalties depending on whether there are multiple mediators and only one exposure and one outcome variable (using sparse group lasso) or multiple exposures, multiple mediators, and multiple outcome variables (using lasso, L1, penalties).
|
OPCFW_CODE
|
In the ever-evolving landscape of enterprise search, maintaining optimal relevance in search results is a perpetual challenge. In fact, the delicate balance of relevance often teeters on the edge of uncertainty as organizations regularly update their data. An intricate web of parameters, filters, and algorithms dictates what users see in their search results, and those dynamics demand a strategic approach. The goal of that approach is to ensure updates do not inadvertently disrupt the delicate equilibrium of relevance. Accordingly, ensuring any changes made to alter search relevancy are moving the needle in the right direction is imperative.
Search relevance is a measure of how effectively a search system aligns with the user’s intentions and expectations. As the enterprise Search space has evolved, locating the correct documents has become easier, but arranging them in a meaningful sequence remains a formidable challenge.
Many factors influence relevancy in an enterprise search solution. As data is added/removed or organizational search strategy changes, how can relevancy be measured to ensure it stays, well, relevant?
In an ideal world, you would leverage users to identify “good” and “bad” search results and then tweak your search engine. However, this strategy may not be possible in the world of enterprise search due to budget or resource constraints. This raises a question: how can relevancy changes be tracked automatically? This blog presents one of many possible ways to track relevancy changes in an enterprise search solution. After the initial setup, this solution can largely be automated.
Pick the top 100 (or 1,000 or more!) of the most popular terms of your search application. Ideally, these terms should be extracted from an analytics system already in place in the organization, such as Google Analytics.
For each term in the list, figure out what a “good” set of results looks like. This task can be daunting, but it doesn’t need to be. As a starting point, take the current result set from your system for each term. Then, over time, this result set can be updated a few terms at a time. This list is your “ideal” result set — given a search term, this result set tells you information about which documents should be included and in what order.
The list can be stored in a database or simply as a CSV. Here is an example of what a CSV might look like for a system where the top 5 results matter.
|another thing user searches for
For the search term homer simpson, results must be in the following order: id-123, id-423, id-391, id-508, id-185.
Your organizational needs will ultimately determine which information is important. For example, if search strategy dictates that there is some flexibility with the order of search results, then the positions can be stored in the CSV. Here’s what that might look like.
|another thing user searches for
For the term homer simpson, document with ID id-123 must either be the first or second result.
As noted previously, this list will never be complete. It will evolve alongside organizational needs. With time, terms will added/removed and results for existing term(s) will be adjusted.
Build a tool in the language of your choice to compare the “ideal” list with the actual results from your search system.
This tool will need to do the following:
- Read the existing “ideal” list created in Step 2 above
- Query the search engine to record actual results from each term from the list
- Compare the actual vs ideal for each term
- Save the results in the target repo of your choice, which not only simplifies the evaluation process but also helps identify trends over time
For simplicity, having a formula for the comparison is beneficial. The end goal is to have a number that shows you whether a search term’s relevancy improved or worsened. In a solution with a flexible order of results, here’s what that formula might look like:
For each search result within a search term,
- if it is within the “ideal” range, give it a score of 0
- if it is higher or lower than it should be, find the difference in position
Add scores for each result. The closer this number is to 0, the closer the results are to the “ideal” result.
The tool is now built. Next, let’s look at how to use it.
Put It All Together
Every time the tool is run, you get a snapshot of how well the system is performing in terms of relevancy. Organizational goals, however, will ultimately dictate how often the tool is run. Perhaps more importantly, the tool enables you to test impacts to relevancy when making changes!
Let’s look at a factitious example.
Organizational search strategy now dictates that fieldA is the most important field in the data corpus. Accordingly, a boost rule is to be added that increases the weight of fieldA. Thanks to the tool, you can now understand how such a change will impact search relevancy.
- Run the tool to get current relevancy in the system
- Add the boost rule to the search application
- Run the tool again
- Compare results from Step 1 and 3
There you have it! With that practical solution, you can track relevancy changes in an enterprise search application.
The need for a strategic approach to track and adapt to changes is evident in the dynamic realm of enterprise search, where relevance is a perpetual challenge. But it doesn’t have to be one; the presented solution allows for adaptability to organizational changes, accommodating shifts in user expectations and content updates. While the initial setup may require thoughtful consideration of what constitutes an “ideal” result set, the subsequent automation of the process ensures ongoing relevancy assessment without overwhelming resource demands.
As organizations evolve, so too can their approach to tracking and adapting search relevancy. That evolution will help ensure a seamless alignment with user intentions and expectations in an ever-changing landscape.
|
OPCFW_CODE
|
This is a quick post of links and demos for those who attended the Premier webcast today What’s New in PowerShell v5.
These are some helpful links for folks new to PowerShell and Desired State Configuration.
Today we learn how to efficiently filter event log queries, going beyond simple event ID filtering into the specific values of the XML message data. Then we will run this filter against multiple servers in parallel for faster data collection.
Today’s post is the second in a series on using PowerShell DSC with Active Directory. We will demonstrate configuring the AD Recycle Bin and domain trusts with PowerShell Desired State Configuration. As a bonus we will throw in a registry key for some special logging on the domain controller.
Do you have Group Policies gone wild? Did you realize too late that it might not be such a good idea to delegate GPO creation to half the IT department? Have you wanted to combine multiple policies into one for simplicity? This blog post is for you.
I am excited! Microsoft Premier customers now have access to 17 hours of real-world PowerShell Desired State Configuration (DSC) video training through the Premier Workshop Library on Demand (WLOD) subscription. We recorded 11 modules on topics from beginner to advanced. The training gracefully builds one concept upon another until the student is armed with all of the information they need to successfully begin using PowerShell DSC in their environment.
We wrote this training after spending months in the field helping customers just like you get started with PowerShell DSC. We took that experience and wrapped it into this video training for the real-world knowledge that our customers love.
If you're like me you enjoy geeking out on all the bells and whistles of PowerShell. Leveraging that schweetness across thousands of machines is a key goal. Managing PowerShell in the enterprise is a different conversation that does not get enough visibility. So much of the PowerShell content in the community is geared towards features rather than operations. The goal of this session is to get into the nitty gritty of making PowerShell effective in your enterprise-scale environment with Group Policy. What setting are available? What are my options for managing client and server settings of PowerShell? Come find out about mass configuration of execution policy, module logging, Update-Help, WSMAN, and more. These topics sound simple on the surface, but they quickly spiral into some interesting conversations.
Today’s post is the first in a series on using PowerShell DSC with Active Directory. I don’t know how many blog posts there will be. I haven’t written them yet. What I can tell you, is that I have a load of fun scripts to share. Today we will start out with deploying a new forest using PowerShell DSC.
This script automates creating a PowerShell DSC composite resource:
I have also included some pro tips for working with DSC composite resources.
In a previous post I created a report of all organizational units (OUs) and sites with their linked group policy objects (GPOs). This report gives visibility to all of our group policy usage at-a-glance. Since this is one of my most popular downloads I thought it was time to give it a fresh coat of paint. Today I am releasing two significant updates:
I don’t know of anywhere else you can find a report like this. Enjoy!
Someone just now added Jimmy to the Domain Admins group! How do I know? Because I used PowerShell to check. Let me show you how.
Some of the best customers that I visit get email pages when high value group memberships change. Obviously this is strongly encouraged for IT shops of any size. Of course you can buy products to do this, but here on my blog we build these tools ourselves. It’s more fun and FREE with PowerShell.
Welcome! Today’s post includes demo scripts and links from the Microsoft Virtual Academy event: Using PowerShell for Active Directory. We had a great time creating this for you, and I hope you will share it with anyone needing to ramp up their AD PowerShell skills.
I built extra secret demos that you have never seen before on my blog or at any conference presentations I have given to date. I guarantee everyone from beginners to seasoned scripters will pick up new techniques in this free training.
Whew. This has been a busy season for speaking, blogging, and recording. I’ve spent more time on airplanes than in my office at home for the last few months. It’s all good, and I want to share it with you.
Here are some places you can find me online, on stage, and on camera…
Read the full post for links and more information. Hope to see you soon.
Reduce Server Outages Using PowerShell Desired State Configuration - Ever configured a server only to find someone changed it? Ever tracked an outage back to an unauthorized change? Tired of manually configuring new server builds? Come learn how PowerShell Desired State Configuration can help you save time building servers and reduce outages.
It has been a while since I’ve released any updates to the Active Directory SID History PowerShell Module. Today’s release leverages improvements in PowerShell v3.0 for faster and better results.
This week I am presenting a session on GPO migration at TechMentor Redmond 2014. This is an expanded version of the session I gave at the PowerShell Summit back in April. I received feedback in April that WMI filters must be supported before this would be considered a viable solution. So I went back to my lab, integrated some code from the TechNet Script Center, and we have version 1.1 now, including WMI filter migration.
While working on DNS automation for a customer recently I needed some quick scripts to inventory Active Directory-integrated DNS server and zone configurations. All too often the way we think things are configured does not match reality. Are the forwarders consistent and correct? Is scavenging enabled where you thought it was? Do the right zones have aging enabled? Are the zones stored at the domain or forest level? Today's script is an easy way to check.
Have you ever wanted to roll up all of your reverse zones into a "big 10" super zone? Do you need to copy DNS zones between environments and preserve the record aging? Today's post is for you.
Over the years on this blog I have created a number of short links to my most popular posts. I thought it might be handy to post a greatest hits list of these short links for easy reference and sharing. Enjoy!
I know this post is a little late, but I wanted to offer some helpful information that I picked up at the PowerShell Summit last month. This post is packed with links to keep you surfing high-value PowerShell content for days.
|
OPCFW_CODE
|
const { get } = require('snekfetch');
const Player = require('./structures/player');
const Clan = require('./structures/clan');
const Tournament = require('./structures/tournament');
/**
* The client for fetching data from the RoyaleAPI.
*/
class Client {
/**
* @typedef {object} RequestOptions
* @property {Array<string>} [keys] The keys to get in the response from the API.
* @property {Array<string>} [exclude] The keys to exclude in the response from the API.
*/
/**
* Constructs the croyale client
* @since 2.0.0
* @param {string} token The token to interact with the API.
*/
constructor(token) {
if (!token) throw new Error('Token is an essential component to interact with the API. Make sure to provide it.');
/**
* The token provided
* @since 2.0.0
* @type {string}
*/
this.token = token;
/**
* The valid characters for any tag
* @type {string}
* @private
*/
this.tagCharacters = '0289PYLQGRJCUV';
/**
* The base URL of the API.
* @type {string}
* @private
*/
this.baseURL = 'http://api.royaleapi.com/';
}
/**
* A string conatining the valid tag characters.
* @typedef {string} tag
*/
/**
* check if the provided tag is a valid one.
* @since 2.0.0
* @param {string} tag The tag that is to be checked.
* @returns {tag} the verified tag.
* @example
* let tag = API.verifyTag('CVLQ2GV8');
* if (tag) console.log(`${tag} is a valid tag.`);
* else console.error('The tag has invalid charactrs.');
*/
verifyTag(tag) {
if (!tag) return false;
tag = tag.toUpperCase().replace('#', '').replace(/O/g, '0');
for (var i = 0; i < tag.length; i++) {
if (!this.tagCharacters.includes(tag[i])) return false;
}
return tag;
}
/**
* The base funtion that makes all the necessary API calls.
* @since 2.2.0
* @async
* @param {string} endpoint The endpoint to call.
* @param {RequestOptions} options The options for the API call.
* @returns {Promise<Object>} The raw API response.
* @private
*/
_get(endpoint, options = {}) {
return get(`${this.baseURL}${endpoint}`)
.query(options)
.set('auth', this.token)
.then(res => res.body)
.catch(error => { throw `There was a error while trying to get Information from the API: ${error}`; });
}
/**
* get the player data from the api with the tag
* @async
* @since 2.0.0
* @param {string} tag The player tag to get the data for.
* @param {RequestOptions} options The options to be passed for customized result.
* @returns {Promise<Player>} the arranged player data.
* @example
* API.getPlayer('CVLQ2GV8', {
* keys: ['name']
* })
* .then(player => {
* console.log(`The Player's name is ${player.name}`);
* })
* .catch(console.error);
*/
getPlayer(tag, options = {}) {
if (!tag) throw new Error('Invalid usage! Must provide the tag');
if (typeof tag !== 'string') throw new Error('Tag must be string');
// checking if the input for options are correct or not.
if (options.keys && options.exclude) throw new TypeError('You can only request with either Keys or Exclude.');
if (options.keys && !options.keys.length) throw new TypeError('Make sure the keys argument you pass is an array.');
if (options.exclude && !options.exclude.length) throw new TypeError('Make sure the exclude argument you pass is an array.');
// making the query parameters ready
if (options.keys) options.keys = options.keys.join(', ');
if (options.exclude) options.exclude = options.exclude.join(',');
const verifiedTag = this.verifyTag(tag);
if (!verifiedTag) throw new Error(`The tag you provided has some invalid character. Make sure it contains only the following characters: "${this.tagCharacters}"`);
return this._get(`player/${verifiedTag}`, options)
.then(res => new Player(res));
}
/**
* get the clan data from the api with the tag
* @async
* @since 2.0.0
* @param {string} tag The clan tag to get the data for.
* @param {RequestOptions} options The options to be passed for customized result.
* @returns {Promise<Clan>} the arranged clan data.
* @example
* API.getClan('2CCCP', {
* keys: ['name']
* })
* .then(clan => {
* console.log(`The clan's name is ${clan.name}`);
* })
* .catch(console.error);
*/
getClan(tag, options = {}) {
if (!tag) throw new Error('Invalid usage! Must provide the tag');
if (typeof tag !== 'string') throw new Error('Tag must be string');
// checking if the input for options are correct or not.
if (options.keys && options.exclude) throw new TypeError('You can only request with either Keys or Exclude.');
if (options.keys && !options.keys.length) throw new TypeError('Make sure the keys argument you pass is an array.');
if (options.exclude && !options.exclude.length) throw new TypeError('Make sure the exclude argument you pass is an array.');
// making the query parameters ready
if (options.keys) options.keys = options.keys.join(', ');
if (options.exclude) options.exclude = options.exclude.join(',');
const verifiedTag = this.verifyTag(tag);
if (!verifiedTag) throw new Error(`The tag you provided has some invalid character. Make sure it contains only the following characters: "${this.tagCharacters}"`);
return this._get(`clan/${verifiedTag}`, options)
.then(res => new Clan(res));
}
/**
* get top 200 players (global or specific location).
* Have a look at royaleapi-data/json/regions.json for the full list of acceptable keys.
* @async
* @since 2.0.0
* @param {string} locationKey The specific location to get the top players of.
* @returns {Promise<Array<Player>>} array of top 200 players.
*/
getTopPlayers(locationKey) {
if (locationKey && typeof locationKey !== 'string') throw new Error('Location key must be a string');
return this._get(`top/players${locationKey ? `/${locationKey}` : ''}`);
}
/**
* get top 200 clans (global or specified location).
* Have a look at royaleapi-data/json/regions.json for the full list of acceptable keys.
* @async
* @since 2.0.0
* @param {string} locationKey The specific location to get the top clans of.
* @returns {Promise<Array<Clan>>} array of top 200 clans.
*/
getTopClans(locationKey) {
if (locationKey && typeof locationKey !== 'string') throw new Error('Location key must be a string');
return this._get(`top/clans${locationKey ? `/${locationKey}` : ''}`);
}
/**
* @typedef {Object} ClanSearchOptions
* @property {string} [name] The name of the clan you want to search for.
* @property {number} [score] The score of the clan you want to search for.
* @property {number} [minMembers] Minimum members of the clan you want to search for.
* @property {number} [maxMembers] Maximum members of the clan you want to search for.
*/
/**
* search for a clan with some query options.
* @async
* @since 2.0.0
* @param {ClanSearchOptions} options The options which you want the clan to match.
* @returns {Promise<Array<Clan>>} array of clans matching the criteria.
* @example
* API.searchClan({
* name : 'ABC',
* minMembers : 45
* })
* .then(clans => {
* console.log(`${clans.length} clans found with the criteria you set.`);
* })
* .catch(console.error);
*/
searchClan(options = {}) {
if (!options.name && !options.score && !options.minMembers && !options.maxMembers) throw new Error('You must provide at least one query string parameters to see results.');
if (options.name && typeof options.name !== 'string') throw new Error('Name property must be a string.');
if (options.score && typeof options.score !== 'number') throw new Error('Score property must be a number.');
if (options.minMembers && typeof options.score !== 'number') throw new Error('minMembers property must be a number.');
if (options.maxMembers && typeof options.score !== 'number') throw new Error('maxMembers property must be a number.');
return this._get('clan/search', options)
.then(res => res.map(clan => new Clan(clan)));
}
/**
* @typedef {string} APIVersion
* The current version of the API.
*/
/**
* get the current version of the api
* @async
* @since 2.0.0
* @returns {Promise<APIVersion>} the api version.
* @example
* API.getVersion()
* .then(result => {
* console.log(`The Current API version is ${result}`);
* })
* .catch(console.error);
*/
getVersion() {
return get('http://api.royaleapi.com/version')
.then(res => res.text);
}
/**
* get a list of open tournaments
* @async
* @since 2.0.0
* @returns {Promise<Array<Tournament>>} list of open tournaments.
* @example
* API.getOpenTournaments()
* .then(tournies => {
* console.log(`The data of first open tournament is ${tournies[0]}`);
* })
* .catch(console.error);
*/
getOpenTournaments() {
return this._get('tournaments/open')
.then(res => res.map(tourney => new Tournament(tourney)));
}
}
module.exports = Client;
|
STACK_EDU
|
Fri Jun 28 2019
Tools that use for Big data
Big Data is a term that directly indicates a huge amount of data which can be exceeding Terabytes in size. This large and complex data set is difficult to process using traditional applications or tools. Big data always brings a number of challenges with its volume and complexity. Most of the time, the real world data generate without any proper structure. Now the challenge is how we are going to store these unstructured data and analyse it to improve our daily life.
Today, there have thousands of tools which can be used in big data but, not all are efficient and it also takes a lot of time to find a perfect tool. To save your valuable time, we set up a list of top big data tools.
Let’s take a look at our list -
NoSQL (Not Only SQL) databases can handle unstructured data and store with no particular schema which is very common in big data management. NoSQL database like MongoDB gives better performance in storing a massive amount of data. It is a good resource to manage data that is frequently changing or data that is semi-structured or unstructured. Most commonly, it is used for store data in mobile apps, product catalogs, real-time personalization, content management, and applications that deliver a single view across multiple systems.
HPCC Systems platform is set of easy-to-use software features enabling developers and data scientists to process and analyze data at any scale. It belongs to the open source community, the HPCC Systems platform is available free of licensing and service costs. It supports SOAP, XML, HTTP, REST and JSON. This system can store file part replicas on multiple nodes to protect against disk or node failures. It has administrative tools for environment configuration, job monitoring, system performance management, distributed file system management, and more. It’s highly efficient and flexible.
Apache Storm is free and open source distributed real-time computation system. Storm makes easy to reliably process unbounded streams of data, doing for real-time processing. Storm can be used with any programming language. Storm can be used in many cases such as - online machine learning, continuous computation, distributed RPC, ETL, and more. The storm is fast, scalable, fault-tolerant, guarantees your data will be processed and is easy to set up and operate.
Hadoop is an open-source software framework for distributed storage of large datasets on computer clusters. It is designed to scale up from single servers to thousands of machines. Hadoop provides large amounts of storage for all sorts of data along with the ability to handle virtually limitless concurrent jobs or tasks. It offers robust ecosystem that is well suited to meet the analytical needs of the developer. It brings Flexibility In Data Processing and allows for faster data Processing. But it’s not for the data beginner.
OpenRefine is a powerful big data tool for working with messy data by cleaning, transforming formats, and extending with web services and external data. OpenRefine is an open source tool. It is pretty user-friendly that can help you to explore large data sets with easily and quickly even the data is unstructured. And you can ask questions to the community if you get stuck. They are very helpful and patient. You can also check out their Github repository.
Cloudera is the fastest, easiest and highly secure modern big data platform. It allows anyone to get any data from any environment within single, scalable platform. It is free and an open-source project to store large amount data and can access better the stored data. Cloudera is mostly an enterprise solution to help manage a business. It will also deliver a certain amount of data security, which is highly important if you’re storing any sensitive or personal data.
Talend is the leading open source integration software provider to data-driven enterprises. Talend connects at big data scale, 5x faster and at 1/5th the cost. Talend offers a number of data products. It's Master Data Management offering real-time data, applications, and process integration with embedded data quality and stewardship.
It facilitates managing and querying large datasets residing in the distributed storage. Apache Hive can help with querying by using HiveQL – and SQL-like language and managing large datasets real fast. It offers Java Database Connectivity interface.
NodeXL is free and open-source network analysis and visualization software. It provides exact calculations. It is one of the best statistical tools for data analysis which includes advanced network metrics, access to social media network data importers, and automation.
KNIME helps you to manipulate, analyze, and model data through visual programming. It is used to integrate various components for data mining and machine learning.
The Tableau platform is a recognized leader in the analytics market and is a good option for non-data scientists working in enterprises, across any sector. A big benefit that users find from Tableau is the ability to reuse existing skills, in the Big Data context. Tableau makes use of a standardized SQL (Structured Query Language) to query and interface with Big Data systems, making it possible for organizations to make use of existing database and analyst skills sets to find the insights they are looking for, from a large data set. Tableau also integrates its own in-memory data engine called "Hyper" enabling fast data lookup and analysis.
It’s a distributed database that is high-performing and deployed to handle mass chunks of data on commodity servers. Cassandra offers no space for failure and is one of the most reliable Big Data tools. It was first developed by the social media giant Facebook as a NoSQL solution.
Spark is the next hype in the industry among the big data tools. It can handle both batch data and real-time data. As Spark does in-memory data processing, it processes data much faster than traditional disk processing. This is indeed a plus point for data analysts handling certain types of data to achieve a faster outcome. Spark is flexible to work with HDFS as well as with other data stores. It’s also quite easy to run Spark on a single local system to make development and testing easier.
SAMOA stands for Scalable Advanced Massive Online Analysis. It is an open source platform for big data stream mining and machine learning. It allows you to create distributed streaming machine learning algorithms and run them on multiple DSPEs (distributed stream processing engines). SAMOA’s closest alternative is BigML tool.
Pentaho provides big data tools to extract, prepare and blend data. It offers visualizations and analytics that change the way to run any business. This Big data tool allows turning big data into big insights. It empowers users to architect big data at the source and streams them for accurate analytics. Seamlessly switch or combine data processing with in-cluster execution to get maximum processing. It allows checking data with easy access to analytics, including charts, visualizations, and reporting. It supports a wide spectrum of big data sources by offering unique capabilities.
|
OPCFW_CODE
|
package com.github.robindevilliers.onlinebankingexample.steps;
import com.github.robindevilliers.cascade.annotations.*;
import org.openqa.selenium.WebDriver;
import com.github.robindevilliers.onlinebankingexample.AccountsStateRendering;
import com.github.robindevilliers.onlinebankingexample.domain.Account;
import java.util.ArrayList;
import java.util.List;
import static com.github.robindevilliers.onlinebankingexample.Utilities.assertElementIsNotPresent;
import static com.github.robindevilliers.onlinebankingexample.Utilities.assertTextEquals;
@SuppressWarnings("all")
@SoftTerminator
@Step({Challenge.class, Notice.class, BackToPorfolio.class})
public interface Portfolio {
public class CurrentAccountOnly implements Portfolio {
@Demands
private WebDriver webDriver;
@Supplies(stateRenderer = AccountsStateRendering.class)
private List<Account> accounts = new ArrayList<>();
@Given
public void given() {
accounts.add(new Account("Premium Current Account", "Current", "1001", 40000));
}
@Then
public void then() {
assertTextEquals(webDriver, "[test-row-1001] [test-field-account-name]", "Premium Current Account");
assertTextEquals(webDriver, "[test-row-1001] [test-field-account-balance]", "£ 400.00");
}
}
public class CurrentAndSaverAccounts implements Portfolio {
@Demands
private WebDriver webDriver;
@Supplies(stateRenderer = AccountsStateRendering.class)
private List<Account> accounts = new ArrayList<>();
@Given
public void given() {
accounts.add(new Account("Premium Current Account", "Current", "1001", 40000));
accounts.add(new Account("Easy Saver Account", "Saver", "1002", 10000));
}
@Then
public void then() {
assertTextEquals(webDriver, "[test-row-1001] [test-field-account-name]", "Premium Current Account");
assertTextEquals(webDriver, "[test-row-1001] [test-field-account-balance]", "£ 400.00");
assertTextEquals(webDriver, "[test-row-1002] [test-field-account-name]", "Easy Saver Account");
assertTextEquals(webDriver, "[test-row-1002] [test-field-account-balance]", "£ 100.00");
}
}
public class AllAccounts implements Portfolio {
@Demands
private WebDriver webDriver;
@Supplies(stateRenderer = AccountsStateRendering.class)
private List<Account> accounts = new ArrayList<>();
@Given
public void given() {
accounts.add(new Account("Premium Current Account", "Current", "1001", 40000));
accounts.add(new Account("Easy Saver Account", "Saver", "1002", 10000));
accounts.add(new Account("Fancy Mortgage", "Mortgage", "1004", -15498700));
}
@Then
public void then() {
assertTextEquals(webDriver, "[test-row-1001] [test-field-account-name]", "Premium Current Account");
assertTextEquals(webDriver, "[test-row-1001] [test-field-account-balance]", "£ 400.00");
assertTextEquals(webDriver, "[test-row-1002] [test-field-account-name]", "Easy Saver Account");
assertTextEquals(webDriver, "[test-row-1002] [test-field-account-balance]", "£ 100.00");
assertTextEquals(webDriver, "[test-row-1004] [test-field-account-name]", "Fancy Mortgage");
assertTextEquals(webDriver, "[test-row-1004] [test-field-account-balance]", "£ (154,987.00)");
}
}
public class MortgageAccountOnly implements Portfolio {
@Demands
private WebDriver webDriver;
@Supplies(stateRenderer = AccountsStateRendering.class)
private List<Account> accounts = new ArrayList<>();
@Given
public void given() {
accounts.add(new Account("Fancy Mortgage", "Mortgage", "1004", -15498700));
}
@Then
public void then() {
assertTextEquals(webDriver, "[test-row-1004] [test-field-account-name]", "Fancy Mortgage");
assertTextEquals(webDriver, "[test-row-1004] [test-field-account-balance]", "£ (154,987.00)");
assertElementIsNotPresent(webDriver, "[test-link-payments]");
}
}
}
|
STACK_EDU
|
LFS Version requirement: v11.46 or later
The Limio for Salesforce acquisition journey does not natively support capturing a billing address, as the data model behind it might vary from one Salesforce implementation to another, so a specific data structure isn't enforced. By default, billing address is captured within the Zuora iframe, and in turn passed from Zuora to Limio when the order is created.
LFS however does support passing a custom defined billing address, that can be captured in whichever way our client wishes.
In version 11.45 and later of LFS managed package, the Limio Acquisition Journey comes with a built in flow variable, called billingAddress, of type apex class i42as__CustomerAddress (where i42as is the namespace of the managed package).
This is where a billing address being captured should be stored, to then be used in the rest of the flow. Key flow elements that support passing this additional variable to are:
- The apex class "Get Order Preview", to calculate accurate order total, inclusive of tax. This also allows for the use of the Preview variables in dynamic text such as compliance scripts, see Flow Customisation: Compliance Script for the Acquisition Flow
- The custom component for Zuora iframe, to be used to pre-fill the billing address to send to Zuora
- The custom component Limio - Order Total, to calculate accurate order total, inclusive of tax
- The custom component Limio - Order Summary, to calculate accurate order total, inclusive of tax, as well as sending the captured billing address to Limio as part of the order
Additional Requirement: formatting of Country and State
The format accepted by Zuora and Limio for country (and state) must be an iso2, so for example for country Canada and state/province Ontario, the 2 must be passed as CA and ON, respectively.
However, LFS does expose a custom invocable apex class that converts country and state labels into iso2 codes, so should the billing address be stored in that format, this apex action can be used to convert them both, before using them. Below an example of how this class could be used
In this case, the custom method takes the state stored on the contact record (as a label), and assigns its iso2 format to the billingAddress.MailingState variable.
How to capture Billing Details
The easiest option for capturing billing details without the need to add any additional steps for the agent, is to leverage the query element of flow builder, query the relevant custom or standard fields, and assign them to the relevant billingAddress flow variable.
If the billing address is not captured already, or if the experience is to have the agent confirming the address with the option to change it, it is always possible to add a screen element, with an address type component, and capture the billing details there. These need to be assigned to the flow variable billingAddress.
|
OPCFW_CODE
|
Some of my notes and best practices learned over the years designing and developing microservices.
- Thin layer. Do not add any business logic to this layer.
- Contains React code and may contain web assets.
- Has a shared look and feel that provides a visual consistency to the project.
- It is a web application, like Play, to allow us to set and remove cookies and delegate calls to the proxy.
- It can be simpler to share data among react components.
- It does not handle sessions so it can be part of a distributed system.
- May be part of a larger web application, like a portal.
- CSS can be shared across the application, achieving consistency in a simpler way.
- It is stateless.
- It is deployed independently of other components and services.
- It can be configured locally along with a proxy to set up fake requests during development. This way front-end developers do not have to wait for back-end developers to complete their work.
The following are some issues that may come up when we split the UI in multiple subprojects:
- Subprojects don’t share a common look and feel
- It is difficult to share state among React components
- Dependecies issues may come up if we have different component versions across the subprojects
So for those reasons it’s preferred to have a single UI project.
- Forwards calls to business services.
- Calls Auth and has logic to allow or deny calls to other services based on Auth result.
- Does not perform any business logic. For example, it does not know how to join orders and products.
- Performs caching operations.
- It is statless.
- Can be deployed on its own.
- Never calls external services.
- Can be implemented with AWS’ API Gateway.
- Performs user authentication.
- Performs user authorization.
- Manages and caches sessions.
- Creates and validates JWTs.
- It is deployed on its own.
- May use external Auth services like Facebook.
- May return a list of privileges that can be easily matched by the proxy to determine whether it can forward a request to a specific business microservice or not.
- Or may return 200-OK/403-Forbidden in response to a request containing a target business service and operation.
- May return a JWT with minimum user data and session data to be passed by the Proxy to other business microservices.
- Domain oriented. We can use a hexagonal architecture.
- Services can call other services and perform business logic.
- There may be an optional orchestrator in charge of handling complex business logic.
- Services always return full responses. The proxy should never have to deal with calling multiple microservices to complete a request. For example, if a request requires an order that includes a list of items, the orders microservice should make any additional requests to other services to complete it.
- They have their own database and should never interact with the database of any other service.
- They are stateless.
- They return caching headers that allow the Proxy to decide whether resources should be cached and for how long.
- They can be deployed independently of other services.
- They may use kafka (or a similar technology) for asynchronous calls.
- They should not be publicly accessible.
- If you have to make multiple calls to another business microservice, consider adding a new endpoint to it to handle this type of requests. This approach is helpful as well for those scenarios that may be surrounded by a transaction. For example, a two operation process like money transfer between accounts should not be implemented by calling the withdraw and deposit endpoints on the same microservice, but rather by using a single transfer endpoint that internally performs both operations.
- It is usually until you have to make changes when you realize whether your microservices granularity is correct. If you find that a change requires updates to several business microservices, then maybe your services are too fine grained. On the other hand, if testing becomes too long compared to the coding effort, then maybe the microservices are too coarse grained.
- Kuc, Karol. Hexagonal Architecture by example - a hands-on introduction. Allegro Tech. https://blog.allegro.tech/2020/05/hexagonal-architecture-by-example.html
- robloxro. Summary of the Domain Driven Design concepts. Medium. https://medium.com/@ruxijitianu/summary-of-the-domain-driven-design-concepts-9dd1a6f90091
- Brown, Kyle. What’s the right size for a microservice? Medium. https://kylegenebrown.medium.com/whats-the-right-size-for-a-microservice-bf1740370d47
- Gupta, Lokesh. How to design a REST API. REST API Tutorial. https://restfulapi.net/rest-api-design-tutorial-with-example/
- Client Error Responses. MDN Mozilla Developers Network. https://developer.mozilla.org/en-US/docs/Web/HTTP/Status#client_error_responses
|
OPCFW_CODE
|
The default installation directory for Readerware 3.0 is different from earlier versions so you can have both versions installed on the same computer.
The first time you run Readerware 3.0 it will detect your existing installation. It will automatically migrate your Readerware preferences to 3.0 format so that Readerware 3.0 looks the same as your current 2.0 system. The table view will consist of the same columns as it does now, any additional views you have created will still be there etc.
Your Readerware 2.0 database also needs to be converted to 3.0 format. This is done by copying your database to create a new 3.0 database. Your existing 2.0 database is not touched and is still available. The database upgrade wizard will be launched automatically to step you through the conversion process whenever you access a 2.0 database in Readerware 3.0.
If you have several Readerware 2.0 databases, you will need to open each one in Readerware 3.0 to convert it. 2.0 databases will not automatically show up in the Readerware open database dialog, by default Readerware only displays 3.0 databases. To open a Readerware 2.0 database, change the file type drop down list at the bottom of the open dialog to select Readerware 2.0 databases, you will then be able to open and convert the database to the new format.
Once you have your database in 3.0 format it is a good idea to back it up for safe keeping.
The Readerware database upgrade wizard will guide you through the process of upgrading your 2.0 database to 3.0 format. Most users will just click the Next button to proceed through the conversion process as Readerware makes default choices for you which will be appropriate in most cases.
The actual conversion is pretty quick but the speed will depend on the size of your database. Readerware will keep you posted on the progress displaying the total number of rows that need to be converted and the current status.
Confirm that you want to convert this database by clicking on the Next button.
If you decide not to convert the database at this point, click on the Cancel button.
Readerware has already set the 3.0 database for you based on the 2.0 database. The new database will be stored in the same folder as the 2.0 database and will have the same name. Readerware uses a different file type for 3.0 databases.
If you want to use another name or location for your new database, click on the Browse button. The standard file selection dialog will be displayed.
Note that under some circumstances Readerware will relocate your database to a new folder rather than store it in the same folder as your 2.0 database. Over time operating systems have changed the rules as to where you should store data. For example in most cases it is no longer permitted to store data in the program installation folder. If Readerware detects that your existing 2.0 database is stored at an invalid location, it will relocate it to your Documents folder. You can of course change the location, but please do not try and store it in the invalid folder as this can cause problems in the future.
Readerware automatically loads your new 3.0 database once the conversion is completed. You should see your records and Readerware 3.0 is ready to go.
Nothing else is required to access your data in Readerware 3.0, but there are some optional steps you might want to take.
Readerware 3.0 includes a lot of new fields, you might want to run auto-update to populate these new fields if the data is important to you.
Readerware lets you add user defined fields to your database. If you added a new field for data that is now standard in Readerware 3.0, you might want to copy the data from the user defined field to the new 3.0 field using the replace wizard.
You will need to open your Bookography database in Readerware 3.0 to convert it. Bookography databases will not automatically show up in the Readerware open database dialog, by default Readerware only displays 3.0 databases. To open a Bookography 1.0 database, change the file type drop down list at the bottom of the open dialog to select Bookography 1.0 databases, you will then be able to open and convert the database to the new format.
|
OPCFW_CODE
|
I work as the web coordinator for a univeristy, so essentially I'm just a web/code junkie that sits in front of a computer (AKA::average nerd). I've had lots of other jobs, including sales, accounting, radio dj, many stints in a variety of management, publishing, even carpentry, reconstruction of historic homes, and welding. I'm not knocking other jobs in general, or what people do, but I love what I do now! I'd rather sit and code and research on a computer, than any other job.I just finished a project at my house. I put up a fence in my backyard. I am so sore from that one day of physical labor, that it made me thankful that I don't do that all of the time. While I was using the post hole digger (for four hours) I had a chance to reflect on my current projects. It was amazing. I had such a clarity of thought on some problems that I need to overcome. I eventially ran back to my house and got a notepad, so I could jot notes as things came to me.Even though most of us are in IT or IS, and we don't do the physical activitiy as say a construction worker, I do think that we get just as exhausted after a good day of work. It's not necessarily the physical exhaustion, rather the mental exhaustion of trying to think of every possiblity during flowcharting, planning, debugging, or whatever. I also believe that I feel the same sence of accomplishment when I finish a wonderful piece of code, as I do when I finished a construction project. I think that I found my calling as a computer person. I enjoy doing other things, but only a true coder would think about code while doing about anything.
My Medition Today: When do we know that we find our calling? Does it scream at us, or is it silently there all the while?
I know for me it was silent. With every job that I mentioned above, I became involved with compters and systems at work. It wasn't something I tried to get involved with, but I just became more and more involved. One day a friend asked why I didn't just go into the compter field. I guess I never thought about it until Rog mentioned it to me. I just assumed that you needed to have more experience in computers (BTW: I have a degree in history... thus my long list of jobs above. ;) ) What I realized after my first year in the computer field, is that you don't need the experience, but rather the determination, and resolution to always keep learning and not to give up as you face problems.
"Heck I don't know how to do it either, but do you think that's going to stop me?!!"
|
OPCFW_CODE
|
VW Golf MK4 1.4 16v 1998 - Steering wheel hard to turn after Power Steering Rack change
Sorry for my English,
I had a problem with my power steering rack that had some fluid leaks and some noise when the fluid level was down. I never used the car without fluid.
Except the steering noise and the oil leak, the steering wheel was easy to move.
I brought the car to the mechanic yesterday and he changed the steering rack, but now the steering wheel is very hard to move mostly when the car is stopped.
The mechanic told me that maybe the new rack needs 2 - 3 days and it will be better. If it doesn't improve then it will be necessary to change the Power Steering Pump but he doesn't look so sure that it will fix the problem and I was thinking that with the old rack the power steering pump was OK. This is my first car and I am already hating it because I spent a lot of money repairing it.
Except for burning this car, does somebody have an idea or suggestion?
Thanks in advance.
Some steering racks come with plugs in them which, if not removed during installation, can cause exactly what you are talking about. If the power steering pump is making a lot of noise, this might be the issue. If the power steering pump is making a lot of noise, it could just be the pump, though. I take it the steering wheel wasn't hard to turn before the rack was replaced? Was the system properly bled?
Do you still hear the noise when you try and turn the steering wheel or is that gone now?
No, there is no noise when i turn the steering wheel. Its just hard to turn. After i asked the question here i used my car in the highway at 110km/h about 70mph and it looks that the steering wheel is easy to turn just for a short range, and if I want to turn it more at that speed it's still hard to turn.
Is the steering wheel any easier to turn when the car is stationary but running compared to when stationary with the ignition off? Also, do me a favor, please don't take the car out on the highway again until this is resolved.
The steering wheel is the same when is the car is on or off.
Did enough fluid get put into the system after the rack was changed?
I don't believe avoiding highways is necessary if the steering rack was just replaced and the mechanic didn't attempt to repair the old rack. The power steering system is quite foolproof: either it gives boost or it doesn't. It is extremely rare that it would randomly start to steer to the left or to the right. At high speeds, you can turn the wheel quite easily without needing boost. The trouble is that turning the wheel when stationary is hard if there's no boost.
That a new rack or rebuilt . either way it's either been tighten way to much if it's s remanufactured or if it's used it's just no good .you should have been able to turn it all way right then all way left once or twice that should have bleed air out .must are self bleeding like that anyway and after should have been able to turn with two finger's.if your pump turned other rack sure should turned that one. I would change it back to other one one that pulls either way inner tie rod ins out of adjustment or wore out Just my opinion but have been in a few rebuilds on them .good luck hope you get it straightened out.
|
STACK_EXCHANGE
|
Every MOB in the game has a hate list. This is how the game determines whom the MOB wants to attack. This list is quite large - enough so that there is no practical limit. Dealing lots of damage quickly is generally a reliable way to get to the top of the list, but there are many rules that govern the MOB hate list and other ways to generate hate (or aggro) besides dealing damage.
In the case of non-KoS MOBs, you can tell if you are on the MOB's hate list by considering it. If you are, then it will con as threatening instead of in the range [dubious - ally]. This is useful in conjunction with the feign death skills that some classes have.
A successful taunt instantly moves the taunter to the top of the hate list. This is achieved by assigning the taunter as much hate as the entity currently at the top of the list, plus one point.
A player is added to a MOB's hate list in several ways
- The player actively attacks or casts a detrimental spell on the MOB.
- The player taunts the MOB.
- The player attacks the MOB with their pet. In this case, both the pet and its master are added to the hate list, but the pet will be placed higher in the list initially.
- The player assists someone else attacking a MOB. This includes healing or buffing them (bard songs included).
- The player is KOS to the mob and passes within its aggro range.
- The player is on the hate list of a social MOB, and the MOB is near others of its kind. This will add you to the hate list of the other MOBs.
- The player is performing a quest that has a scripted event that creates 1 or more MOBs that automatically have the player on their hate list. (unsure of this one)
A player increases hate (and possibly moves higher in the list) in several ways
- The player does damage to the MOB.
- The player casts a detrimental spell on the MOB.
- The player taunts the MOB.
- The player buffs or heals someone on the MOB's hate list.
- The player sits down.
- The player begins casting a spell (sometimes referred to as casting aggro).
A player is removed from a MOB's hate list in several ways
- The player zones.
- The player dies.
- The player successfully feigns death and the mob has reset.
A MOB will attack whomever is at the top of its hate list with some exceptions
- The MOB is rooted, in which case it will melee whomever is closest.
- Casting a spell can cause some MOBs to temporarily turn to the caster. Usually (always?) it will attempt to interrupt the caster with a stun spell or bash, before turning back.
Managing one's position in the hate list is a very important skill. Players must be aware of the ways they and others generate hate. For example, during the the initial engagement of a MOB, it's crucial that everyone other than the tanks minimize the amount of hate they generate. This allows the tanks to build a solid position at the top of the list. As the fight continues it's important to avoid outpacing the tanks' hate generation.
This might need some more additions and testing concerning pets (/pet taunt, etc.) and summoning mobs.
|
OPCFW_CODE
|
I saw the call for proposals while I was starting to develop the project. It was a perfect opportunity to create a beta version of the software and play around with making a prototype image before continuing to work on the larger body of work.
In your artwork you put forward the difference between the “apolitical” view of raw data which a machine simply collects and organises and the “political” view – or, better, perception – that humans have of the artifacts you chose. Can you deepen the nature of this contrast you showed us with your work?
Yes, I feel this is a tension that is important to note, especially as technology (and machine-learning techniques in specific) becomes a larger and more integrated part of our everyday lives. From a computational point of view, any analysis of an image is “flattened” to a level of data – the content and contexts of the image is lost in this translation, with the possible exception of very basic human-tagged metadata (which, in it of itself, is also treated by the computer as “raw data”). So while a machine-learning algorithm, with the appropriate feedback and reinforcement, appears to exhibit the signs of context-based learning (for instance, it can tell the visual difference between a dog and a cat), its understanding of the concepts (such as “dog” and “cat”) is limited. The algorithm lacks the capacity to self-reflect on its analysis and deepen it, it can only improve in the efficiency of its technical implementation. For it to “change its mind”, it must be re-programmed or re-trained for this purpose by a human being. It exhibits intelligence but it is not aware of the impact its decisions have in the world.The problem here is not the technology, but in the way we as a society privilege apolitical data-oriented approaches as objective and authoritative. The notion of objectivity, of the algorithm as mathematical and precise as opposed to the fallibility and bias of human emotions, is dangerous because while data itself is incapable of conscious bias, it does not mean it does not carry a bias; recent observations of racial bias in facial recognition software is a case in point. And as Franco “Bifo” Berardi notes in a recent essay, such forms of functionally-oriented automatized intelligence – lacking in the human quality of “sensibility”, whether aesthetic or ethic – may already be leading the way towards techno-totalitarianism.The tension between raw data and cultural meaning-making is embodied in the very process of image-making in this project. On the one hand, data-oriented techniques such as machine learning image-analysis tools and aleatoric recomposition algorithms are used as tool to produced variations whose direction and complexity can exceed my artistic imagination. On the other hand, I am making a variety of politically-aware, contextual, and aesthetic decisions to complement, reflect, and challenge these quantitative techniques. The images are created through an iterative process of navigating between these two polarities.
“A Brief History of Western Cultural Production”. The main detail arousing from the title is the geographical frame: what do you intend as the “West” and why did you choose to stress this particular concept in the creation of your artwork?
The emphasis here is geographic but also conceptual, referring to a Western way of thinking about knowledge, technology, history, and progress, and how these concepts intertwine to reinforce once another in a larger cultural narrative.
Prominent museums, including the ones whose databases are used in this project, have a complicated history with their collections. On the one hand, they preserve a history of human production, from ancient artifacts to contemporary artworks. On the other hand, many of the items in their collection – and this is especially true of historical artifacts – are compromised in terms of provenance and ownership: they may have been taken without permission from other cultures, or are not returned to cultural institutions who request their return (the Elgin Marbles controversy had just begun as I was conceptualizing the project). Furthermore, these cultural artifacts are presented and historicized through a Western perspective, with indigenous items de- and re-contextualized into a narrative of Western values which nonetheless appeals to claims of universalism. So in effect, even when the artifact is not Western, it is presented within a Western linear narrative of cultural production.
So in a sense, International Gold Standard 001 is an exercise in subjecting Western European artifacts to the same process of re-contextualization. There is an irony in that this re-contextualization occurs using the conceptual pillars of Western thought: by using technology to indiscriminately collect cherished artifacts which the museum provides freely online as a gesture of open-access or “knowledge democratization”, and subsequently transform them without any particular regard to the specificity of their origins. The idea of creating a pan-European coin, monetary technology which advertise the image of a ruler’s face, in this way seemed poignant given the current politics of the EU.
You said that this is a part of a larger series of artworks you will work on in the future. Have you taken any further steps in this project? Any new development you would like to talk to us about?
The software created for this project has been developed further and is used to generate ongoing series of images based on different thematics. The first completed body of work with this approach is a recent collection of works titled Landscape Past Future, which investigates the ideologies and techniques of landscape paintings and photography. In the series, and similar to the International Gold Standard 001, aggregate images are created by applying metadata-sorting and ML techniques to the collections of major cultural institutions, and the custom “mosaicing” of the results to new artworks. However, the concept of the landscape, or human representation of natural environments in an era of environmental collapse, is the overarching thematic. Several months after the exhibition of prints, thousands of aggregate images created during the months leading up to the exhibition were then used as training dataset for a GAN algorithm to produce a video work Machine Learned Landscape, in an attempt to investigate what kind of understanding of landscapes can be achieved by having an algorithm iteratively learn from its own output.
|
OPCFW_CODE
|
How to Add a New Column to a Table in SQL?
In the world of relational databases and Structured Query Language (SQL), tables serve as the primary means of organizing and storing data.
Over time, the structure of your data might evolve, necessitating changes to your tables. One common operation is adding a new column to a table.
In this article, we'll explore the process of adding a new column to an existing table in SQL, including the SQL commands and best practices.
Table of Contents
- Why Add a New Column?
- Adding a New Column in SQL
- Best Practices
Why Add a New Column?
Before we dive into the technical aspects of adding a new column, it's important to understand why you might need to do this. There are several reasons:
Adding a New Column in SQL
To add a new column to an existing table in SQL, you will generally use the
ALTER TABLE statement.
The specific syntax may vary slightly depending on the database management system (DBMS) you're using. However, the basic structure remains consistent.
Here's the typical SQL command to add a new column:
ALTER TABLE table_name
ADD column_name data_type [constraints];
table_name: The name of the table to which you want to add a new column.
column_name: The name of the new column.
data_type: The data type of the new column, specifying what kind of data it will hold (e.g., INT, VARCHAR, DATE).
constraints(optional): You can define constraints like
CHECKto further control the data that goes into the new column.
Adding a new column to a table is a database operation that should be approached with care. Here are some best practices to consider:
Backup Data: Before making any structural changes to a database, it's wise to create a backup. This ensures you can recover your data if something goes wrong during the column addition process.
Plan Ahead: Carefully plan the addition of a new column. Consider the data type, constraints, and how the new data will be used within your application or queries.
Minimize NULL Values: Avoid allowing unnecessary
NULLvalues in the new column. Use constraints like
NOT NULLwhere applicable to ensure data integrity.
Data Migration: If you're adding a new column with the intention of populating it with existing data, plan and execute a data migration strategy. You might need to update your existing records with appropriate values for the new column.
Test in a Safe Environment: If possible, test the column addition in a development or staging environment to ensure it works as expected without causing issues in your production database.
Document Changes: Maintain documentation regarding any changes made to the database structure, including the addition of new columns. This documentation is invaluable for developers and administrators.
Adding a new column to an existing table in SQL is a common task when managing relational databases. It allows you to adapt your database structure to changing requirements and evolving data.
By following best practices, planning ahead, and carefully executing the
ALTER TABLE statement, you can seamlessly integrate new columns into your tables, enhancing the utility of your database without compromising data integrity.
Remember to back up your data, consider constraints, and test in a safe environment to ensure a smooth transition when adding new columns.
|
OPCFW_CODE
|
In the process of researching web hosting options, I learned what it actually means when they say multi-domain hosting, i.e. almost every time these are add-on domains in a cPanel account.
However, it seems add-on domains are actually creating a terrible file structure, as each add-on domain automatically creates a subdomain of the "primary" site. This means the add-on domain will simply point to a subfolder of your "primary" domain. For example...
Your main domain is domain1.com
Your add-on domain is domain2.com
Adding domain2.com in cPanel means...
There are now at least 3 ways you can access your add-on domain:
Then each one with or without www (http://... or http://www.)
Without doing anything, chances are that eventually all three (or 6) versions of your domain2.com will be indexed by Google, resulting in duplicate content issues and link leakage (for SEO).
You may think if none of your visitors know any of your folder or subdirectory structure, and you do not actually advertise or otherwise publish those links, then what is the problem?
However, you can never be certain enough that a link is not slipping through (what's worse...if you competitor knew what domain your add-on domain is connected to, they could actually harm you by adding your links - as stated under 1. and 2. above- all over the web, so your "primary" domain might suffer page rank eventually --- using WHOIS tools makes guessing quite easy, especially if you have a dedicated IP as this lists all your sites in a list for everyone to see).
As far as I have learned (but not done myself yet), the only way to get around visitors not going through any of the "nasty" (inadvertently created) links, and to prevent Google from indexing your links is to use 301 redirects (adding some code to the .htaccess file for each add-on domain).
But although this can all be done, I just think why do they (i.e. the cPanel guys) just create something that just doesn't make any sense in the first place!!??
You can read a feature request for true multi-domain hosting on cPanel's forum - so it seems there are more people disliking this whole "add-on" thing: True Multi Domain Support (allow multiple SSL certificates and IPs per account) - cPanel Forums
This guy brings it to a point:
I know you can specify a document root outside of the public_html for new addon domains, however, the way cPanel handles addon domains internally is like a "hacked" subdomain solution.
As this "add-on" set up comes with more disadvantages...when it comes to SSL and dedicated IPs. Or think moving only ONE of multiple sites to a new host (i.e. this particular site attracts so much traffic now that it cannot remain in a shared hosting environment, so you want to move it elsewhere, to a VPS for instance). With cPanel, you can only move the whole lot, i.e. ALL your sites and files. There are ways I believe where you then delete on your new host what you don't need, or you do it all manually (not using cPanel) but the bottom line is this: WHY does multi-domain hosting need to be set up by pointing all further domains to a subfolder of the "primary" domain, in the first place..??
So an alternative would be to use a reseller hosting account, where you can set up unlimited cPanel accounts and just use one for each domain.
But do you really want to log into 30 or 40 or 100 websites individually?
So I was wondering what your take on this was, or if you were using a different web hosting control panel (not cPanel) altogether?
Does anybody know where WP (wordpress) files actually go in this scenario?
I know the structure is www.yourdomain.com/wp-content/ but that could be totally different "behind the scenes" on the server using cPanel...?
|
OPCFW_CODE
|
"""
@author: Aleph Aseffa
"""
class Card:
def __init__(self, card_name, color_group, card_cost, house_cost, houses_built, rent_prices, mortgage_amt, owner, mortgaged):
self.card_name = card_name # str
self.color_group = color_group # str
self.card_cost = card_cost # int
self.house_cost = house_cost # int
self.houses_built = houses_built # int
self.rent_prices = rent_prices # int
self.mortgage_amt = mortgage_amt # int
self.owner = owner # str
self.mortgaged = mortgaged # bool
def mortgage(self, player):
"""
Sets the card's mortgaged status to True and updates the player's balance.
:param player: An instance of the Player class.
:return: None.
"""
player.add_balance(self.mortgage_amt)
self.mortgaged = True
def sell(self, player):
"""
Returns ownership of the card to the Bank and updates the player's balance.
:param player: An instance of the Player class.
:return: None.
"""
player.add_balance(self.card_cost)
self.owner = 'Bank'
def purchase_card(self, player):
"""
Gives ownership of the card to the Bank and updates the player's balance.
:param player: An instance of the Player class.
:return: None.
"""
if self.card_cost > player.balance:
print("You cannot afford this card at the moment.")
else:
player.cards_owned.append(self)
player.reduce_balance(self.card_cost)
self.owner = player
def construct_house(self, player):
"""
Updates number of houses that have been built on the card.
:param player: An instance of the Player class.
:return: None.
"""
if self.house_cost > player.balance:
print("You cannot afford a house on this property at the moment.")
elif self.houses_built == 5:
print("You have built the maximum number of houses on this property.")
else:
self.houses_built += 1
print(f"You have built a house on {self.card_name}.")
def locate_card_object(name, board):
for card in board:
if card.card_name == name:
card_object = card
break
return card_object
|
STACK_EDU
|
Note: This corrects some missing pieces of information from the newsletter version of this article.
As users of PocketBible and MyBible you are aware that Laridian licenses books from Christian publishers and publishes them electronically. These include Bibles, commentaries, Bible dictionaries, devotionals and other Bible reference works that aren’t covered by those four general categories.
Since you’re getting this newsletter you are also familiar with our BookBuilder program which allows you to take original content (or content for private use) and turn it into a Laridian electronic book (.lbk). What you may or may not realize is that BookBuilder is the same program that we use internally to develop content for PocketBible and MyBible.
When we license content from Bible publishing houses we take what they give us as “electronic” files and turn it into what you purchase and install on your device. We get lots of different file formats from publishers: text files, pdf files, Quark files, Word documents, etc.
Since we get so many different file types there isn’t a specific program or procedure that we can use to automatically turn them into an .lbk. (Wouldn’t that be nice!?) The process is similar for each title, but not standardized.
What we work towards is to get every title into an html format and then we use TextPad to edit the html file. (An evaluation copy of TextPad is included with the BookBuilder product and also available at http://www.textpad.com/.)
One of the reasons we use TextPad is that it supports Search and Replacing using “Regular Expressions.” Regular Expressions (regexp or regexes) are a (very powerful) way of finding text using pattern matching. So, for instance, I can use regexps to find Bible references in a book file and insert tags around the references in a global manner.
Here’s an example:
To find Gen 2:7 or Romans 3:23 I would use the following regexp:
The [A-Za-z] tells TextPad (or any other program that support regexp) to look for any letter, capital or lower case. The () around the [A-Za-z] is regexp way of telling the program to hold on to that character. The * tells the program that I’m looking for one or more character that matches and allows the program to keep it as a string. In our Bible reference example this would be Gen or Romans.
The next thing that you see in the regexp is a space. (Represented by ␢.) This is important as it helps to establish the pattern for which we are looking.
Understanding what the [A-Za-z] is doing makes it fairly straightforward to see what the next group is doing. [0-9] is looking for any number zero through nine. Again the () and * tell the program that we want to hang on to the string and it may be one or more character. In our Bible reference example this is the 2 or 3 indicating the chapter number.
The semicolon next again helps to establish the pattern. And the repeated ([0-9]*) is asking for the verse numbers. In our example the 7 and 23.
So, now what?
Now that you have the pattern established you can put this into the “Find what” field in TextPad. Making sure that the “Regular Expression” box is checked when you click “Find Next” you will step through your document finding each occurrence of a basic Bible reference.
The next step is to write your “Replace with” expression.
In the Laridian book we show that a Bible reference is a Bible reference using the following tag:
<pb_link format=”bcv | bc | cv | c | v”>…</pb_link>
So we would want the following tags for our examples:
<pb_link format=”bcv”>Gen 2:7</pb_link>
<pb_link format=”bcv”>Romans 3:23</pb_link>
Creating our “Replace with” expression is simple:
The only part of this expression that is regexp syntax are the 1 2 and 3. These indicate the three strings of () that we had collected in our “Find what” expression and indicate to the program where to place each string.
and. These indicate the three strings of that we had collected in our “Find what” expression and indicate to the program where to place each string.Two things to note:
1.) What I’ve just demonstrated here is done automatically in the VerseLinker program so it is rare that you would use this exact regexp in a Search and Replace. However, once you understand this you can use it to insert tags in commentaries that indicate where to place index and sync tags. (Your “Replace with” will be a different tag.) It also lays the ground work for some other powerful regexp that we’ll talk about in a later newsletter.
2.) This example will only find simple straightforward Bible references. You will need to restructure your “Find what” expression to find the following different kinds of Bible references: 1 Samuel 3:7-12; v. 1-3; Num 4; 3 John 8; Jude 3; etc. All of these can be handled with regexp, you just need to figure out how to structure your expressions.
This is a very simple example of what is a very powerful and potentially complex tool that we use to tag our books. Over the next few newsletters I’ll talk about more of the basics of regexp and maybe tackle some higher level expressions. In the course of tagging numerous books there really are only a handful of basic regexp components that are used regularly. I’ll try to cover these to give you what you need to tag just about anything.
Let me know if you have any specific questions or would like clarification on any of this.
|
OPCFW_CODE
|
On December 11, WordPress co-founder Matt Mullenweg traveled to beautiful Madrid, Spain, to deliver his annual State of the Word keynote. It was the first time this event took place outside the United States. Against the backdrop of Palacio Neptuno—an iconic architectural gem and UNESCO World Heritage site—nearly 200 contributors, developers, extenders, and friends of the project came together to hear from Matt, with millions more joining online.
An introduction from the Executive Director
Kicking off the event, Josepha Haden Chomphosy, Executive Director of the WordPress project, spoke about the community’s heart and spirit as what fuels hope for the future, ensuring the freedoms of the open web for all. She invited Matt on stage with a closing statement of confidence that such values and characteristics will move the project forward into the next 20 years as it has for the last 20.
Looking back at 2023
Taking the stage, Matt shared his excitement about the event being the first international State of the Word. He honored the Spanish WordPress community for hosting, citing their past WordCamp accomplishments. From there, Matt jumped right into a reflection of this year’s notable moments. He recalled the project’s 20th-anniversary celebrations, how the software has evolved, and how much more the community came together this year—doubling the number of WordCamps to 70, taking place in 33 countries.
We’re always aiming to learn and improve. Tell us how to make meetups better.
Matt continued with callouts to several resources on WordPress.org: the all-new Events page, the redesigned Showcase, a new WordPress Remembers memorial, and the award-winning Openverse. He also demoed WordPress Playground, a tool allowing users to experiment with WordPress directly in their browsers, as well as the versatile Twenty Twenty-Four default theme.
Collaborative editing and more
Matt recapped the four phases of the Gutenberg project, noting that work has begun on Phase 3: Collaboration before passing the microphone to Matías Ventura, Lead Architect of Gutenberg.
After a quick interlude in Spanish, Matías acknowledged how much progress had been made on the software this year. He spoke about the aim of the Site Editor to become both an exemplary writing environment and a superior design tool while noting improvements to the Footnotes Block and the ease of Distraction Free mode.
While there was no set timeline for collaboration and workflows, Matías was excited to share a working prototype in the Editor. He showcased some of the most interesting aspects of collaborative editing, including establishing a sync engine that allows real-time edits to be visible across sessions. He invited contributors to test the prototype in the Gutenberg plugin and share their feedback in Github.
From there, Matías highlighted other exciting developments, including the emphasis on Patterns and their continued evolution as a powerful tool for workflows, and the ability to connect blocks to custom fields. He was thrilled to speak about performance improvements, noting that work is in progress to make the Editor at least twice as fast. Speaking about front-end performance, he shared what’s to come with a demo of the Interactivity API, showcasing how it can make transitions, search, and other interactions instant—all with standard WordPress blocks and features.
Matías concluded with a look at how the Admin redesign will take cues from the Site Editor, eventually allowing users to shape their WordPress Admin experience based on their unique needs.
AI and Data Liberation
Matt returned to the stage to expand on the future of WordPress, reinforcing his past advice to learn AI deeply. He expressed his excitement about what can be accomplished with the wealth of AI tools available, how contributors are already experimenting with natural language processing and WordPress Playground to create and build.
Finally, Matt introduced an additional focus for the project in 2024: Data Liberation, with the goal to make importing from other platforms into WordPress as frictionless as possible. He spoke about the tendency of content management systems to keep users locked in as part of his motivation to unlock digital barriers. The Data Liberation initiative will work on one-click migration and the export format from WordPress.
More than just tools, Data Liberation reflects the project’s ethos to allow seamless contributions. With that, Matt invited anyone interested to jump into the action, noting a new Data Liberation GitHub repository and forthcoming Making WordPress Slack channels as places to get started.
Questions and answers
Following the presentation, Matt fielded questions from the live-stream and in-person audiences during an interactive question-and-answer session hosted by Jose Ramón Padrón (Moncho).
Additional questions from the live session will be answered in a follow-up post on make.WordPress.org/project. Subscribe to our blog notifications to be sure you don’t miss it. And don’t forget to mark your calendars for next year’s WordCamp Asia (Taipei, Taiwan), WordCamp Europe (Torino, Italy), and WordCamp US (Portland, Oregon, United States).
|
OPCFW_CODE
|
That may be always be declared
The procedure must declared component description shows how to show a procedure manage_students is a role object if no warranties of the. Then component must be included in procedure that can be harmful or procedure must be declared component within alvis xml. Each concurrent statement executes simultaneously with the other concurrent statements in the same architecturebody. Time Zone Name Property Name: TZNAME Purpose: This property specifies the customary designation for a time zone description.
Physician Assistant Program
Although we could easily accomplish this task by simply refreshing the whole tree via AJAX, we would like to find a better and more performant solution for this task. Exception occurred in the customers of the procedure must be declared component has optional, but usually not be declared in. For now, just return the access type itself. The property name is derived from the method name. Sql interactively or that all of procedure must create multiple layers of an administrative bundles for review prior to compute the.
There are correct naming is possible priority is no package, displayingeach batch of doubt you how attribute allows type declared component runtime can be a floating time zone definition of the developers have a balance and constraining content. The product usually flattened resembles a dry salami ring bologna. During a notification, the subject passes the state to Notify, which just passes it in turn to each of the observers in the Update. This procedure to do great care and procedure declared outside your input. Sqlblocks in topics exposed faces shall be forced a host variables must be declared component.
The target of the reference. If the callback element is not present at all, the behaviour is runtime implementation dependent. Act, and the regulations published by the Food and Drug Administration under the Authority of these two laws. Errors PLS-00302 component 'CALLS' must be declared How to solve this Oracle. To reuse the cursor, you must reopen it. The child pointer cannot be allocated before the parent. Dto represents a dry form unless restricted ingredients of the particular service registry, these services for inline or that implementation of a procedure must create components? The entry barrier construct allows an interrupt handler to signal a normal task by changing the state of a component of the protected object and thereby making a barrier true. The first statement identifies the argument values by listing them in the order in which they appear in the procedure specification.
Binders that component implementations in order for the name and web application tier, then make sure they are initialized in procedure must declared component? To precisely which each activity reinvestment requirements: procedure declared quantity of procedure definition of the reference falls below for this algorithm removing the process and chopped and east. It defines the interfaces under which the service can be exported. SQL block also requires a private SQL area inthese SQL statements.
Declared subheading not allowed. It gives you lessonsas quickly as possible, it helps you become proÞcient in writing embedded SQLprograms. Such coloring does theclosed or procedure must be declared component component. It publishes no service so SCR will activate a component configuration immediately. Address the safety requirements related to the item and to the DEMIL processes for the item. Meat component must be declared procedure must be. It can be extended without deactivating a very large and concrete information inline binary value comparisons between web designer can be declared in the possible, and a corresponding symbol in. BERLINER: cooked smoked sausage usually made from coarsely cut cured pork large casings.
Who performs those tasks. Wicket, this classic approach becomes inadequate because it makes custom components hardly reusable. The order above does not work well for Chinese personal names, where the surname precedes the given name. In addition, the XML document is valid if it meets certain further constraints. A lot of files used by the build procedure of HELLO module are located in SALOME. Components do not add license information to processed documents. Sql procedure was not use of component configuration of the attributes or areas seen that procedure must declared component? Simply does not null status variable, component must declared as noted in contrast must also rolled back which the elements in addition the following. Dried apricots are not exempt from component declaration, therefore sulphites must be declared along with all the other components of the dried apricots, regardless of their quantities.
When is a cosmetic also a drug? If you do not want to use a synonym, then you could write a local cover procedure to call the remote procedure. GRAVY AND YANKEE POT ROAST: The product must contain least percent cooked beef. The procedure you centralize data from that service but may substituted or be able to the procedure must contain least percent meat stuffed into wide web applications! In this chapter we will continue to explore the localization support provided by Wicket and we will learn how to build pages and components ready to be localized in different languages. Each XML descriptor is next to the component class it describes. However, these cheeses may not be used satisfy the above cheese requirement.
SHOULD be treated as TEXT. In some circumstances, the time to start CORBA servers may be long and could exceed the timeout. The creator of an annotation is identified but no trace about the involved resources or annotation categories. Items in a list are not individually addressable by NIEM metadata techniques. It is not a tutorial on how to write portlets. Plus to get a list of the errors that were found. The bundle containing the stateless login using protected objects to define the interface and augmentation point at form component must be declared on a particle. Test are discussed with common errors of procedure declared in. If the value is not of the specified type, an exception is raised.
Cobol group or attributes, imitation product through component must be deleted, controlled cured meats are marked for cooked sausage products intended for the command line. In some situations, such as when you want to call a function from within a WHERE clause, the program will also need to have asserted the WNPS purity level. Executing any SQL statement sets the SQLCA variables. Person could be used to define two list when parsing such a list when executed successfully created as desired cured pork skins and procedure must be. The definition needs to one procedure declared on the structure of.
Use it to get parameters from a form in your client, to modify the status of your portlet, etc. An access classification is only one component of the general security system within a calendar application. To make an initial determination, during the acquisition process, of the DEMIL requirements via DEMIL code assignment. Date codes are compatible, must be empty and stateless form of clause if a bind a decimal. If many is not specified, it takes a default value of false. If there is an updated method name, the component must contain a method with that name.
The policy of the reference. NIEM specifications and schemas SHOULD NOT use literal presentation of XML data to people as a design criterion. Each is the reference schema or extension schema for the namespace it defines. This property defines the overall status or confirmation for the calendar component. This contains nothing for the moment. SCR must register a Component Factory service on behalf of the component as soon as the component factory is satisfied. See: Policy Memo dated March METZ SAUSAGE: Cured lean beef and pork and bacon are finely chopped, seasoned, and stuffed into beef middles. Each component declared the procedure can contain least percent percent unless overloading the procedure must be declared component?
When Does a Signature Change? Just like the Java language, property expressions support dotted notation to select sub properties. Technical data component declared locally declared quantity, procedure must be declared component must be. Floating time as necessary step to declared component must be declared procedure. It must register and procedure must declared component declared by applications. TIME value type definition for specific interpretations of the various forms. PLS-00302 Component Must Be Declared Unable to Solve I tried the below program in. Error calling a procedure inside a package of another owner. Technical managersengineers, and support activity personnel. The component within the bound service because scr must be registered separately or be declared component must be modified procedure in. If an immediate component configuration is satisfied and specifies a service, SCR must register the component configuration as a service in the service registry and then activate the component configuration.
Beef Manicotti in Sauce.
If configurations not declared component must be retrieved by the following properties or whitespace formatting and settable references may contain least percent meat used by copyright held by tree. SCR must either notify the component configuration of the change, if the component description specifies a method to be notified of such changes, or deactivate the component configuration and then attempt to reactivate the component configuration using the new configuration information. Logger object which joins with attribute follows: procedure must declared component configuration will be specified. Since you can perform only the operations and access the data structures listed in the package specification, Oracle can make technology available in a highly controlled fashion. As popularity is changing over time and as new standards and technologies are evolving, the platform will have to evolve as well.
There is no need for the content of an included composite to be complete, so that artifacts defined within the using composite or in another associated included composite file can be referenced. This so called Lost In Redirection problem, even if it looks like a wicket bug at first, is rather a result of a default setting in wicket regarding the processing of form submissions in general. Licensing information regarding components can be included within the Argo Descriptor File, however the majority of existing components within Argo do not have this information. This value type is used to identify properties that contain a recurrence rule specification. Although versioning is considered important for all resource types, the practices used for its encoding differ across them.
Thehost group items can be referenced in the INTO clause of a SELECT or a FETCHstatement, and in the VALUES list of an INSERT statement. Full length the product labeling is not required. As the number of functional capabilities increase, the number of potential vulnerabilities increase and a plan has to be implemented to reduce the risk. Do not operationally enforceable manifestation of open also be unregistered at least percent raw or conforming sql engine for component must be declared procedure calls, cost controls are used for.Restraint
|
OPCFW_CODE
|
Home › Forums › 2015 International Workshop on Quantum Foundations › Everett’s theory › Consistency of the Everett ‘world branching’ › Reply To: Consistency of the Everett ‘world branching’
After rereading the reference emphasized in your posts, I can re-emphasize my impression that, at certain point, I do agree with you.
The argument of my previous posts can be re-stated as follows: ‘world branching’ cannot be physically objective [except perhaps in a sense completely strange to me]. In your words: “Nothing ever ‘branches’ objectively in any definition.”.
[To me, this means that the phrases as ‘[dynamically] autonomous worlds’ etc are just a matter of, say, narrative aesthetics—which, according to your ‘moderating remarks’, we have agreed to avoid.]
Then, what might be new/relevant in our cited references?
Two things. First, I think that your position [that I find consistent] is not typical for the Everett MWI community; for this reason we had to go toward ‘emergent worlds’. To this end, our paper addressed the majority of the Everettians and non-Everettians. Second, even in the context of our common view of non-objectivity of ‘world branching’, the findings referenced in my first post imply some fresh and probably interesting observations. Let me briefly focus on the latter.
Consider two bipartitions, 1+2 and A+B, of a closed composite system (the Universe) C; 1+2=C=A+B. Those bipartitions (the DISs from my previous letters) determine the pair of tensor-product-structures (TPSs) of the C’s Hilbert space. Now a universal state (in an instant of the universal time), some Psi [which has never branched], can be decomposed according to these two TPSs (DISs). Our point is that the closed composite system C (the Universe) hosts the mutually independent and autonomous, simultaneously dynamically evolving quasiclassical Worlds pertaining to the 1+2 and A+B DISs; needless to say, those worlds have nothing to do with ‘Everett worlds’. That is, instead of “Appearance of a Classical World in Quantum Theory”, we learn about “Appearance of THE Classical WORLDS in Quantum Theory”. Those non-branching/non-branched, equally (non)objective worlds [as in our QBM-model-analysis] may be mutually irreducible, i.e. not capable of defining any effective, emergent single quasiclassical world in the single Universe C. To the extent the measurement problem has been solved for the 1+2 world, it must have been equally solved also for the alternative A+B world, and vice versa.
Now the Everett world branching should be assumed for the both quasiclassical worlds, 1+2 and A+B, separately, and on the equal footing. In other words: instead of one Everett Multiverse, there are (at least) two mutually independent and irreducible Everett Multiverses. Certainly, this is a new and not yet explored picture of the quantum Universe (though initiated in http://arxiv.org/abs/1004.0148 ).
Many Worlds? Everett, Quantum Theory, and Reality,
Eds. S. Saunders, J. Barrett, A. Kent, and D. Wallace, Oxford 2010.
|
OPCFW_CODE
|
Homology of topological manifolds
Let $X$ be a topological manifold of dimension $n$ (assuming perhaps that there is a countable basis of open sets). Do NOT assume that $X$ is compact, or oriented, or triangulable (so do not assume it to be smooth).
Can we still conclude that the homology groups $H_i(X, \mathbb{Z})$ are finitely-generated?
Can we still conclude that $H_i(X, \mathbb{Z})=0$ for $i > n$ ?
The second point is related to the first, as one can show that $H_i(X, \mathbb{Q})=0$ and $H_i(X, \mathbb{F}_p) = 0$ for all primes $p$, when $i>n$. If $H_i(X, \mathbb{Z})$ were known to be finitely-generated, we would conclude by the universal coefficients theorem.
[To see the claim: if $X$ is $k$-oriented, it is part of Poincaré duality that $H_i(X, k)=0$ for $i>n$, and $X$ is always $\mathbb{F_2}$-oriented. Next, consider $Y \to X$ the canonical 2-sheeted cover with $Y$ oriented, and apply the homology Serre spectral sequence to the fibration $Y \to X \to B\mathbb{Z}/2$; this gives $H_i(X, k)=0$ when $i>n$ for each ring $k$ in which 2 is invertible.]
thanks!
Pierre
A countably infinite discrete set of points violates condition $1$.
Point 1 is false, see @user2520938's comment (for a connected example, take an infinite connected sum of genus 1 surfaces). Point 2 is Proposition 3.29 in Hatcher if $X$ is noncompact, and Theorem 3.26 if $X$ is compact. Also you have to be a bit careful with your spectral sequence argument since the base of your fibration is not connected.
I'm pretty sure that answer on question 2 depends a lot on the choice of homology theory. And it seems to me that "tubular neigborhood" of 2-dimensional Hawaiian earring would have nontrivial homology in degrees higher that 2 (Barrat&Milnor, An example of anomalous singular theory).
@Najib: thanks for the reference! as for the spectral sequence, $BC_2$ is certainly connected, but you probably worry about simple-connectedness; however, page 2 of the sequence is just $H_(C_2, H_(Y))$, the homology of the group $C_2$ with nontrivial coefficients, and this vanishes in positive degrees if multiplication by 2 is an isomorphism on $H_*(Y)$.
@Denis: the example in Barratt-Milnor is not a manifold; are you saying that one could produce a manifold example by taking a tubular neighbourhood? This would violate both Hatcher's result and my spectral sequence argument (since they use rational coefficients in this paper anyway).
Is this useful?
It's already been noted in the comments that 1) is false: take for example an infinite genus surface.
2) is true. For an oriented manifold you have $H_i(M)\cong H^{n-i}_c(M)$, the cohomology with compact supports (see Hatcher Theorem 3.35). For $i>n$, the right side is $0$ trivially. If $M$ is not oriented, you have essentially the same theorem but with twisted coefficients in the cohomology. See page 207 of the Springer edition of Bredon's Sheaf Theory.
I actually prefer the reference to 3.29 from Hatcher's book, pointed out by Najib in the comments.
|
STACK_EXCHANGE
|
Acceptable use policy
A policy is also where you define how ETP handles violations to an acceptable use policy (AUP). ETP includes AUP categories for content that you can block within an enterprise. For descriptions of these categories, see Acceptable use policy categories.
When configuring a policy, an ETP administrator can select to allow or block access to websites in each category. Administrators allow access to a category by deselecting the block action. If you block an AUP category, a warning message appears to the user when they attempt to access content in that category. You can change the look and feel of the message that appears to the user. For more information see Error pages.
With application visibility and control, you can add an AUP category to the policy and select a policy action. You can also see the web applications that are associated with each AUP category. To learn more about AVC, see Application visibility and control.
- Scan requested content with ETP malware engines. If ETP Proxy is configured as a full web proxy, ETP Proxy scans websites that are allowed in the AUP. This means categories that are not blocked are scanned by ETP malware engines. For more information about full web proxy, see Full web proxy.
- Configure an authentication policy. To prompt users to authenticate before accessing an allowed website, you can select the Require or Optional authentication modes. Otherwise, you can select None. For more information, see Authentication policy.
- Select the users and groups that are exceptions to a blocked AUP category. This functionality is available when authentication is required or optional in a policy configuration. Users or groups that are exceptions to a block action are prompted to authenticate. If no threat is detected, these users are granted access to websites in these categories. To select users or groups as exceptions, you must assign an identity provider to the policy.
- Select to bypass an AUP
category. This action allows websites in the associated category
to bypass ETP or if the proxy is enabled, ETP
You may want to select the bypass action for categories that are associated with sensitive information such as the Finance & Investing and the Healthcare categories. This action prevents ETP or ETP Proxy from inspecting this traffic.
- Select a default action when
no action is assigned to an AUP category. The Default Action
menu in the policy settings defines the default action for an AUP category when
no action is assigned in the AUP policy. You can select the Bypass, Classify, or
Block - Error Page actions. Note the following about these actions:
- Bypass. Bypasses ETP Proxy and directs requests to the origin.
- Classify. Directs traffic to ETP Proxy where it’s scanned.
- Block - Error Page. Blocks traffic and shows users an error page.
If the Default Action option in the policy is set to Bypass for the selective proxy, categories that are not blocked are reported as unclassified.
- Anonymizers. Subcategory of the Large Bandwidth AUP category. This category is made up of services that allow users in your corporate network to bypass enterprise security settings. These services may include a personal VPN or an anonymizing proxy.
- File Sharing. Category for file sharing services or applications such as Dropbox, Google Drive, and OneDrive. These services allow users to download and upload a large number of files to your network, potentially creating a backdoor to your organization's network. If you do not want to block File Sharing, ETP provides a policy option that allows you to analyze downloads from these domains. For more information, see Scan file sharing downloads for malware.
If your organization is enabled to use a custom response with the AUP and ETP Proxy is disabled, you can associate a custom response to a blocked action. As part of the block action, traffic to blocked websites is forwarded to the custom response. Information about the machine that made the request is recorded. Keep in mind that this data is not reported in ETP. To learn more about custom responses, see Custom response.
|
OPCFW_CODE
|
Were postal stamp depicting LTTE symbols and its chief, Prabhakaran, released by government of France?
There were news saying that France has issued or released a postal stamp of LTTE chief Prabhakaran. I am wondering if such stamps were issued by the French postal services.
Here are the some sources reporting the story:
Lanca C news
A forum post
India Today
I would like to know
Were postal stamp depicting LTTE symbols and its chief, Prabhakaran, released by government of France?
Is there an online postal stamp bureau published and managed by the Government of France, where we can surf through all postal stamps and their governmental approvals?
About LTTE and Prabakaran:
LTTE - Liberation Tigers of Tamil Eelam and Prabakaran in the leader, mentor and chief of this organization.
"Although there are a lot of extremist and rebel groups in this world, few of the are rebellious but not terror organization like LTTE" - Hillary Clinton.
This organization was started for civil rights that are ran towards democracy in SriLanka. It was supported and funded by Indian government until the death of Rajiv Gandh. Srilanka is not 100% democratic country, like USA, UK and India.
For those not familiar, could you add a note to explain what LTTE is, who Prabhakaran is, and why it would be surprising for France to issue a postage stamp of him?
@NateEldredge, http://en.wikipedia.org/wiki/Liberation_Tigers_of_Tamil_Eelam
Post stamps bearing the image of Prabhakaran and other LTTE symbols were issued in France, not by the French government but by private users using the ability to create personalized post stamps from an image uploaded by a user. The French government has issued an apology for this, as they consider LTTE to be a terrorist organization.
Images of the stamps (from Colombo Telegraph):
The declaration in the site of the French Embassy in Colombo:
Following reports published in different newspapers regarding the issuance in France of stamps bearing pictures and symbols of the LTTE, the Embassy of France in Sri Lanka wishes to inform the media and the Sri Lankan public that no such stamp is part of the official philatelist programme of France, neither on sale in the French post offices. The French Postal services, “La Poste”, offers an online service through which customers can order limited quantities of personalized stamps under their responsibility and under specific conditions. It appears that individuals have used this service to order the above mentioned stamps.
“La Poste” recognized that they failed to detect that the conditions had not been met and that some of these stamps have been printed by mistake.
Having been informed by the French Ministry of Foreign Affairs that LTTE is a movement which has been listed since 2006 as a terrorist organization by the European Union, “La Poste” assures that no such stamp will be printed further.
Since 1991, le “Groupe La Poste”, of which French postal services are a part, is an independent public, industrial and commercial institution.
On the 4th of January 2012, the French Ambassador has reconfirmed the elements of the press release in a TV interview at the SLRC channel , and added :
"It is a very unfortunate and regrettable error for which La Poste apologized to the SL Embassy in Paris.
[...]
I would like to seize the opportunity of this voice cut with Sri Lanka Rupavahini Corporation to reaffirm that France strongly condemns all form of terrorism all over the world and that the French government could in no way have supported an initiative aiming at backing the LTTE. "
The same happened to Norway in 2012, and they also issued an apology to the Sri Lancan government:
From the Sri Lankan Ministry of External Affairs site:
The Norwegian Postal Authority, Posten Norge, has issued an apology to the Sri Lankan Ambassador in Oslo, Mr. E. Rodney M. Perera, for the recent issuance of LTTE stamps in Norway. In his letter addressed to the Ambassador of Sri Lanka, the President & CEO of Posten Norge, Mr. Dag Mejdell, states, inter alia,
It is of utmost importance to us that these stamps [featuring LTTE symbols] are not related to any official stamp issue by Posten Norge. This is related to a production of “personal stamps” which can be ordered through our webshop by anyone, choosing a personal image by themselves. All orders related to personal stamps are checked by us, and it is our policy never to produce stamps showing “illegal, inappropriate or improper” images. Unfortunately, we regret to learn that the order in question passed our quality and assurance control without being stopped.
These stamps were ordered in a very small quantity. The customer has been informed that the order is violating the conditions for ordering personal stamps and the user account has been closed.
Please accept our sincere apologies for the inconvenience caused by having delivered personal stamps with the LTTE image.
And in Canada in 2011 a similar attempt was made but the Canadian postal services identified it and rejected the request, from Daily Mirror:
The original request for a personalised pro-LTTE postage stamp in Canada depicting ‘Tamil Eelam’ had been turned down by the Canadian postal service in May 2011, the Canadian High Commission in Colombo said yesterday.
A spokesperson for the Canadian High Commission when contacted by the Daily Mirror said, “The original request for the personalised stamp in question was submitted to Canada Post through its Picture Postage website in May 2011. The submission was rejected and the payment was returned to the requester. The representative advised that officials of Picture Postage had determined that the image was not appropriate as a Canadian postage stamp.”
“Such a stamp was never in legal Canadian circulation. Further investigation of Canada Post’s internal records confirmed that no other requests of this nature had been received since May 2011,” the spokesperson added.
It would be great if you post either of the proofs. That is either the apology letter of Government of France to Srilanka OR a proof which could reveal its actual release of postal stamp with it's approval.
@Neocortex, see edit
|
STACK_EXCHANGE
|
<EMAIL_ADDRESS>Fix manifest for v9
https://imageglass.org/news/introducing-the-new-imageglass-version-9-88
Breaking changes
User config files are now in the JSON format: igconfig.json. ImageGlass 9 cannot import settings from version 8.
Similarly, theme and language pack also updated to use JSON format. You can download theme packs for version 9 at Download / Theme packs.
For power users:
Version 9 uses Per-user setting model which is different from Per-machine setting of version 8:
The app settings are now saved in the %LocalAppData% folder. Refer to Docs / App configuration for more information.
The app registry uses HKEY_CURRENT_USER for file type associations and app protocol.
igtasks.exe has been removed.
All commands in igcmd.exe have been reworked. Visit Docs / Command line utilities for the updated information.
To skip the "ImageGlass Quick Setup" dialog, set QuickSetupVersion value to any number greater than 9 in the igconfig.json file.
Closes #12313
[x] I have read the Contributing Guide.
/verify
Should we set autoupdate to false in the new config.json during pre_install (like we used to do with the xml)?
Should we set autoupdate to false in the new config.json during pre_install (like we used to do with the xml)?
Yes, it would be a good idea to maintain the original behavior.
After fresh install of version 9 using this manifest, ImageGlass is not starting
Could not load user settings
The user configuration file:
appears to be corrupted or contains an invalid value. Please review the details below to address the issue before proceeding
System.NullReferenceException: Object reference not set to an instance of an object. at ImageGlass.Settings.Config.LoadThemePack(Boolean darkMode, Boolean useFallBackTheme, Boolean throwIfThemeInvalid, Boolean forceUpdateBackground) in D:\_GITHUB\@d2phap\ImageGlass\Source\Components\ImageGlass.Settings\Config.cs:line 1536 at ImageGlass.Settings.Config.Load() in D:\_GITHUB\@d2phap\ImageGlass\Source\Components\ImageGlass.Settings\Config.cs:line 969 at ImageGlass.Program.Main() in D:\_GITHUB\@d2phap\ImageGlass\Source\ImageGlass\Program.cs:line 63
After fresh install of version 9 using this manifest, ImageGlass is not starting Could not load user settings The user configuration file: appears to be corrupted or contains an invalid value. Please review the details below to address the issue before proceeding System.NullReferenceException: Object reference not set to an instance of an object. at ImageGlass.Settings.Config.LoadThemePack(Boolean darkMode, Boolean useFallBackTheme, Boolean throwIfThemeInvalid, Boolean forceUpdateBackground) in D:\_GITHUB\@d2phap\ImageGlass\Source\Components\ImageGlass.Settings\Config.cs:line 1536 at ImageGlass.Settings.Config.Load() in D:\_GITHUB\@d2phap\ImageGlass\Source\Components\ImageGlass.Settings\Config.cs:line 969 at ImageGlass.Program.Main() in D:\_GITHUB\@d2phap\ImageGlass\Source\ImageGlass\Program.cs:line 63
Could you try scoop uninstall -p imageglass? Themes are incompatible.
|
GITHUB_ARCHIVE
|
Data structure for dynamically changing n-length sequence with longest subsequence length query
I need to design a data structure for holding n-length sequences, with the following methods:
increasing() - returns length of the longest increasing sub-sequence
change(i, x) - adds x to i-th element of the sequence
Intuitively, this sounds like something solvable with some kind of interval tree. But I have no idea how to think of that.
I'm wondering how to use the fact, that we completely don't need to know how this sub-sequence looks like, we only need its length...
Maybe this is something that can be used, but I'm pretty much stuck at this point.
Did you mean a contiguous subsequence?
No, I meant a general subsequence, I think contiguous case is much simpler.
I assumed as much, but I wanted to be sure before trying to make heads or tails of valdem's answer
So if I understand you correctly, it should a variable number of variable-length sequences. Each sequence in this data structure may be in any order.
increasing() returns the longest increasing sub-sequence, which may be an entire sequence or may be a portion of one.
'change(i,x)' appends (?) x to the ith subsequence?
So, are there a constant number of sequences? Or is there another operation to create a new empty sequence?
Or.. reading it again, the data structure holds a single variable length sequence and change(i,x) inserts x at the ith position in the single sequence? Which is the behavior you are going for?
change(i, x) changes single variable from n-length sequence (e.g. of numbers, something like a[i]+=x)
increasing() returns length of the greatest length SUBSEQENCE contained in the sequence
aha! So it has constant length n, and change(i,x) adds x to the ith position of the sequence. Got it. Can x be negative?
n is not a constant, this is a variable on which we do our asymptotic analysis. x can be negative.
can you give an example of a few calls to your proposed methods and what you want to get back as a response?
This solves the problem only for contiguous intervals. It doesn't solve arbitrary subsequences. :-(
It is possible to implement this with time O(1) for interval and O(log(n)) for change.
First of all we'll need a heap for all of the current intervals, with the largest on top. Finding the longest interval is just a question of looking on the top of the heap.
Next we need a bunch of information for each of our n slots.
value: Current value in this slot
interval_start: Where the interval containing this point starts
interval_end: Where the interval containing this point ends
heap_index: Where to find this interval in the heap NOTE: Heap operations MUST maintain this!
And now the clever trick! We always store the value for each slot. But we only store the interval information for an interval at the point in the interval whose index is divisible by the highest power of 2. There is always only one such point for any interval, so storing/modifying this is very little work.
Then to figure out what interval a given position in the array currently falls in, we have to look at all of the neighbors that are increasing powers of 2 until we find the last one with our value. So, for instance, position 13's information might be found in any of the positions 0, 8, 12, 13, 14, 16, 32, 64, .... (And we'll take the first interval we find it in in the list 0, ..., 64, 32, 16, 8, 12, 14, 13.) This is a search of a O(log(n)) list so is O(log(n)) work.
Now how do we implement change?
Update value.
Figure out what interval we were in, and whether we were at an interval boundary.
If intervals got changed, remove the old ones from the heap. (We may remove 0, 1 or 2)
If intervals got change, insert the new ones into the heap. (We may insert 0, 1, or 2)
That update is very complex, but it is a fixed number of O(log(n)) operations and so should be O(log(n)).
I think this is looking for contiguous subsequences, like valdem's answer but unlike the question.
LIS can be solved with tree, but there is another implementation with dynamic programming, which is faster than recursive tree.
This is a simple implementation in C++.
class LIS {
private vector<int> seq ;
public LIS(vector<int> _seq) {seq = _seq ;}
public int increasing() {
int i, j ;
vector<int> lengths ;
lengths.resize(seq.size()) ;
for(i=0;i<seq.size();i++) lengths[i] = 1 ;
for(i=1;i<seq.size();i++) {
for(j=0;j<i;j++) {
if( seq[i] > seq[j] && lengths[i] < lengths[j]+1 ) {
lengths[i] = lengths[j] + 1 ;
}
}
}
int mxx = 0 ;
for(i=0;i<seq.size();i++)
mxx = mxx < lengths[i] ? lengths[i] : mxx ;
return mxx ;
}
public void change(i, x) {
seq[i] += x ;
}
}
This doesn't look like the most efficient solution. In an ideal solution, when you're asking for increasing(), you won't solve the problem from scratch every time...
Rather, that information should be easily available somehow, using nontrivial work that change() would do every time it's called.
@qiubit As you didn't specify the constrains for preprocessing time, update time and query time, it is a little bit unfair to say now, that this algorithm is too slow.
I try to explain my idea. It can be a bit simpler than implementing interval tree, and should give desirable complexity - O(1) for increasing(), and O(logS) for change(), where S is sequences count (can be reduced to N in worst cases of course).
At first you need original array. It need to check borders of intervals (I will use word interval as synonym to sequence) after change(). Let it be A
At the second you need bidirectional list of intervals. Element of this list should store left and right borders. Every increasing sequence should be presented as separate element of this list and this intervals should go one after another as they presented in A. Let this list be L. We need to operate pointers on elements, so, I don't know is it possible to do it on iterators with standard container.
At third you need priority queue that stores lengths of all intervals in you array. So, increasing() function can be done with O(1) time. But you need also storing of pointer to node from L to lookup intervals. Let this priority queue be PQ. More formally you priority queue contains pairs (length of interval, pointer to list node) with comparison only by length.
At forth you need tree, that can retrieve interval borders (or range) for particular element. It can be simply implemented with std::map where key is left border of tree, so with help of map::lower_bound you can find this interval. Value should store pointer to interval in L. Let this map be MP
And next important thing - List nodes should stores indecies of corresponding element in priority queue. And you shouldn't work with priority queue without connection with link to node from L (every swap operation on PQ you should update corresponding indecies on L).
change(i, x) operation can be looks like this:
Find interval, where i located with map. -> you find pointer to corresponding node in L. So, you know borders and length of interval
Try to understand what actions need to do: nothing, split interval, glue intervals.
Do this action on list and map with connection with PQ. If you need split interval, remove it from PQ (this is not remove-max operation) and then add 2 new elements to PQ. Similar if you need to glue intervals, you can remove one from PQ and do increase-key to second.
One difficulty is that PQ should support removing arbitrary element (by index), so you can't use std::priority_queue, but it is not difficult to implement as I think.
If I understand correctly, this answer is for contiguous subsequences, which is an easier problem.
Yeah, I tried to parse that answer and it looks like it. What did you mean by "At third you need priority queue that stores lengths of all intervals in you array. So, increasing() function can be done with O(1) time."? As I understand, intervals represent sequences (as you wrote earlier), so on top of the priority queue, there will be length of the longest SEQUENCE, which is not what this problem is about. Or maybe by O(1) you meant something else than ExtractMax() operation, correct me if I'm wrong.
Yes, sorry for unclear answer. I solved contiguous subsequences problem, maybe because question is unclear for me. By O(1) I meant TakeMax() because we don't want extract Max (remove element from queue)
|
STACK_EXCHANGE
|
========================================================================== J/PASP/132/K4102 Beamed and unbeamed emission of gamma-ray blazars (Pei+, 2020) The following files can be converted to FITS (extension .fit or fit.gz) table1.dat ========================================================================== Query from: http://vizier.u-strasbg.fr/viz-bin/VizieR?-source=J/PASP/132/K4102 ==========================================================================
drwxr-xr-x 6 cats archive 134 Mar 4 2021 [Up] drwxr-xr-x 3 cats archive 273 Sep 7 22:56 [TAR file] -rw-r--r-- 1 cats archive 469 Mar 4 2021 .message -r--r--r-- 1 cats archive 4259 Mar 4 2021 ReadMe -rw-r--r-- 1 cats archive 734 Mar 4 2021 +footg5.gif -rw-r--r-- 1 cats archive 6327 Mar 4 2021 +footg8.gif -r--r--r-- 1 cats archive 51316 Feb 24 2021 table1.dat [txt] [txt.gz] [fits] [fits.gz] [html]
Beginning of ReadMe : J/PASP/132/K4102 Beamed and unbeamed emission of gamma-ray blazars (Pei+, 2020) ================================================================================ Beamed and unbeamed emission of gamma-ray blazars. Pei Z., Fan J., Yang J., Bastieri D. <Publ. Astron. Soc. Pac.,, 132, k4102 (2020)> =2020PASP..132k4102P (SIMBAD/NED BibCode) ================================================================================ ADC_Keywords: Active gal. nuclei ; QSOs ; Redshifts; Radio sources Abstract: A two-component model of radio emission has been used to explain some radio observational properties of Active Galactic Nuclei (AGNs) and, in particular, of blazars. In this work, we extend the two-component idea to the gamma-ray emission and assume that the total gamma-ray output of blazars consists of relativistically beamed and unbeamed components. The basic idea leverages the correlation between the radio core-dominance parameter and the gamma-ray beaming factor. To do so, we evaluate this correlation for a large sample of 584 blazars taken from the fourth source catalog of the Fermi Large Area Telescope (Fermi-LAT) and correlated their gamma-ray core-dominance parameters with radio core-dominance parameters. The gamma-ray beaming factor is then used to estimate the beamed and unbeamed components. Our analysis confirms that the gamma-ray emission in blazars is mainly from the beamed component. Description: The Fermi-Large Area Telescope (Fermi-LAT) has revolutionized our view of the gamma-ray sky. They are collated into the latest Fermi catalog, 4FGL, which includes 5065 sources based on the first 8 yr of data. AGNs are the vast majority of the catalog entries and 98% of AGNs are blazars (Abdollahi et al., 2020ApJS..247...33A, Cat. J/ApJS/247/33; Ajello et al., 2020ApJ...892..105A). We have separated the beamed and unbeamed contributions to the total gamma-ray emission for a sample of 584 Fermi-detected blazars with available radio core-dominance parameters (Pei et al., SCPMA, 63, 259511).
|
OPCFW_CODE
|
1. Install Wine2. Prepare Wine environment for IL-2 server3. Folder structure and startup scripts4. Make another server from existing one5. Final words1. Install Wine
Add PPA repository for wine and install it.
sudo apt-add-repository ppa:ubuntu-wine2. Prepare Wine environment for IL-2 server
sudo apt-get update && sudo apt-get install wine
mkdir il2serversMake IL-2 server CRT-2 compatible.
env WINEPREFIX=/home/ante/il2servers/server1 winecfg
Set application il2server.exe to use native windows 2003 msvcrt.dllDownload msvcrt.dll
Copy msvcrt.dll to /home/ante/il2servers/server1/drive_c/windows/system32 Add new drive D: to Wine
Add drive_d in drives section, we will later put there IL-2 server, HyperLobby and FBDj.
Wine maps drive letters to folders, examine /home/ante/il2servers/server1 and you will see
drive_c - this is C: drive, windows system drive and we will leave it as it is
drive_d - this is D: drive, we will use this folder for our stuff 3. Folder structure and startup scripts
Here is example of mine wine environment folder structure. As you can see I put all my stuff in drive_d (hyperlobby, dummy server for hl, fbdj, server).
I made startup scripts which I use for starting those programs. Most important file there is "vars" file that points wine to correct environment folder.
In our case vars contains
# points wine to correct wine environment location
# each server has it's own environment
All other scripts load "vars" file and than start their own programs like HL, il2server etc. Download startup scripts
All left now is to copy your programs to drive_d and to configure them. 4. Make another server from existing one
a) Copy complete folder /home/ante/il2servers/server1 to /home/ante/il2servers/server2
b) change vars file content to point to your new location
c) change parameters for HL, IL2 server like ports, username etc.
Now you have 2 independent installation of IL-2 servers.
1st located in /home/ante/il2servers/server1
2nd located in /home/ante/il2servers/server2
Startup scripts will help you to manage each server easily. 5. Final words
- Your wine environment is now located for server1 in /home/ante/il2servers/server1 and for server2 in /home/ante/il2servers/server2
- This environment is similar to windows installation on PC, look at Wine environment as it is virtual PC, it has drives C: and D:, it is independent of other wine environments on your Linux server
|
OPCFW_CODE
|
Meta Data Repositories: Where We’ve Been And Where We’re Going
By David Marco
I thought that it would be valuable to take a look back and see where meta data management and repositories have come from and where the meta data management and repository industry are headed. As the old saying goes, “We’ve come a long way baby!”
Many people believe that meta data and meta data repositories are new concepts, but their origins date back to the early 1970s, or in more general terms back to the first days of computing. When we first started building computer systems, we realized that there was a “bunch of stuff” (knowledge) that was absolutely necessary for building, using, and maintaining information technology (IT) systems. We learned very quickly that meta data existed throughout all of our organizations (see Figure 1). Meta data is stored in our systems, technical processes, business processes, policies and people. Essentially, we knew that we had no place to put any of this information (meta data). At this point, we realized that we needed data about the data that we were using in our computer systems.
Figure 1: Meta Data Points
Early Commercial Products
When the first commercial meta data repositories appeared in the mid-1970s, they were called “data dictionaries”. These data dictionaries were very “data” focused and less “knowledge” focused. They provided a centralized repository of information about data, such as meaning, relationships, origin, domain, usage, and format. Their purpose was to assist database administrators (DBAs) in planning, controlling, and evaluating the collection, storage, and use of data. One of the challenges that meta data repositories have today is differentiating themselves from data dictionaries. While meta data repositories perform all of the functions of a data dictionary, their scope is far greater. The early meta data repositories (data dictionaries) were mainly used for defining requirements, corporate data modeling, COBOL (common business-oriented language) and PL/1 (programming language one) data definition generation and database support (see Figure 2).
Figure 2: 1970s: Data Dictionaries Masquerading as Repositories
Later a new phenomenon would enter the world of IT and forever change it…the personal computer (PC). When PCs burst onto the business scene, they changed the way companies worked and fueled tremendous gains in productivity. CASE (computer aided software engineering) was one of the productivity gains. CASE tools were software applications that automate the process of designing databases, applications, and software implementation. These design and construction tools stored data about the data (meta data) that they managed (see Figure 3).
It didn’t take long before the users of the CASE tools started asking their vendors to build interfaces to link the meta data from various CASE tools together. The CASE tool vendors were reluctant to build these interfaces because they believed that their own tool’s repository could provide all of the necessary functionality, and, understandably, they didn’t want companies to be able to easily migrate from their tool to a competitor’s tool. Nevertheless, some interfaces were built, either using vendor tools or dedicated interface tools.
In 1987, the need for CASE tool integration triggered the Electronic Industries Alliance (EIA) to begin working on a CASE data interchange format (CDIF), which attempted to tackle the problem by defining meta models for specific CASE tool subject areas by means of an object oriented entity relationship modeling technique. In many ways, the CASE Data Interchange Format (CDIF) standards came too late for the CASE tool industry.
Figure 3: 1980s: CASE Tool-Based Repositories
During the 1980’s, several companies, including IBM, announced mainframe-based meta data repository tools. These efforts were the first meta data initiatives, but their scope was limited to technical meta data and almost completely ignored business meta data. Most of these early meta data repositories were just glamorized data dictionaries, intended, like the earlier data dictionaries, for use by DBAs and data modelers. In addition, the companies that created these repositories did little to educate their clients in the use of these tools. Few companies saw much value in these early repository applications.
In the 1990’s, decision support emerged and soon convinced business managers of the value of a meta data repository, expanding the scope of the early repository efforts well beyond that of data dictionaries.
The meta data repositories of the 1990’s featured a client-server paradigm as opposed to the traditional mainframe platform on which the old repositories operated. The mainframe vendors viewed these new repositories as a threat since they greatly eased the task of migrating from a mainframe environment to the new and popular client-server architecture. The multiplicity of decision support tools requiring access to meta data re-awakened the slumbering repository market. Vendors such as Rochade, RELTECH Group, and BrownStone Solutions were quick to jump into the fray with new repository products. Many older, established computing companies recognized the market potential and attempted, sometimes successfully, to buy their way in by acquiring these pioneer repository vendors. For example, Platinum Technologies purchased RELTECH, BrownStone, and LogicWorks, and was then swallowed by Computer Associates in 1999.
Figure 4: 1990s: Decision Support Meta Data Repositories
Where Are We Headed?
Meta Data Management Moving Mainstream
Currently meta data management and meta data repository development is at a very similar stage as data warehousing was in the early 1990’s. In the early 1990’s, people like Bill Inmon were essential in articulating the value of building data warehouses. At that time companies were beginning to listen and were starting to build their data warehouse investments. Meta data repositories are moving in much the same direction today. In fact at Enterprise Warehousing Solutions (EWS) we are doing more meta data repository development now than at any other point in our company’s history. Companies are beginning to realize that they need to make significant investments in their repositories in order for their systems to provide value.
Meta Data Repositories Providing Knowledge Management
All corporations are becoming more intelligent. Businesses realize that to attain a competitive advantage they need their IT systems to manage more than just their data; they must manage their knowledge (meta data). As a corporation’s IT systems mature, they progress from collecting and managing data to collecting and managing knowledge. Knowledge is a company’s most valuable asset and a meta data repository is the key to managing a company’s corporate knowledge (for more information on this topic see “A Meta Data Repository Is The Key To Knowledge Management”, David Marco).
Maturing Meta Data Integration Products
There has been no tougher critic of the meta data integration vendors than myself and I still believe that these vendors are neglecting their most important user: the business user. With that being said, in the past year I also have seen across-the-board improvements by almost all of the vendors in this area. New vendors like Data Advantage Group are coming onto the meta data integration scene with new and exciting products. In addition, the more traditional meta data repository vendors like Computer Associates and Allen Systems Group have all dramatically improved their product lines.1
Table 1: Meta Data Integration Vendors
Meta Data Repositories/Management Continue Moving Mainstream
To illustrate how far meta data repository development has come, around 9 months ago I was asked to speak for one day to a group of approximately 15 IT senior vice-presidents of banks. their number one technology issue was meta data! When I spoke on meta data many years ago, we were lucky to have 15 IT developers in a talk. In most every Fortune 500 company there exists massive amounts of redundant data (I have experienced that the average company has 4 fold needless data redundancy), needlessly redundant systems, and tremendous data quality problems. Fortunately, executive management are starting to realize that these problems result in a tremendous cost drain for their companies. These same people are looking to control the costs of their IT departments through the use of meta data repositories. As a result, meta data repositories and meta data management are continuing to move up corporations’ IT priority lists.
1 If you are assessing these vendor’s products you may be interested in a third-party evaluation. Information on EWS’ 150-page comparative study of these producor by emailing ts can be found on the EWS website at www.EWSolutions.com or by emailing email@example.com decision support technologies (866) EWS-1100. He may be reached directly at (708) 233-6330 or via email at firstname.lastname@example.org
|
OPCFW_CODE
|
The MATLAB function accumarray seems to be under-appreciated. accumarray allows you to aggregate items in an array in the way that you specify.
Since accumarray has been in MATLAB (7.0, R14), there have been over 100 threads in the MATLAB newsgroup where accumarray arose as a solution.
One of the more recent threads asks how to aggregate values in one list based on another list. Suppose the lists are
group = [1 2 2 2 3 3]' data = [6 43 3 4 2 5]'
group = 1 2 2 2 3 3 data = 6 43 3 4 2 5
and the goal is to sum the data in each group. Let's first create the first input argument. accumarray wants the an array of subscripts of the data pertaining to which output value the data belongs to. Since we're just producing a column vector with 3 values, we just append a column of ones to the group vector.
indices = [group ones(size(group))]
indices = 1 1 2 1 2 1 2 1 3 1 3 1
Since the default function for accumulation is sum, we can use the simplest form of accumarray to get the desired results.
sums = accumarray(indices, data)
sums = 6 50 7
We can instead accumulate the results by adding 2 input arguments to the function call. These are the a size vector for the output array and a function handle specifying the accumulating function.
sums1 = accumarray(indices, data, [numel(unique(group)) 1], @sum)
sums1 = 6 50 7
It's easy to see that the results from the two function calls are the same.
ans = 1
Sometimes, summing the results isn't what I'm looking for. Having puzzled out the 4 input call syntax, I can now simply replace the accumulation function. To find the maximum values in each group, I use this code.
maxData = accumarray(indices, data, [numel(unique(group)) 1], @max)
maxData = 6 43 5
maxData = accumarray(indices, data, [numel(unique(group)) 1], ... @(x)~any(isfinite(x)))
maxData = 0 0 0
data(end) = Inf maxData = accumarray(indices, data, [numel(unique(group)) 1], ... @(x)~any(isfinite(x)))
data = 6 43 3 4 2 Inf maxData = 0 0 0
maxData = accumarray(indices, data, [numel(unique(group)) 1], ... @(x)all(isfinite(x)))
maxData = 1 1 0
John D'Errico made a more general function consolidator, found on the MathWorks File Exchange to allow you to do some extra aggregation. For example, consolidator allows the aggregation of elements when they are within a specified tolerance and not just identical.
Some other obvious accumulation functions you might use include sum, max, min, prod. What functions do you use in situations when you aggregate with accumarray?
To leave a comment, please click here to sign in to your MathWorks Account or create a new one.
|
OPCFW_CODE
|
import { languages, Hover, Position, Range, TextDocument } from 'vscode'
import { DirectiveType } from '../directives/parser'
import { LanguageParser } from './parser'
import { register } from './index'
jest.mock('vscode')
describe('Language extension', () => {
describe('Decorations', () => {
const languageParser = () => {
const subscriptions: any[] = []
register(subscriptions)
const langParser: LanguageParser = subscriptions.find(
(subscribed) => subscribed instanceof LanguageParser
)
expect(langParser).toBeInstanceOf(LanguageParser)
return langParser
}
it('sets decorations to directives when updated the active editor', () => {
const parser = languageParser()
const editorMock = { setDecorations: jest.fn() }
const dummyRangeG = new Range(new Position(0, 0), new Position(0, 10))
const dummyRangeL = new Range(new Position(1, 0), new Position(1, 10))
parser.emit(
'activeEditorUpdated',
editorMock as any,
{
directvies: [
{ info: { type: DirectiveType.Global }, keyRange: dummyRangeG },
{ info: { type: DirectiveType.Local }, keyRange: dummyRangeL },
],
} as any
)
expect(editorMock.setDecorations).toHaveBeenCalledWith(
expect.objectContaining({ fontWeight: 'bold' }), // for directive keys
[dummyRangeG, dummyRangeL]
)
expect(editorMock.setDecorations).toHaveBeenCalledWith(
expect.objectContaining({ fontStyle: 'italic' }), // for global directive keys
[dummyRangeG]
)
})
it('clears decorations when dispoed the active editor', () => {
const parser = languageParser()
const editorMock = { setDecorations: jest.fn() }
parser.emit('activeEditorDisposed', editorMock as any)
expect(editorMock.setDecorations).toHaveBeenCalledWith(
expect.objectContaining({ fontWeight: 'bold' }), // for directive keys
[]
)
expect(editorMock.setDecorations).toHaveBeenCalledWith(
expect.objectContaining({ fontStyle: 'italic' }), // for global directive keys
[]
)
})
})
describe('Hover help', () => {
it('registers hover provider to subscriptions', () => {
const mockedHoverProvider = {
dispose: () => {
// test
},
}
jest
.spyOn(languages, 'registerHoverProvider')
.mockReturnValue(mockedHoverProvider)
const subscriptions: any[] = []
register(subscriptions)
expect(subscriptions).toContain(mockedHoverProvider)
})
it('provides hover when the cursor is containing in the range of directive', async () => {
const registerSpy = jest.spyOn(languages, 'registerHoverProvider')
register([])
expect(registerSpy).toHaveBeenCalledTimes(1)
expect(registerSpy).toHaveBeenCalledWith(
'markdown',
expect.objectContaining({ provideHover: expect.any(Function) })
)
// Call providerHover
const docMock: TextDocument = {} as any
const ignoredRange = new Range(new Position(0, 0), new Position(0, 10))
const range = new Range(new Position(1, 0), new Position(1, 10))
const getParseDataSpy = jest
.spyOn(LanguageParser.prototype, 'getParseData')
.mockResolvedValue({
directvies: [
{ info: {}, range: ignoredRange },
{ info: {}, range },
],
} as any)
const { provideHover } = registerSpy.mock.calls[0][1]
const hover = await provideHover(docMock, new Position(1, 0), {} as any)
expect(getParseDataSpy).toHaveBeenCalledWith(docMock)
expect(hover).toBeInstanceOf(Hover)
})
it('does not provide hover if the parsed document data is not provided', async () => {
const registerSpy = jest.spyOn(languages, 'registerHoverProvider')
register([])
jest
.spyOn(LanguageParser.prototype, 'getParseData')
.mockResolvedValue(undefined)
const provideHover: any = registerSpy.mock.calls[0][1].provideHover
const hover = await provideHover()
expect(hover).toBeUndefined()
})
})
})
|
STACK_EDU
|
Did Alan Turing Invent The Computer?
Alan Turing is considered by many to be the father of modern computing. But did he actually invent the computer? The answer to that question is a bit complicated.
Turing was a mathematician and cryptanalyst who worked for the British government during World War II. He was responsible for breaking the German Enigma code, which helped the Allies win the war. After the war, Turing turned his attention to computing. He developed a number of theories about how computers could work and how they could be used to solve problems.
But Turing didn’t actually build the first computer. That honor goes to Charles Babbage, who designed and built a mechanical machine called the Analytical Engine in the early 1800s. The Analytical Engine was the first machine that could be programmed to perform a specific task.
Turing’s biggest contribution to computing was his development of the concept of the stored-program computer. This is the basic design of all modern computers. Turing also came up with the idea of the universal Turing machine, which is a theoretical machine that can be programmed to do anything that a real computer can do.
So while Turing didn’t actually invent the computer, he did make some major contributions to the field of computing that have had a lasting impact.
Did Alan Turing create computers?
Alan Turing is considered to be the father of modern computing, but did he actually create the first computers?
Turing was born in 1912 and studied mathematics at Cambridge University. He worked as a cryptanalyst during World War II, and is credited with breaking the German Enigma code. After the war, Turing turned his attention to computing, and in 1950 he published a paper that outlined the theory of a theoretical machine, now known as a Turing machine.
Although Turing’s paper didn’t actually create a working computer, it did lay the groundwork for their development. In fact, some historians believe that Turing’s work was actually more important than the work of John von Neumann, who is typically credited with creating the first modern computer.
Despite his contributions to computing, Turing was persecuted for his homosexuality, and in 1952 he was convicted of “gross indecency” and sentenced to chemical castration. He committed suicide in 1954.
Despite his short life, Turing made a significant impact on the development of computing, and his work continues to be studied and used in modern computing.
Did Alan Turing invented the computer or Charles Babbage?
There is no simple answer to the question of who invented the computer – Alan Turing or Charles Babbage? The truth is, there were a number of people who made significant contributions to the development of the computer, and it is difficult to say definitively who deserves the most credit.
Alan Turing is often considered the father of modern computing, due to his pioneering work on artificial intelligence and machine learning. However, Charles Babbage is also considered a key figure in the history of computing, thanks to his work on the first programmable computer.
So, who deserves the most credit for the invention of the computer? It is difficult to say for sure, but both Turing and Babbage made significant contributions to the development of this revolutionary technology.
What device did Alan Turing invent?
Alan Turing is famous for his work on cracking the Enigma code during World War II, but what many people don’t know is that he also invented a device that is now known as the Turing machine.
The Turing machine is a theoretical device that can be used to calculate any mathematical problem that can be solved by a human. It works by reading symbols from a tape, and then printing new symbols on the tape based on a set of rules.
The machine was first proposed by Turing in 1936, and although it was never built, it has been used to help develop modern computers.
Who actually invented the computer?
The computer is one of the most important inventions in history. But who actually invented it?
There is no one answer to this question. Computers have been around for centuries, and have been developed by many different people.
One of the earliest computers was the abacus, which was first developed in Babylonia in the 6th century BC. The abacus is a simple device that can be used to perform basic calculations.
In the early 1800s, Charles Babbage designed a machine called the Analytical Engine, which could perform more complex calculations. However, the machine was never completed.
In 1937, John Atanasoff and Clifford Berry developed the first electronic computer, called the Atanasoff-Berry Computer. However, this machine was not actually built until 1973.
In 1941, Konrad Zuse designed and built the first programmable computer.
In 1945, John Von Neumann developed the first general-purpose computer.
So, who actually invented the computer? There is no one answer to this question. It is a collaborative invention, and has been developed by many different people over centuries.
What is Alan Turing most famous for?
Alan Turing is most famous for his work on artificial intelligence and his contributions to the development of the modern computer. He also played a key role in the code-breaking efforts of the Allied forces during World War II.
Who really broke the Enigma code?
On 18th of May, 1941, the German battleship Bismarck was sunk by the British. This was a significant victory for the British, as the Bismarck was the most powerful battleship in the world. However, the British victory would not have been possible without the help of the codebreakers at Bletchley Park.
The Enigma code was a code used by the Germans to encrypt their messages. It was considered to be unbreakable, but the codebreakers at Bletchley Park were able to break it. The codebreakers at Bletchley Park were led by Alan Turing, and their work was crucial to the Allied victory in World War II.
Despite their success, the true identity of the codebreaker who broke the Enigma code has never been revealed. There have been many candidates proposed over the years, but there is no definitive answer. It is possible that it was a team of codebreakers rather than a single individual.
Whatever the case may be, the work of the codebreakers at Bletchley Park was instrumental in the Allied victory in World War II. Without their efforts, the course of the war may have been very different.
Who is the real father of modern computer?
The father of the modern computer is a matter of debate. Some say it is Charles Babbage, while others claim it is John Atanasoff. However, most experts agree that the real father of the modern computer is Alan Turing.
Charles Babbage is often credited with designing the first mechanical computer, the Analytical Engine. However, this machine was never completed. John Atanasoff is also sometimes credited with designing the first computer, the Atanasoff-Berry Computer. However, this machine was not actually a computer, but rather a precursor to the computer.
Alan Turing is generally considered to be the father of the modern computer. In 1936, he published a paper, On Computable Numbers, which laid the foundations for modern computer science. He also designed the first programmable computer, the Turing Machine. This machine was able to compute any mathematical problem, which made it a foundational machine for modern computing.
Turing was also responsible for cracking the German Enigma code during World War II, which helped the Allies win the war. After the war, he was persecuted for his homosexuality, which led to his suicide in 1954. However, his work on the modern computer remains his most significant contribution to history.
|
OPCFW_CODE
|
// Each scope gets a bitset that may contain these flags
/* eslint-disable prettier/prettier */
/* prettier-ignore */
export const enum ScopeFlag {
OTHER = 0b000000000,
PROGRAM = 0b000000001,
FUNCTION = 0b000000010,
ARROW = 0b000000100,
SIMPLE_CATCH = 0b000001000,
SUPER = 0b000010000,
DIRECT_SUPER = 0b000100000,
CLASS = 0b001000000,
STATIC_BLOCK = 0b010000000,
TS_MODULE = 0b100000000,
VAR = PROGRAM | FUNCTION | STATIC_BLOCK | TS_MODULE,
}
/* prettier-ignore */
export const enum BindingFlag {
// These flags are meant to be _only_ used inside the Scope class (or subclasses).
KIND_VALUE = 0b0000000_0000_01,
KIND_TYPE = 0b0000000_0000_10,
// Used in checkLVal and declareName to determine the type of a binding
SCOPE_VAR = 0b0000000_0001_00, // Var-style binding
SCOPE_LEXICAL = 0b0000000_0010_00, // Let- or const-style binding
SCOPE_FUNCTION = 0b0000000_0100_00, // Function declaration
SCOPE_OUTSIDE = 0b0000000_1000_00, // Special case for function names as
// bound inside the function
// Misc flags
FLAG_NONE = 0b00000001_0000_00,
FLAG_CLASS = 0b00000010_0000_00,
FLAG_TS_ENUM = 0b00000100_0000_00,
FLAG_TS_CONST_ENUM = 0b00001000_0000_00,
FLAG_TS_EXPORT_ONLY = 0b00010000_0000_00,
FLAG_FLOW_DECLARE_FN = 0b00100000_0000_00,
FLAG_TS_IMPORT = 0b01000000_0000_00,
// Whether "let" should be allowed in bound names in sloppy mode
FLAG_NO_LET_IN_LEXICAL = 0b10000000_0000_00,
// These flags are meant to be _only_ used by Scope consumers
/* prettier-ignore */
/* = is value? | is type? | scope | misc flags */
TYPE_CLASS = KIND_VALUE | KIND_TYPE | SCOPE_LEXICAL | FLAG_CLASS|FLAG_NO_LET_IN_LEXICAL,
TYPE_LEXICAL = KIND_VALUE | 0 | SCOPE_LEXICAL | FLAG_NO_LET_IN_LEXICAL,
TYPE_CATCH_PARAM = KIND_VALUE | 0 | SCOPE_LEXICAL | 0,
TYPE_VAR = KIND_VALUE | 0 | SCOPE_VAR | 0,
TYPE_FUNCTION = KIND_VALUE | 0 | SCOPE_FUNCTION | 0,
TYPE_TS_INTERFACE = 0 | KIND_TYPE | 0 | FLAG_CLASS,
TYPE_TS_TYPE = 0 | KIND_TYPE | 0 | 0,
TYPE_TS_ENUM = KIND_VALUE | KIND_TYPE | SCOPE_LEXICAL | FLAG_TS_ENUM|FLAG_NO_LET_IN_LEXICAL,
TYPE_TS_AMBIENT = 0 | 0 | 0 | FLAG_TS_EXPORT_ONLY,
// These bindings don't introduce anything in the scope. They are used for assignments and
// function expressions IDs.
TYPE_NONE = 0 | 0 | 0 | FLAG_NONE,
TYPE_OUTSIDE = KIND_VALUE | 0 | 0 | FLAG_NONE,
TYPE_TS_CONST_ENUM = TYPE_TS_ENUM | FLAG_TS_CONST_ENUM,
TYPE_TS_NAMESPACE = 0 | 0 | 0 | FLAG_TS_EXPORT_ONLY,
TYPE_TS_TYPE_IMPORT = 0 | KIND_TYPE | 0 | FLAG_TS_IMPORT,
TYPE_TS_VALUE_IMPORT = 0 | 0 | 0 | FLAG_TS_IMPORT,
TYPE_FLOW_DECLARE_FN = 0 | 0 | 0 | FLAG_FLOW_DECLARE_FN,
}
export type BindingTypes =
| BindingFlag.TYPE_NONE
| BindingFlag.TYPE_OUTSIDE
| BindingFlag.TYPE_VAR
| BindingFlag.TYPE_LEXICAL
| BindingFlag.TYPE_CLASS
| BindingFlag.TYPE_CATCH_PARAM
| BindingFlag.TYPE_FUNCTION
| BindingFlag.TYPE_TS_INTERFACE
| BindingFlag.TYPE_TS_TYPE
| BindingFlag.TYPE_TS_TYPE_IMPORT
| BindingFlag.TYPE_TS_VALUE_IMPORT
| BindingFlag.TYPE_TS_ENUM
| BindingFlag.TYPE_TS_AMBIENT
| BindingFlag.TYPE_TS_NAMESPACE
| BindingFlag.TYPE_TS_CONST_ENUM
| BindingFlag.TYPE_FLOW_DECLARE_FN;
/* prettier-ignore */
export const enum ClassElementType {
OTHER = 0,
FLAG_STATIC = 0b1_00,
KIND_GETTER = 0b0_10,
KIND_SETTER = 0b0_01,
KIND_ACCESSOR = KIND_GETTER | KIND_SETTER,
STATIC_GETTER = FLAG_STATIC | KIND_GETTER,
STATIC_SETTER = FLAG_STATIC | KIND_SETTER,
INSTANCE_GETTER = KIND_GETTER,
INSTANCE_SETTER = KIND_SETTER,
}
|
STACK_EDU
|
Make not seeing python files
So I have this code:
CELLMECH=../cellmech
ALL: \
png/01.png \
png/02.png \
png/03.png \
png/04.png \
png/05.png \
png/06.png \
png/07.png \
png/08.png \
png/09.png \
png/10.png pickles/cellMech.10.pickle.gz
include ${CELLMECH}/png.make
pickles/modLink.01.pickle.gz : stretched-plane.pickle.gz
mkdir -p pickles
${CELLMECH}/modLink.py -i $< -o $@ -c conf
pickles/cellMech.01.pickle.gz : pickles/modLink.01.pickle.gz
${CELLMECH}/cellMech.py -dt 0.001 -nmax 10000 -i $< -o $@ > /dev/null
.
.
.
This is a .make extension file that I exceute with: "make -f 10.make"
Then I get an error:
modLink.py -i stretched-plane.pickle.gz -o pickles/modLink.01.pickle.gz -c conf
make: modLink.py: No such file or directory
make: *** [10.make:19: pickles/modLink.01.pickle.gz] Error 127
I have tried everything, direct path to files, changing names, remaking the files, nothing worked. The make program can find the files with .make extension, but no matter what I do, it somehow cannot see the .py extension files.
Any help would be greatly appreciated.
You can't have shown us the output from the makefile you provided. Your makefile recipe has ${CELLMECH}/modlink.py ... but the output you show just says modlink.py .... It's impossible for that recipe to generate that output. There must be something else going on here. Please be sure you are cutting and pasting exactly the makefile rule and output you get, not paraphrasing.
Afaik that part is correct, as if I just change the modLink.py to any other .make file it finds it. Also there is the "include ${CELLMECH}/png.make" part, which runs fine.
I'm saying textually, it's not possible. The recipe says ${CELLMECH}/modlink.py but the output you show is just modlink.py. That can't happen. Even if the variable CELLMECH is empty the invocation MUST be at least /modlink.py. There's no possible way that the / can be omitted.
I played around a little and now I get make: ../cellmech/modLink.py: No such file or directory back. It can still find the png.make file, just not the .py extension ones.
I don't understand comments like: It can still find the png.make file, just not the .py extension ones I fear it results from a fundamental confusion. What is "it" in this statement? You can't compare png.make, which is a makefile, with .py extension files, which are python scripts. They are completely different. If you type ../cellmech/modLink.py at your shell prompt, don't you get the same error? What's the output from ls -l ../cellmech/modLink.py What's the first line of the ../cellmech/modlink.py file say?
I am reffering to GNU make, which runs the make files. To answer your questions:...\examples>ls -l ../cellmech/modLink.py gives back: -rwx------+ 1 budai budai 6280 2017 ápr. 29 ../cellmech/modLink.py and the first line is: #!/usr/bin/python -u then importing librarys and just usual python stuff.
I'm assuming that you're logged in as user budai and that running this command from the command line works. In that case i recommend you edit the recipe and add in information to show the path etc., such as pwd; ls -l ${CELLMECH}/modlink.py etc. to see what's going on. You can also try using /usr/bin/python -u ${CELLMECH}/modlink.py in your recipe directly and see if that helps or gives any other information.
Thank you for your help, in the end simply typing python before those lines were the solution.
Maybe that means that the interpreter /usr/bin/python doesn't exist on your system. In that case, the first line of your file using #!/usr/bin/python is wrong.
Okay, after some consultation I found a solution: inserting "python" before the files, so they are given to the python interpeter directly. So the code runs in this form:
.
.
.
pickles/modLink.01.pickle.gz : stretched-plane.pickle.gz
mkdir -p pickles
python ${CELLMECH}/modLink.py -i $< -o $@ -c conf
pickles/cellMech.01.pickle.gz : pickles/modLink.01.pickle.gz
python ${CELLMECH}/cellMech.py -dt 0.001 -nmax 10000 -i $< -o $@ > /dev/null
.
.
.
|
STACK_EXCHANGE
|
This is the third in an N part series on rewriting my podcast grabbing application. Here are the links to parts one and two. In part two, I promised to get into a common way of synchronizing media files between media stores.
Delivering on that promise, here is my SyncManager:
from sets import Set class SyncManager(object): """This is a concrete implementation of a syncronization manager which is intended to be subclassed if necessary. A SyncManager connects two mediaStores with filters and processing steps. It should be able to copy files from the fromStore to the toStore, exclude any files which were filtered out, and execute any processingSteps along the way. """ def __init__(self, fromStore, toStore, copyFilters, deleteFilters, preProcessingSteps, postProcessingSteps): self.fromStore = fromStore self.toStore = toStore self.copyFilters = copyFilters self.deleteFilters = deleteFilters self.preProcessingSteps = preProcessingSteps self.postProcessingSteps = postProcessingSteps self._init() def _init(self): pass def getCopyList(self): """return a list of files we need to copy from the fromStore and to the toStore""" copySetAll = Set(self.fromStore.list()) copySet = Set() for filter in self.copyFilters: copySet = filter.filter(self.fromStore, self.toStore).union(copySet) ##a poorly written filter could feasibly add things to the list that weren't ##initially there. Use set arithmetic to return all things which the filters ##returned which were also in the original set. copySet = copySet.intersection(copySetAll) return copySet def getDeleteList(self): """return a list of files that we need to delete from the toStore""" deleteSetOriginal = Set(self.toStore.list()) ## set deleteSet to a null set - we'll build up a set of what to delete. deleteSet = Set() for filter in self.deleteFilters: deleteSet = filter.filter(self.fromStore, self.toStore).union(deleteSet) ##a poorly written filter could feasibly add things to the list that weren't ##initially there. Use set arithmetic to return all things which the filters ##returned which were also in the original set. deleteSet = deleteSet.intersection(deleteSetOriginal) return deleteSet def setOverrideCopyList(self, copyList): pass def syncCopy(self): for mediaFile in self.getCopyList(): for preProcessingStep in self.preProcessingSteps: mediaFile = preProcessingStep.process(mediaFile) self.toStore.addFile(mediaFile) for postProcessingStep in self.postProcessingSteps: mediaFile = postProcessingStep.process(mediaFile) def syncDelete(self): pass def syncAll(self): ##do delete first because that is often nicer to lower-memory ##media devices self.syncDelete() self.syncCopy()
This was nearly the final piece to the puzzle for creating a simple automated RSS download manager. I introduced several new concepts to the mix, including copy and delete filters, and pre and post processing steps. The copy filter(s) help determine which files to copy, or more to the point, which files not to copy, from the source media store. I’m not using delete filters yet, but they would determine which files to remove from the source media store when I get the “sync to MP3 player” functionality working.
Processing steps were how I decided to handle the problem of keeping track of which files have been downloaded. This is also how I plan on accommodating changing of certain ID3 tags upon download or sync-to-MP3-player. After copying a file from one media store to another, the post processing steps are called. If you notice in the syncCopy() method, the processing steps, both pre and post, return the mediaFile and re-bind the returned media file to the same name we were using in processing it.
def syncCopy(self): for mediaFile in self.getCopyList(): for preProcessingStep in self.preProcessingSteps: mediaFile = preProcessingStep.process(mediaFile) self.toStore.addFile(mediaFile) for postProcessingStep in self.postProcessingSteps: mediaFile = postProcessingStep.process(mediaFile)
This will allow the ability to chain processing steps and manipulate the file with each successive step. I actually had more things in mind that just podgrabber for this. I’m seriously considering removing the virtual filesystem functionality from podgrabber and spinning off another open source project. But one thing at a time, right?
So, we finally have a working podgrabber. Here is a script which will run podgrabber in an unattended way and keep track of what it has downloaded:
from podgrabber import syncManager from podgrabber import mediaStore from podgrabber import filter from podgrabber import processingSteps import os dl_base = 'download' db_filename = 'podgrabber.db' rss_list = ( ##Name, rss url, dl directory ('Buzz Out Loud', 'http://www.cnet.com/i/pod/cnet_buzz.xml', 'buzz'), ('News.com Daily', 'http://news.com.com/2325-11424_3-0.xml', 'news.com'), ) db_copy_filter = filter.DbCopyFileFilter(db_filename) update_processing_step = processingSteps.UpdateDBStep(db_filename) sm_list = copy_filters = [db_copy_filter] delete_filters = pre_proc = post_proc = [update_processing_step] for name, url, directory in rss_list: fromStore = mediaStore.RSSMediaStore(url) toStore = mediaStore.FileSystemMediaStore(os.path.join(dl_base, directory)) sm = syncManager.SyncManager(fromStore, toStore, copy_filters, delete_filters, pre_proc, post_proc) copy_list = list(sm.getCopyList()) print copy_list sm.syncAll() sm_list.append(sm) #for f in copy_list[:-3]: # print 'Updating', f # update_processing_step.process(f)
This is a working, but limited script. There is no GUI. There are no status updates on how far along the download has progressed. This is a totally single threaded process. The single threadedness is probably the biggest stinker among the limitations. I can either handle threading at the very top level and spin off each syncManager into its own thread, which would be the easy way, or introduce something like a syncTaskManager which the syncManagers would pass off a task to and it would manage a thread pool and execution of those tasks. Hmmmm….a task manager isn’t a half bad idea, come to think of it.
OK - so for next time, I’ll either start re-implementing the GUI for podgrabber, or I’ll introduce a task manager for nicer, cleaner threading. What am I talking about? I don’t even have threading yet!
|
OPCFW_CODE
|
Are you interested in knowing more about this endangered species of ball python? Do you want to know about its habitat and distinctive traits? Do you think you can cross breed a Burmese python with a ball python? Carry on reading to know about these
Due to the demand for the skin of ball pythons, there was rampant killing that was once done. Nowadays most of the ball pythons visible are captive bred.
The diet of these ball pythons is mostly grassland rodents. Ball pythons love to prey on zebra mice, cane rats, gerbils, Nile rats, jerboas, etc. These pythons also love to consume shrews and birds that nest on the ground. Sometimes when a ball python is taken captive, it becomes difficult to find proper meals as it often does not like consuming commercially available rats and mice. These creatures develop a liking for a certain species of prey in the wild and hence find it difficult to accustom to the commercially available rodents when captured.
Ball pythons are found in those areas which are subject to a prolonged hot and dry weather. The python can stay in these regions even if the availability of food is scarce. These pythons often go into long periods of inactivity and fast in underground shelters and mammal burrows. The females of this species of snakes typically lay 6 to 7 big eggs on a yearly basis or even less frequently. The common places where these eggs are laid are rodent burrows or abandoned aardvark. After a three month incubation period, the eggs are hatched. Along with other members in the family, the female pythons also remain wrapped around the eggs in this period. This is done to provide protection and to control the ambient humidity/temperature.
The female pythons are also capable of elevating the temperature levels by remaining coiled around the eggs and shivering. During times of incubation, both Burmese pythons and ball pythons are capable of performing this activity and raising the temperature by about 7 degrees Fahrenheit. Field research that has been conducted with ball pythons has revealed that the main purpose of incubation in this species is conservation of egg weight by avoiding water loss.
The typical size of a ball python when it is young is between 7 to 10 inches after hatching. Later when the period of sexual maturity is reached after 3 or 4 years, it becomes almost 3 feet long. These pythons are harvested for the leather trade too. Nowadays captive-bred stock is extensively available, but previously large volumes of this snake were collected for the pet trade. Then these were exported to Japan, Europe and U.S.A. Although the cross-breeding of a female Burmese python with a male ball python has been done the results have not been too positive. The results are often unpredictable and the offspring(s) is/are quite weak and shriveled.
|
OPCFW_CODE
|
WildFly 8/JBoss: General way to debug Java EE classloader linkage errors
what is the right way to debug those issues?
In my case I have massive troubles with a LinkageError within my Java EE web project:
Problem
I included JSF API (jboss-jsf-api_2.2_spec-2.2.5.jar) into my wildfly modules directory, i.e. it will be loaded by Wildfly classloader.
I have external libraries which also depend on that JSF implementation (e.g. Primefaces and OmniFaces).
Additionally, to let the build process run without errors I have to add the library as an separate EAR library.
The strange manner is when adding a bean that implements function with faces event paramaters, e.g.
public void myValueChangeListener(ValueChangeEvent e) {
// do sth.
}
implementing those functions results in ...
10:22:18,571 WARN [org.jboss.modules] (MSC service thread 1-1) Failed to define class javax.faces.event.ValueChangeEvent in Module "javax.faces.api:main" from local module loader @468a169f (finder: local module finder @13d344e7 (roots: /home/user/app-server/wildfly8/modules,/home/user/app-server/wildfly8/modules/system/layers/base)): java.lang.LinkageError: loader constraint violation: loader (instance of org/jboss/modules/ModuleClassLoader) previously initiated loading for a different type with name "javax/faces/event/ValueChangeEvent"
... starting point for my SEVERE
10:22:18,578 SEVERE [javax.enterprise.resource.webcontainer.jsf.config] (MSC service thread 1-1) Critical error during deployment: : java.lang.LinkageError: loader constraint violation: loader (instance of org/jboss/modules/ModuleClassLoader) previously initiated loading for a different type with name "javax/faces/event/ValueChangeEvent"
(removing the parameter allows to build the project without errors)
Question
How to cope with such problems? I would need an overview about the order the classes are loaded. Probably there is a way to show an entire classloader tree or any profiling tool which will do the same.
You can enable debug or trace logging on JBoss Modules, but that's bound to be too much information.
In your case, why do you add a module (i.e. jboss-jsf-api_2.2_spec-2.2.5.jar) which is contained in WildFly anyway (exactly the same version in 8.0.0.Final)? That's why the module loader is complaining about duplicate classes.
There is a list of implicit module dependencies for WildFly deployments (which includes jsf-api for JSF deployments). If you need additional dependencies from the WildFly distribution, you should simply declare that dependency instead of duplicating the module.
See Class Loading in WildFly for more details.
Well, you are right not to add the library manually into the project. Another issue was a missing separation between business logic and web tier. Now I removed the particular jsf implementation but when I try to catch my event within my action listener implementation, my bean is not instantiated during deployment because of a class loading error: Type javax.faces.event.ValueChangeEvent from [Module "deployment.myApp.ear.appCore.jar:main" from Service Module Loader] not found.
|
STACK_EXCHANGE
|
Hey Smart Marketers,
Sure, there are tons of knowledge spread amongst the community… but Trailhead remains the first stop to learn new things.
If you’re willing to become a Salesforce Marketing Cloud Engagement Developer, I curated for you the 8 best modules available in Trailhead: covering everything from email and mobile marketing to automation and personalization.
Whether you’re a beginner or an experienced developer, these modules will help you improve your Marketing Cloud expertise and take your campaigns to the next level.
This module is an introduction to developers’ tools in Salesforce Marketing Cloud.
It’s a quick tour about:
- SFMC APIs
- SFMC SDKs
- SFMC Programmatic Languages
You’ll get to know how a Marketing Cloud account is structured and how to create a package.
The final unit would be your first API.
What’s the point of having a marketing automation tool if you do not automate tasks?
Automation Studio helps Marketing Cloud users automate queries, exports or API requests.
As an SFMC developer, this module will help you learn about Automation Studio activities, how to run SQL queries, how to trigger automations and how to troubleshoot your automated events.
Some would say there’s nothing you can’t do using Marketing Cloud APIs.
This module will introduce you to Marketing Cloud REST and SOAP APIs.
When to use REST APIs. What you can do in Marketing Cloud apps.
How to use SOAP APIs with your emails.
In the last unit, you’ll find interesting links to expand your knowledge.
Your email marketers keep asking for the same use-case over and over when building their emails?
Does It seem like something is missing in your content block library?
Create your own custom Content Block!
This module is all about how you can Build, Test and Deploy a Custom Content Block.
You’re probably already familiar with AmpScript.
What do you know about GTL?
In this module, you’ll get to know about programmatic languages you can use with Salesforce Marketing Cloud.
Now what about a deep dive in your preferred personalization solution: AMPscript?
This module will guide you from simple personalization, to using math functions as well as loops or lookups.
You’ll also learn about some best practices.
Don’t confuse AMPscript which is specific to Marketing Cloud with AMP for Email which is an open-source web component framework.
If you want to add real-time updates or interactivity to your emails, the latter will be of the greatest help.
Cherry on the cake!
Did you know you can build your own Custom Activities in Journey Builder?
Imagine the brand owns a proprietary channel. Or you want to record specific logs regarding your customer journeys.
This is how you can do it.
Hope this helps you.
See you next week!
Other ways I can help you:
|
OPCFW_CODE
|
We want to encourage all students to get involved with the development of this course. The materials here are mostly developed and curated by the staff team but we admit that we may miss things, or occationally get them wrong. We've put together some guidelines that should help you get started with making contributions.
Thanks for helping improve this course for everyone!
Questions, concerns, or issues
If you've got a question or problem that you think others may have, then please read this first.
The GitHub issue tracker is our preferred channel for dealing with these questions. It keeps questions public and allows others to pitch in with potential solutions, it also means everything is documented for the future. This should not be used for personal questions or issues unfit for public discussion, in those instances we encourage you to email the member of staff responsible for this course.
Before submitting a new issue, use the GitHub search to see if it has already been submitted before. If it hasn't, then go ahead and create a new one. When creating a new issue, please spend some time creating an identifiable and specific title, then provide as much detail as possible in the main description.
Examples of bad titles:
- Problem with lab material for this week
- Question about the course
- Just a thought...
- Can't compile lab 4
Examples of good titles:
- Lecture 4 broken recommended reading link
- Assignment 3 submission of 30th Feburary is incorrect
- Lab 4 main.c stack overflow exception
Ammendments, corrections, and contributions
Contributions are so greatfully received. If you've noticed an error, or would like to add additional resources then please use a pull request. To ensure your pull request is accepted as quickly as possible, please take a moment to check the following guidelines. Once a pull request has been submitted, it can be reviewed by fellow students and staff. When a staff member things it looks good, it'll be accepted and merged in.
- Ensure any added resources are properly cited like similar material. This may be adding a link to the resource on a slide, or appending a full citation to an assignment.
- If you're updating / correcting statistics then please provide a link to resources you used in your PR so we can ensure they're reliable.
Please squash your commits into either one, or a few well seperated commits. It's important that commits be atomic to keep the history clean. Below is a short guide on squashing commits into a clean history. Please read through and follow Chris Beams's - How to write a git commit message.
git log --oneline master..your-branch-name | wc -lto see how many commits there are on your branch.
git rebase -i HEAD~#where # is the number of commits you have done on your branch.
- Use the interactive rebase to edit your history. Unless you have good reason to keep more than one commit, it is best to mark the first commit with 'r' (reword) and the others with 's' (squash). This lets you keep the first commit only, but change the message.
|
OPCFW_CODE
|
That message is not well worded for this situation. It means that instead of web sockets for the changes feed, the replicator is falling back to HTTP(s) transport (HTTP or HTTPS depending on the endpoint the replication was given).
Is there an easy way to verify the replication is happening over HTTPS? We restarted the Sync Gateway from command line to watch its output and didn’t see anything that immediately indicated HTTPS over normal HTTP, and then we actually saw a few “TLS handshake error” messages every so often. These seem to be intermittent at best: several GETs and POSTs will all go through fine but then suddenly a PUT fails with the TLS handshake error, then some more GETs and successful PUTs.
I can try to put together a cleaned-up log if you think it would help, but otherwise just some way for us to confirm things are working as we think they are would be great. If not, no worries as we can probably just Wireshark it and figure it out that way.
Do you happen to know which endpoints are being called that cause the TLS handshake errors? That usually happens when a replicator is setup with HTTP against an HTTPS backend, but I think it could also happen if the client in question doesn’t understand TLS 1.2. If all requests are coming from the same client then this is not possible (it won’t intermittently fail) but one situation I’ve been in is where I forgot to shut down a separate client and it was causing issues without me realizing.
EDIT Just to be clear about the message about falling back to HTTP, all that happens is that the replicator shuts down and starts back up with the same settings, except UseWebSocket will be switched to false.
We have Sync Gateway configured for HTTPS and we have the iOS code using the NSURLSession-based handler which should get us TLS 1.2.
I should have mentioned sooner we are doing cookie-based authentication in case that makes a difference.
One of our developers discovered that if we use SetCookie with the Secure parameter set to false we no longer see the WebSocketSharp.doHandshake() exception in our app’s log, but we still see the TLS handshake errors in the Sync Gateway log (coming from our app). Is this bad to have SetCookie use “Secure = false” despite pointing to an HTTPS endpoint?
A section of the Sync Gateway log output can be seen here:
Note there is one changes feed set to “normal” and then others are “websocket”.
The Secure=true property of a cookie just tells the client not to send that cookie over a non-SSL connection. This is useful if the cookie contains sensitive data. Secure=false is a no-op AFAIK, since that’s the default.
In any case, cookies won’t have any effect on the SSL handshake, since that happens well before cookies are sent.
|
OPCFW_CODE
|
Baserock reference system definitions
Baserock is a system for developing embedded and appliance Linux systems. For
more information, see <http://wiki.baserock.org>.
These are some example definitions for use with Baserock tooling. You can fork
this repo and develop your own systems directly within it, or use it as a
reference point when developing your own set of definitions.
The systems listed in the elements/systems/ directory are example systems
that build and run at some point. The only ones we can be sure
that still build in current master of definitions are the ones that
we keep building in our ci system; they are listed in `.gitlab-ci.yml`.
It is possible to use BuildStream to build Baserock definitions. The
BuildStream build tool uses a different definitions format, so a conversion
needs to be done first.
Run the `convert` script from the root of the repository and you will be
rewarded with an autogenerated set of .bst files in the elements/ subdirectory
which should be buildable using BuildStream.
To run `convert`, you will need defs2bst and ybd. The following commands, run
from the root of the repository, should be enough to do a conversion:
git clone https://gitlab.com/BuildStream/defs2bst/
git clone https://gitlab.com/baserock/ybd/
You can then build e.g. a devel-system using BuildStream:
bst build systems/devel-system-content.bst
Some things are not supported by the BuildStream conversion tool, and will need
to be manually dealt with.
* the build-essential stratum isn't automatically converted, the
elements/gnu-toolchain.bst element is its equivalent and needs to be
manually kept in sync with any changes
* .configure and .write extensions need to be rewritten as BuildStream
* non-x86_64 systems aren't automatically converted; BuildStream supports
"arch conditionals" which should make it easier for a single tree of .bst
files to support many platforms, but it's not easy to automatically achieve
this in places like linux.bst where Baserock currently has many different
BuildStream requires an initial set of sysroot binaries to "seed" its build
sandbox. We produce suitable binaries using the `bootstrap/stage3-sysroot.bst`
element, and host them at <https://ostree.baserock.org/releases>. These are
then pulled by the `gnu-toolchain/base.bst` element.
I expect that the sysroot binaries will rarely need to change as they are only
used to bootstrap the `gnu-toolchain.bst` stack, and that stack is then used to
build everything else. However there will occasionally need to be updates; the
process is documented below.
There are also detailed instructions on how to produce binaries for new platforms
The release process needs to be done for each supported architecture. To get an
idea of what we consider supported, look in `gnu-toolchain/base.bst`. Where you
see `$arch` in these instructions, substitute the corresponding Baserock
architecture name for the current platform.
1. Build and checkout the sysroot binaries:
bst build bootstrap/stage3-sysroot.bst
bst checkout bootstrap/stage3-sysroot.bst ./sysroot
2. Add the provenance metadata:
scripts/baserock-release-metadata > ./sysroot/metadata
4. Clone the releases OSTree repo locally.
ostree init --repo=releases --mode=archive-z2
ostree remote add --repo=releases \
origin https://ostree.baserock.org/releases/ --no-gpg-verify
ostree pull --repo=releases origin stage3-sysroot/$arch
5. Commit the binaries to the correct branch.
ostree commit --repo=../releases \
6. Push to the releases repo. You will need an SSH key that is authorized to
connect as email@example.com -- contact firstname.lastname@example.org if
you need access. You will also need to have installed our version of
ostree-push --repo=./releases \
Once this is done you can update `base.bst` to pull in the new version of the
binaries (the `bst track` command can help with this). Note that the new version
won't be noticed by `bst track` until the summary file on ostree.baserock.org has
been updated, which may take up to a minute.
In future we should would GPG sign the release binaries, and we would automate
the whole process in GitLab CI.
|
OPCFW_CODE
|
Sendsmaily OÜ (Smaily) uses third parties to support the availability of our Services. Some of these third parties are engaged as “subprocessors” to host or process customer data, which may include personal data.
Prior to engaging any third party that accesses production infrastructure or processes customer data, Smaily performs due diligence reviews of each third party's information security program, privacy practices, confidentiality commitments and, if needed, puts additional appropriate contractual terms in place to ensure sufficient guarantees that the processing will meet the high standards required by Smaily and applicable laws (e.g GDPR).
A list of subprocessors currently used to provide our Services is set out below. Smaily will update this page when necessary. Please review the information frequently for any updates and to view the most current subprocessor list.
|Subprocessor||Subprocessor activity||Subprocessor location||Subprocessor website|
|EDIS GmbH||We use EDIS private server infrastructure to host portions of our SaaS environment.||Austria, EU||https://www.edis.at/en/home/|
|Elkdata OÜ||We use Elkdata virtual hosting infrastructure for client-specific developments environment.||Estonia, EU||https://www.veebimajutus.ee/|
|Hetzner Online GmbH||We use Hetzner private and cloud server infrastructure to host portions of our SaaS environment.||Germany, EU||https://www.hetzner.com/|
|OVH Hosting Ltd||We use OVH private server infrastructure to host portions of our SaaS environment.||Ireland, EU||https://www.ovh.ie/|
|Zone Media OÜ||We use Zone Media VPS and virtual hosting infrastructure to host portions of our SaaS environment and client-specific developments environment.||Estonia, EU||https://www.zone.ee/en/|
|Asana, Inc||We use Asana to plan and track in-house and client developments.||US||https://asana.com/|
|Dropbox, Inc||We use Dropbox to share and host temporarily needed files and for internal backup for administrative use.||US||https://www.dropbox.com/|
|Facebook Ireland Ltd||We use Facebook and Instagram for retargeting campaigns and Messenger as an alternative customer support channel.||US||https://www.facebook.com/|
|Freshworks, Inc||We use Freshdesk Helpdesk as firstname.lastname@example.org etc and live-chat customer support channel.||EU and US||https://freshdesk.com/|
|Google, Inc||We use Google G Suite email service (Gmail) for internal and external email communication. Alternatively, we share some internatl administrative documents with Google Drive.||US||https://gsuite.google.com/|
|Logit.io Ltd||We use Logit.io to capture and analyze server and other various logs.||United Kingdom, EU||https://logit.io/|
|Sentry Ltd||We use Sentry for error tracking and monitoring of possible crashes in UI.||US||https://sentry.io/|
|Skype, Inc||We use Skype as an alternative customer support channel.||US||https://www.skype.com/|
|Slack Technologies, Inc||We use Slack for internal communication.||US||https://slack.com/|
|
OPCFW_CODE
|
As I often perform when visiting a brand-new nation for the first time, I asked a variety of folks I complied with in Kiev what the main thing ideal defines Ukrainians. Without exception, they all proclaimed, “& ldquo; We desire to be complimentary. & rdquo; My first foray into the town hall improved what I’& rsquo;d been informed. At Independence Square, a giant landscape declared “& ldquo; Freedom is our Religion!” & rdquo; The Square, which citizens refer to as “& ldquo; Maidan, & rdquo; was actually ground zero for the 2013/14 Ukrainian Reformation that ousted the degrading Communist regime. Not simply possessed the liberty boxers did well, but Maidan had been offered a shiny brand-new skin. Of the damage and also fires that had actually raved through the square three years earlier, not a solitary sign remained.
I was impressed. Ukrainians seemed to be passionately devoted to the search of independence. But my blister burst when I took a seat in an adorable little bit of French cafe only steps off the Maidan. Within minutes, the twenty-something Ukrainian lady seated at the adjoining table started marketing her soul. Her lunch buddy, a Cuban-American man who had actually taken a trip to Ukraine to satisfy what are actually often pertained to as mail order brides, invested the bulk of the upcoming hour speaking with the lady. Was she brought in to him? Could she see herself in a committed partnership along with him? She threw her long fire-engine reddish hair as well as responded to flippantly, “& ldquo; No,” & rdquo; and, & ldquo; No. & rdquo; Carlos( certainly not his real name), understanding this was certainly not the partner he “had expected, devoted the balance of the & ldquo; partner & rdquo; inquiring his partner to discuss what she knew about the mail order brides con in Ukraine. Intrigued, I was all ears as she required.
The on-line dating hoax is secondhand. A Google.com seek the words “& ldquo; Ukrainian brides to buy & rdquo; gains almost 300,000 web sites. Agencies use a wide variety of plans for would-be other halves, however there are some common denominators. Mostly all web sites permit consumers to enroll in free of charge, make an account, upload photos, and check out the accounts of the ladies who are actually ostensibly trying to find an other half. However real exchange any of the girls demands a charge.
After the red-head departed, I launched myself to Carlos as well as talked to if he would certainly be willing to discuss his adventure with me. Over the upcoming month, he revealed what he’& rsquo;d found out in a set of e-mails:
“& ldquo; It lacks an uncertainty a hustle. I possessed correspondence with that said gal if I may phone her that for months, and also properly, you listened to the discussion. They are going to demand me 75 dollars to comply with each of the 3 women. Yes I can hear you assuming why spend for this? However I’& rsquo; m currently below and have absolutely nothing else to carry out therefore I may too see it right through.
After this, if the times don’& rsquo; t pot through I will certainly never ever possess just about anything to do with international dating. Once you satisfy the lady and endorse the kind, the organization loses all type of their earnings. Yet by then they have actually bled you completely dry. However individuals like me and also two different gents I met at the office, both from Fla by the way, performed it a bit smarter than others that go to the excursions. While I pay a huge for airfare, 5 hundred on accommodation, the letters and all the other much smaller stuff, these fellas expend thousands for these tours. Method to pricey for me. Don’& rsquo; t know that fares better, or even if anybody does.”
& rdquo; The letters that Carlos referred to are actually called the “& ldquo; pay every letter (PPL)dating sham. & rdquo; Stating that the females put on’& rsquo; t talk English(a lot of all of them really perform ), the organizations charge $10 for linguists to manage each character sent or received. Messages, content, and video clip chats are also readily available on a pay-per-minute manner. An image of the girl will definitely cost a love-struck male $3. PPL websites also schedule gifts like flowers, sweet, electronics, and also English trainings to become delivered to the females. The guys possess no concept that the females, that promote these “& ldquo; gifts, & rdquo; share in the benefit from their sale. Several of the men inevitably choose to take a trip to Ukraine to comply with the ladies with whom they’& rsquo; ve been actually being consistent. This was actually the 4th trip for Carlos.
“& ldquo; When I initially started this Search, I participated in a site phoned blue sapphires. Yes you spent for the letters and also it was an organisation, yet that agency was certainly not as intense as the organizations currently. (They) have become so much more loan grubbing from the one I to begin with began with. These gals intend to write almost daily. I sufficed to when a week. The reddish crown stated it was actually a repair, and she was part of it. I paid for one hundred plus for an opera ticket for her, but “& ldquo; B & rdquo; (another man client) told me the tickets in Kiev cost about twenty US bucks. An additional increase. When I to begin with started I concerned Kiev and went to the dating workplace as well as they would establish me up with girl after girl totally free on my stay. Right now they demand you seventy 5 dollars to call every female at the firm when you’& rsquo; re
certainly there. You asked which was less expensive, my means or even through a group excursion. My method is certainly cheaper, though I still spent a whole lot. What I learned from “& ldquo; B”& rdquo; as well as & ldquo; M & rdquo; & hellip; is actually to not write the girls until a week approximately just before your vacation. This way they know with you, but you shelter’& rsquo; t currently spent fifteen hundred in document. Smart method I carried out not consider. Still, I spent & hellip; for the plane tickets, the resort, meals, and added things. These guys pay out 3 splendid instantly, plus whatever added they require for the excursion. I don’& rsquo; t like the suggestion of cattle call appointments, plus three marvelous up front is actually way too much for me & hellip; I hear a lot of all of them don’& rsquo; t do well, yet hey, not either carried out I. Bottom line, it is a fraud of kinds. However there are actually women wanting to acquire married. I presume men my grow older as well as more mature need to have to bear in mind that the younger females gained’& rsquo; t would like to marry much older guys, unless as the reddish scalp mentioned you’& rsquo; re abundant or even resemble Johnny Depp, however that will wish such a female. The red scalp performed me a major support and opened my eyes rather.”
& rdquo; As unsatisfactory as this all noises, Kiev may’& rsquo; t hold a candle to what happens in Odessa. Certainly, within 24-hours of my delivery in Odessa, I was actually noting a mail order brides date at Tagliatelle Italian Dining Establishment. The waiter and also I murmured and chuckled concerning the condition. “& ldquo; Occasionally the very same man will definitely come into the bistro 4 or even five times in a row, each time with various lady,” & rdquo; she mentioned. & ldquo; I wait on him three opportunities a day yet he always act like he doesn’& rsquo; t recognize me. One-time, a man arrived below to eat breakfast, lunch time, as well as dinner for seven days. He has a different girl each opportunity, but still he act like he not recognize me.” & rdquo; She also stated the guys always happen “& ldquo; along with a good friend, possibly a sibling, that helps them talk with the women.” & rdquo; I explained that these are actually not friends. They are explainers, who are actually demanded to be existing throughout the conference, despite the fact that most of the ladies communicate completely excellent English. Naturally, the men are charged a hefty charge for this “& ldquo;
company. & rdquo; A lot of the women have actually ended up being pros at milking guys who are driven as a lot by solitude as longing. They approve the inevitable marital relationship plan yet assert they may certainly not leave behind Ukraine till some activity occurs. Her mom is in the health center and she requires loan to pay the bill. She can not leave her moms and dads, who are actually poor, unless their house is settled. She needs to have cash wired to spend for her visa and also air travel. Some even pretend to pilot to the USA or even Europe, then contact us to say they have been refused entrance.
The longer I remained, the more documentation I viewed of what should be one of Ukraine’& rsquo; s largest home industries. It appears paradoxical that Ukrainians, that were for decades bilked of their natural deposits and independence by a corrupt routine, have actually picked to focus thus exuberantly on a hoax that bilks others of their hard-earned loan. It was actually a blemish that made it complicated to cherish the otherwise wealthy culture of the nation.
|
OPCFW_CODE
|
A subspace is said to be invariant under a linear operator if its elements are transformed by the linear operator into elements belonging to the subspace itself.
The kernel of an operator, its range and the eigenspace associated to the eigenvalue of a matrix are prominent examples of invariant subspaces.
The search for invariant subspaces is one of the most important themes in linear algebra. The reason is simple: as we will see below, the matrix representation of an operator with respect to a basis is greatly simplified (i.e., it becomes block-triangular or block-diagonal) if some of the vectors of the basis span an invariant subspace.
Table of contents
Definition Let be a vector space and a linear operator. Let be a subspace of . We say that is invariant under if and only if for any .
In other words, if is invariant under , then the restriction of to , denoted by , is a linear operator on (i.e., ).
Example Let be the space of vectors. Let be the subspace spanned by the vectorIn other words, all the vectors of have the formwhere is a scalar. Suppose that a linear operator is such thatThen, whenever , we have . Therefore, is an invariant subspace under .
The kernel of a linear operator is the subspace
Since and all the elements of are mapped into by the operator , the kernel is invariant under .
The range of a linear operator is the subspace
Since , any is mapped by into . Therefore, the range is invariant.
Let be the space of vectors. Let be a matrix. We can use the matrix to define a linear operator as follows:
Suppose is an eigenvalue of and is the subspace of containing all the eigenvectors associated to (so-called eigenspace).
By the definition of eigenvector, we havefor any . Since is a subspace, . Therefore, the eigenspace is invariant under .
There is a tight link between invariant subspaces and block-triangular matrices.
In order to understand this link, we need to revise some facts about linear operators.
Let be a finite-dimensional vector space and a basis for .
Remember that any operator has an associated matrix, called matrix of the operator with respect to and denoted by , such that, for any , we havewhere and are respectively the coordinate vectors of and with respect to .
We have previously proved that the matrix of the operator has the following structure:
We are now ready to state the main proposition in this lecture.
Proposition Let be a finite-dimensional vector space and a linear operator. Let be a subspace of and a basis for . Complete so as to form a basis for . The subspace is invariant under if and only if has the block-triangular structurewhere the block is , is , is and denotes a block of zeros.
We first prove the "only if part", starting from the hypothesis that is invariant. Denote by the -th entry of . Since is invariant, then, for , belongs to and, as a consequence, it can be written as a linear combination of (and enter with zero coefficient in the linear combination). Therefore, for , the coordinate vector of isAs a consequence, when is invariant, the matrix of the operator isWe now prove the "if part", starting from the hypothesis that has the assumed block-triangular structure. Any vector has a coordinate vector of the formwhere is and is . Then,Therefore, . Since this is true for any , is an invariant subspace.
We can also writewhere is the matrix of the restriction of to with respect to the basis .
Remember that is said to be the sum of subspaces , in which case we writeif and only if
Direct sums of invariant subspaces have the following important property.
Proposition Let be a linear space. Let and be subspaces of such thatLet and be bases for and respectively (as a consequence, is a basis for ). Let be a linear operator. Then, and are both invariant under if and only if has the block-diagonal structurewhere the blocks and are and respectively.
We first prove the "only if" part, starting from the hypothesis that and are both invariant under . By the properties of direct sums, any vector has a unique representationwhere and . Moreover, has a unique representation in terms of the basis (for ). Therefore, any can be written as a linear combination of the vectors of . In other words, is a basis for . The first columns of areSince is invariant under , implies that . Therefore, for , can be written as a linear combination of the vectors of and the first columns of areSimilarly, we can demonstrate that the remaining columns of areThus,which is a block-diagonal matrix with the structure described in the proposition. We now prove the "if" part, starting from the hypothesis that is block diagonal. Since is block-upper triangular, is invariant by the proposition above on block-upper triangular matrices. Moreover, any vector has a coordinate vector of the formwhere is and is . Then,Therefore, . Since this is true for any , is an invariant subspace.
The previous proposition can be extended, by applying it recursively, to the case in which and all the subspaces are invariant.
Proposition Let be a linear space. Let , , ..., be subspaces of , with bases , ..., , and such thatso that, as a consequence, is a basis for . Let be a linear operator. Then, all the sets (for ) are invariant under if and only if has the block-diagonal structure
What are the practical implications of everything that we have shown so far? In particular, what happens when we are dealing with linear operators defined by matrices? We provide some answers in this section.
Let be a matrix. Let be the space of all column vectors.
We consider the linear operator defined by the matrix , that is,for any .
Suppose that we have been able to find two invariant subspaces and such that
In other words,and is linearly independent from whenever , and the two vectors are non-zero.
We can choose bases and for and respectively and we know that is a basis for .
Define the following matrices by adjoining the vectors of the basis:
Note that where are vectors that are guaranteed to exist because for by the invariance of , which means that can be written as a linear combination of the basis of (i.e., ). In order to match the notation used in the propositions above, we defineso that
Similarly, we can find a matrix such that
As a consequence, we haveorwhere is invertible because its columns are vectors of a basis, which are by definition linearly independent.
Thus, the process of similarity transformation of a matrix into a block-diagonal matrix (generalized here to the case of more than two invariant subspaces) works as follows:
we identify invariant subspaces such that
we find bases for the invariant subspaces and we use them to construct the matrixwhere the columns of are the vectors of the basis of (for );
we perform the similarity transformationand the matrix turns out to be block-diagonal. In particular, there are blocks on the diagonal and the dimensions of the blocks are equal to the number of columns of the matrices (i.e., the number of vectors in each of the bases).
This is one of the most important workflows in linear algebra! We encourage the reader to solidly understand and memorize it.
We have explained above that the eigenspace associated to an eigenvalue of is an invariant subset.
Denote by the distinct eigenvalues of and by their respective eigenspaces.
As explained in the lecture on the linear independence of eigenvectors, when is not defective, we can form a matrixwhere the columns of are a basis for and the all the columns of together are a basis for the space of all column vectors. As a consequence,
Thus, we can use the matrix of eigenvectors to perform a similarity transformation and obtain the block-diagonal matrix
Actually, in the lecture on matrix diagonalization, we have proved that is a diagonal matrix having the eigenvalues on its main diagonal.
Below you can find some exercises with explained solutions.
Define the matrix
Verify thatis an invariant subspace under the linear transformation defined by .
Any vector takes the formwhere is a scalar. Then,As a consequence, and is an invariant subspace.
Let be the space of column vectors. Define the matrix
By simply inspecting , can you find two subspaces and such thatand and are invariant under the linear transformation defined by ?
Note that is a block-diagonal matrix:whereTherefore, two complementary invariant subspaces are
Please cite as:
Taboga, Marco (2017). "Invariant subspace", Lectures on matrix algebra. https://www.statlect.com/matrix-algebra/invariant-subspace.
Most of the learning materials found on this website are now available in a traditional textbook format.
|
OPCFW_CODE
|
A collection of 12 posts
0. Peeing out a book...Mozart style
To watch this video on YouTube, click here.
It's been a while since I've posted a new video, so I figured it was time to provide y'all with a quick channel update, which you can find on YouTube here. Below is the image I took waaaaayyyyy too long to create for this
9. From source card to idea card
Watch as I turn notes on a source card into an idea card. So. Much. Fun. (Not really.)
8. Folgezettel - why creating them in your Zettelkasten is important
Learn about why I think it is valuable to create Folgezettel (sequences of notes).
7. Don't collect quotations—instead, handwrite reminders on source cards 📽
When I first went down the rabbit hole of looking into various ways of taking notes digitally, I found myself tempted to add to whatever note-making app I was using at the time a whole bunch of quotations that I never got around to stating in my own words.
6. Digesting information by putting it in your own words 📽
When it comes to working with ideas and developing compelling lines of thinking, I think we are better off to have as our aim not so much the creation of so-called original ideas but instead trying to make the ideas of others in some sense our own.
Why I stopped using folder cards in my analog Zettelkasten 📝
For the first few months of building a Zettelkasten, I used what I called "folder cards." For a while, I recommended that others use them as well. Not any more.
5. Where to put non-continuation cards 📽
In this video, I’m going to show you what to do when you create a new card that doesn't amount to a continuation of an already-existing card.
4. Stop creating new folder cards 📽
Are you making the mistake of thinking that you have to find the supposedly one-and-only best place for each card you create?
3. Addresses for continuation cards 📽
In this video, I’m going to show you how to decide where to put a new card and what address to put on it when that card amounts to a continuation of a card that already exists in your old-school Zettelkasten.
2. What kind of addresses should you put on your Zettelkasten cards? 📽
If you’re creating an old-school Zettelkasten, you need to give some thought to the unique identifiers, or unique IDs, that you put on your Zettelkasten cards.
1. The very first step in creating an analog, old-school Zettelkasten 📽
Just now starting to build a Zettelkasten? Watch this, but ignore the stuff about folder cards.
Page 1 of 1
|
OPCFW_CODE
|
Search RPD Archives
[rpd] IPv4 Resources in exchange for IPv6 milestones
owen at delong.com
Fri Jun 17 20:07:11 UTC 2016
I’ve been thinking about this concept a bit since Alain proposed simply forcing people to take v6 resources in order to get more v4 resources.
I think we can do better.
How about something like this…
Once we hit the last /11 (Soft Landing Phase 2), the following would apply:
1. No requestor may receive more than a /22 per request.
2. Each request must demonstrate 90%+ utilization of all previously issued space.
3. During Soft Landing Phase 2 (and any subsequent phases which may be created), the
following requirements shall be added to requests.
A. Each organization may make one request for IPv4 space so long as they have
either previously received or simultaneously apply for and receive an
amount of IPv6 space sufficient to support their organizations 5 year projected
growth and migration of all existing infrastructure and customers to IPv6.
B. In order to make a second request, each organization must show that they have
IPv6 peering established with a minimum of the lesser of 100% of all neighbor
autonomous systems or two neighbor autonomous systems.
C. In order to make a third request, each organization must show that they have
fully deployed IPv6 throughout their backbone network and that they are now
capable of transporting IPv6 datagrams to at least one router in each and
every point of presence, datacenter, or other site where the organization
D. In order to make a fourth request, each organization must sow that they
have deployed IPv6 on their key infrastructure (mailservers, nameservers,
web servers, etc.) (Note, this does not include web servers exclusively for
customer use,but the web servers that serve the pages for the organization
E. In order to make a fifth request, each organization must show that they
are providing native IPv6 capabilities to at least 10% of their customer
F. In order to make a sixth request, each organization must show that they
are providing native IPv6 capabilities to at least 50% of their customer
G. In order to make a seventh request, each organization must show that they
are providing native IPv6 capabilities to their entire customer base.
H. In order to make an eighth request, each organization must show that they
have converted at last 50% of their management and provisioning systems
I. In order to make a nineth or subsequent request, each organization must
show that the only remaining IPv4 dependencies on their network are related
to providing IPv4 services to customers and reaching external entities who
lack IPv6 capabilities.
I realize this isn’t a formal proposal by posting it here. I would like to see what the community thinks of the idea. If there seems to be some general support, then I will write it up and submit as a formal proposal.
More information about the RPD
|
OPCFW_CODE
|
Trade Agreements Approval Process EU vs UK
According to The Trade Justice Movement, the UK's current trade negotiation legislation (which hasn't been relevant for some time) means that only government approval (that is the executive) is required for negotiation and sign off of a trade deal rather than Parliamentary approval.
Parliament is eventually asked to ratify the agreed final deal, but in practice the procedure is a nominal one and MPs are not even guaranteed a vote on whether to approve or reject trade deals.
This contrasts to the EU process where the European Parliament also has to agree the deal.
In the final stages, after the European Parliament gives its consent, the Council adopts the decision to conclude the agreement.
At a high level is the Trade Justice Movement correct that once the UK leaves the EU there will be less oversight of trade negotiations by elected officials outside the Executive? How does this align with the principle of Parliamentary Sovereignty in the UK?
This is a general problem - the UK has (by legislation) gradually transferred a lot of power to the executive. The Brexit bill giving ministers power to rewrite legislation in formerly-EU areas is another recent example.
I haven't read what the Trade Justice Movement says, but what you say doesn't seem to mention ratification, which is different from signing. Can you quote from the TJM the relevant bit?
@Fizz added in what seemed to be the most relevant, which I admit I missed first time through the linked page. What form exactly ratification can take without a vote I don't know...
They don't mention any law so it might be one of those unwritten UK constitutional arrangements. Royal prerogative?
See https://politics.stackexchange.com/questions/41657/was-parliamentary-ratification-of-trade-treaties-necessary-in-the-uk-before-joi
Brythan covered the 2nd part (how does Parliamentary sovereignty interact with that alleged rule which limits initial parliamentary security if it exists). And it does exist, but only to some extent
Part 2 of the Constitutional Reform and Governance Act 2010 requires the Government to lay before Parliament most treaties it wishes to ratify, along with an Explanatory Memorandum. This gave statutory form to part of a previous constitutional convention on parliamentary involvement with treaties (the Ponsonby Rule).
The 2010 Act also for the first time gave parliamentary disapproval of treaties statutory effect, and effectively gave the House of Commons a new power to block ratification. The process is this:
The Government may not ratify the treaty for 21 ‘sitting days’ (ie days when both Houses were sitting) after it was laid before Parliament.
If within those 21 sitting days either House resolves that the treaty should not be ratified, by agreeing a motion on the floor of the House, the Government must lay before Parliament a statement setting out its reasons for nevertheless wanting to ratify.
If the Commons resolves against ratification – regardless of whether the Lords did or not – a further 21 sitting day period is triggered from when the Government’s statement is laid. During this period the Government cannot ratify the treaty.
If the Commons again resolves against ratification during this period, the process is repeated. This can continue indefinitely, in effect giving the Commons the power to block ratification.
Neither House has yet resolved against ratification of a treaty under these provisions, and there are limited options for how they can do so.
Despite looking like a major change, the provisions of the 2010 Act have several exclusions and limitations [Lists some types of treaties that are excluded from the 2010 Act, but this list doesn't include trade agreements.]
No requirement for debates or votes
Although the 2010 Act puts on a statutory footing Parliament’s opportunity to scrutinise treaties, it does not require Parliament to scrutinise, debate or vote on them (and it rarely does so).
There have been some calls for a process that results in more debates and votes on treaties, perhaps involving the committees, but Parliament has so far been reluctant to set up new mechanisms for treaties.
This is in contrast to many other countries where parliamentary approval is required at least for certain defined categories of treaty. Even some other ‘dualist’ countries have incorporated some kind of parliamentary scrutiny of treaties, for example Australia which has a dedicated Joint Standing Committee on Treaties.
Parliament can only oppose (or tacitly accept) a treaty in full – it cannot amend treaties.
There is no general requirement or mechanism for parliamentary scrutiny of (non-EU) treaties while the Government is negotiating them. So Parliament is not usually involved at the stage when changes could still be made to the text of a treaty.
This is fairly typical; the US is rare in allowing the Senate Committee on Foreign Relations to propose amendments to treaties.
There have been several proposals for parliamentary involvement before signature, to minimise disagreements when it comes to ratification, but there is also considerable opposition to such ideas.
However, Brexit has re-awakened the debate on how Parliament should be involved with treaties. [...]
So the Trade Justice Movement criticism is fairly correct in how it describes the facts. Of course, as Brythan notes, the Parliament can change its mind and pass a different law for how it is supposed to deal with international/trade agreements.
Also, the "dualist" issue needs expanding as it affected Brexit:
The corollary of the Government’s dominant role in making and ratifying treaties is the fact that treaties cannot change UK domestic law.
The UK is a ‘dualist’ state, which means that treaties are seen as automatically creating rights and duties only for the Government under international law. When the Government ratifies a treaty – even with Parliamentary involvement – this does not amount to legislating. For a treaty provision to become part of domestic law, the relevant legislature must explicitly incorporate it into domestic law.
The Miller judgment
This constitutional feature was central to the Supreme Court’s January 2017 judgment in the Miller case (about whether the UK Government needed the prior authority of Parliament in order to trigger the UK’s notification of withdrawal from the EU Treaties). The majority judgment set out ‘two features of the United Kingdom’s constitutional arrangements’:
… The first is that ministers generally enjoy a power freely to enter into and to terminate treaties without recourse to Parliament … The second feature is that ministers are not normally entitled to exercise any power they might otherwise have if it results in a change in UK domestic law unless statute, ie an Act of Parliament, so provides.
The ruling made it clear that the Government cannot make or withdraw from a treaty that amounts to a ‘major change to UK constitutional arrangements’ without an Act of Parliament:
We cannot accept that a major change to UK constitutional arrangements can be achieved by ministers alone; it must be effected in the only way that the UK constitution recognises, namely by Parliamentary legislation.
Applying the principle to this case, the judgment held that the UK Government could withdraw from the EU Treaties only if Parliament ‘positively created’ the power for ministers to do so. This was because the EU Treaties are a source of domestic law and domestic rights that ministers cannot alter using the prerogative alone.
Treaty provisions that are not incorporated into domestic law can have only indirect domestic legal effect at best. For example, where legislation is capable of two interpretations, one consistent with a treaty obligation and one inconsistent, then the courts will presume that Parliament intended to legislate in conformity with the treaty and not in conflict with it. [Gives some examples, but they are not from trade].
But usually, before the UK Government ratifies a treaty, it seeks to ensure that any domestic legislation needed to implement it is already in place.
Given how extensive trade treaties can be these days (e.g. covering non-tariff barriers), it's somewhat dubious that the UK government can implement them fully by itself, i.e. with no domestic legislation.
A more articulate criticism along the TJM lines (but not entirely convincing given how Commons opposed the Withdrawal Agreement), which expands on what I wrote in the previous paragraph is that
once the government has concluded a treaty which, in most cases, cannot easily be re-negotiated: the treaty is at that stage effectively “take it or leave it,” and those [government majority] MPs may well be reluctant to humiliate their government by telling it to leave it. [...]
the view that the traditional model is unsatisfactory has been gaining ground. The heart of the problem is that international treaties concerning trade are far removed from the Cobden-Chevalier treaty (the 1860 tariff reduction treaty between the UK and Second Empire France). It is no longer a matter of negotiating tariff reductions on wine and agricultural produce in return for tariff reductions on manufactures in the course of a few lunches and an audience with the Emperor. Modern trade treaties are vast documents including very large numbers of commitments on sensitive matters of supposedly domestic policy, ranging from food standards to data protection to immigration rules to public procurement. If the Crown can simply produce one of these vast treaties at the end of a negotiating process and say to parliament “here it is: take it or leave it,” in a context where neither rejection nor amendment is realistic, then effectively parliament has handed over to the Crown the power to legislate in a vast range of areas.
Criticism of the traditional model is particularly powerful in the case of the biggest trade treaty of them all, namely the hoped-for EU withdrawal agreement and any subsequent deep and comprehensive free trade agreement with the EU.
This was written in September 2018, before Commons rejected several times the Withdrawal Agreement.
How does this align with the principle of Parliamentary Sovereignty in the UK?
Parliamentary Sovereignty just says that if Parliament votes, its vote is binding until it votes differently. So if Parliament voted to delegate power to the executive, then the executive has that power. Parliamentary Sovereignty says that Parliament can revoke that power later, not that it can't delegate. So Parliament could, at any time, choose to review a trade agreement and make a binding decision, even one that overruled the government. Because Parliament is sovereign, that would be legal in the United Kingdom.
It might not be legal internationally. The other participants may consider the agreement binding. They may act on that. If the agreement offers an enforcement mechanism, they could use that. But they could not sue in UK courts, as there is no basis in UK law for binding decisions on Parliament. Parliament can always vote to set aside previous decisions of Parliament. Parliament may normally choose not to abrogate international agreements that way, but Parliamentary Sovereignty says that they could.
Note: I don't know whether Parliament has delegated that power. I'm just saying that it could have done so without violating Parliamentary Sovereignty. If it delegated, then it could revoke that delegation. If the position is unclear, then it could pass explicit legislation choosing to allow or block delegation.
It seems as far as parliament is concerned delegation to the Executive or to the EU is much of a muchness. Though I imagine in principle it is less complicated to recall that delegation from the Executive.
@Jontia I suspect you mean more complicated in practice, rather than in principle.
In practice I don't care to speculate, @origimbo, but in principle return power from the executive only involves UK politics, whereas returning power from the EU includes a second body with a separate agenda, making it I assume more difficult. Though Article 50 is itself straightforward in principle. At this stage, 3 years on, who knows what principle and practice even mean anymore?
|
STACK_EXCHANGE
|
High School Students Spearheading the Movement for Girls in Technology
A #WomensHistoryMonth post by Taylor Fang, high school student & co-founder of the Allgirlithm blog.
Last summer, 32 girls from around the world had the opportunity to participate in Stanford AI4ALL, a summer artificial intelligence camp especially for girls. While there, they completed their own research projects, focused on humanistic applications of AI like poverty or disaster relief, talked with and heard from a variety of AI programmers and advocates, and saw artificially-intelligent robots and machines at work.
As one of the participants, I was especially elated to meet so many other girls interested in the same things as I. While talking with two other alumni, we realized we had all grown up as one of only a handful of girls in our STEM activities. Additionally, the three of us being from Utah, Arkansas, and Arizona, we didn’t normally get access to opportunities like Stanford AI4ALL. During the last few days of camp — as a way to try and change this situation — we started Allgirlithm, a collaborative blog geared towards empowering women and girls in tech. We post news, resources, and opportunities, have several peer blog post writers, and partner with numerous other diversity initiatives, including CreAIte and PixelHacks. The collaborative spirit of Allgirlithm allowed me to glimpse the power of youth initiatives for computer science education across the US. Programs like Stanford AI4ALL serve as catalyzing forces for the next generation of computer advocates.
“high school students are a crucial part of the movement to make tech an inclusive and empowering place” — She++
Through participation in AI4ALL, and the year round support I receive as a member of the NCWIT Aspirations in Computing Community of 1000s of young women and girls aspiring in tech, I’ve seen many inspiring student-led initiatives and programs. Below I’ve detailed just a few student-led efforts to encourage more girls in tech, including my own project, Girls Explore Technology, or GET.
GET consisted of a five-week series of after school workshops for 24 middle school girls in Logan, Utah in collaboration with the Boys & Girls Club of Northern Utah and funded by AI4ALL’s Community Impact Grant. Each week, participants learned about and explored different aspects of technology, from computer science to artificial intelligence. GET aimed to help inspire and encourage participants to pursue technological fields through topic lectures, videos, hands-on projects, and guest speakers from the local university as well as computing fields. The girls had a great experience, as you can see by the attendee feedback:
“I have realized that there is so much more to computers than just programming.”
“I used to think that it wasn’t very fun but I learned it’s so fun.”
“I am more interested in it now and it seems a lot more creative.”
CreAIte runs artificial intelligence and art workshops for girls. Started in the Bay Area by three young women, they host workshops across the country, including in NYC, Austin, and Atlanta. Their workshops give an introduction to artificial intelligence and neural art, as well as featuring guest speakers from the tech industry. By exploring the combination of AI and art, they emphasize the diversity of AI’s applications and the creative thinking behind computer science.
Coder Girls is an international nonprofit dedicated to educating and enabling female K-12 students in computer science, started by Briana Berger. They have implemented chapters and curriculum with 85 Girl Scout councils and 350 schools to reach over 500,000 girls. Currently they teach coding to girls across the country and have chapters around the world, and also host a video competition to connect coding to other pursuits.
PixelHacks is the Bay Area’s first student-run, all-female high school hackathon. Catherine Yeo, PixelHacks’ founder and director, says that she was inspired to start PixelHacks “to inspire young girls and incite their interest in technology.” They have held two hackathons, in 2017 with 70 women and in 2018 with 120 women. Yeo was inspired to run PixelHacks II after “observing the tangible impact of PixelHacks on so many girls and seeing their newfound interest in programming grow.” Both events ran for 24 hours, and focused on building a software project to address a real-world problem, with prizes judged by a panel of professional women in tech fields.
These are just some of the different ideas and initiatives of high school students reaching out to girls in the community. Through my own experiences in these areas as well as through talking to others, I’ve seen that youth can be a force for spreading technology to other youth, and they often create a cycle of change in their communities. These students represent the frontier of tech inclusivity and accessibility, and for Women’s History Month especially, deserve to be celebrated for their dedication to spreading computer science to the next generation of female leaders and advocates.
Taylor Fang is a high school student in Utah. As STEM ambassador for the state of Utah, she volunteers to promote STEM to K-12 students, and is passionate about spreading these pursuits to more youth, especially girls. She was recently recognized by the National Center for Women & Information Technology (NCWIT) as an Aspirations in Computing National Honorable Mention, and selected as a Stanford she++ #include fellow. Besides STEM, she enjoys piano, tennis, and debate.
Read about her work on the Allgirlithm blog or get in touch at email@example.com
|
OPCFW_CODE
|
(I'm unsure as to whether this question can be regarded as calculus, however, I am currently studying Differential Calculus at university as a first year subject so, I just made the assumption there - this question came from my course's textbook.)
"Find all solutions of z^4 = 8 + 8√3i, and plot them in the complex plane."
First of all, I'm unsure with how you would be able to find the solutions for the equation. I understand that there are 4 roots to the equation, one is for linear equations in z and two for quadratics in z^2 and so on. I know that you need to express z^4 = 8 + 8√3i into the form r(cost + isint) and then use de Moivre's theorem from there. So, from doing that - how would you be able to get the points that need to be plotted? Although, I'm still confused as to how you would work out the entire question. This question was given as an example from my textbook however, I wasn't able to follow on with it quite clearly. What I am most particularly confused about is how you would plot the points. Apparently you can find one solution given that the other solutions of z^n = k form a regular polygon around the origin (which confuses me even more...) So, is anyone able to provide a helpful step-by-step procedure with how to get the answer?
Any help would be greatly appreciated. Thank you.
I was able to figure that when k = 0, 2(cis(pi/12)).
However my answers for the other values of k:
k = 1, 2(cis(7pi/12))
k = 2, 2(cis(13pi/12))
k = 3, 2(cis(19pi/12))
So, none of my other roots matched up as being 2(cis(5pi/12)). So, I'm pretty sure I'm doing something wrong there. What I did was basically sub-in the values of k from FernandoRevilla's answer: 2(cos(pi/12 + 2kpi/4) + isin(pi/12 + 2kpi/4)) Another similar example from my textbook did the same thing and its answers were correct. So, I'm unsure with what I did wrong. Or, could it be something that k = 1 is 2(cis(5pi/12))? Although then again, I don't know how that would work out with the other values of k.
Apparently the values plotted on a complex plane lie at the corners of a square. However, I really don't have a clue with how can I possibly plot these values on the complex plane. Would I have to change it back into cartesian form to be able to plot all the values on the complex plane then?
|
OPCFW_CODE
|
Stemming from our love of pets and unhealthy junk food consumption, we wanted to create an application that promotes a healthy diet incentivized by sustaining the life of a virtual companion. We think we can promote mobile health to all audiences this way.
How Do We Support the Octocat?
Octocat provides a fun and responsible way to track your dietary health! By snapping a picture of what you eat on the app, you will also feed your pet with it. Octocat will then respond depending on how healthy the food is. If unhealthy, Octocat will gag. If healthy, Octocat will beam! Be careful though, a cat only has nine lives and Octocat can’t take too much punishment. Happy eating!
Who Supports Its Home Base?
We primarily utilized the Android Studio IDE to build Octocat. Using Java and XML files, we connected our app to the Android phone’s camera for picture-taking capability. Then, we utilized image labeling from Google’s Firebase machine learning kit and visual API to process and label images. Finally, we brought in our Octocat from Giphy via the Transposit API.
How Hard Was It To Bring It Home?
We ran into the issue of displaying the GIFs on the image view of the emulator’s camera. Eventually, we were able to construct the proper code for inserting the gifs into the interface of our app. Another challenge we encountered was figuring out how to determine if food was unhealthy or not. Given the time constraint, we opted to use an in-app small database as criteria to determine healthy versus unhealthy.
It Was Well Worth It!
We’re proud we managed to embed GIFs onto our Android app. It was difficult but very exciting when we succeeded! We would like to thank Transposit for their support in implementing their API in our project. The ability to capture and process images as a primary mechanic of our application was something we are proud of, as we could truly experience first-hand the power of our code.
How Did Our Octopets Inspire Us?
At its core, we learned how to use Android Studio to build an app from the ground up. It was a completely new experience and we were challenged with editing XML files, familiarizing ourselves with the documentation, installing Firebase as a machine learning back end, and using the Transposit API all for the first time. While we could have done more, we found that creating an Android app with little to no experience was already a huge accomplishment in itself for us. By the way, the Octocats really liked those loaded pizzas!
What's Next for our Octopets?
We send Octopet to space! And then we find out if they can survive without gravity. I guess it depends on what my next meal going to be. owoctowo uwu
Log in or sign up for Devpost to join the conversation.
|
OPCFW_CODE
|
The purpose of this list is to collect formulations for this
context as my brain uses them.
This multi-meaning is here-now as I shall try to be consistent with my usages.
(This will grow. To plant a seed, contact me using the link at the bottom of the page below.)
Some preliminary cautions. My perspective on knowledge, which I believe fully consistent with general semantics, can be traced back to the ancient skeptics, and my screen name is "diogenes" after Diogenes of Sinope (the cynic), some of whose proclivities I sympathize with.
In the act of perception a figure object is distinguished from that which is not figure, called the background in a composite structure. As such the figure object is an abstraction from the composite figure and background structure. Neither figure nor background can be null or empty. Note that this illustration is just a metaphor mapping any sensory input (including from memory) to the visual. Using this metaphor we can speak of auditory "images", tactile "images", olfactory "images", even memory "images", etc., each an abstraction range (map) mapped from the corresponding domain (territory).
Abstraction - the output of a process of one or more transformations and/or transductions. See Abstraction. Alternatively, the selection of a figure object from a composite structure.
Atomic - primitive - cannot be distinguished into figure and background.
Background - the remainder of the structure from which an object is abstracted. Alternatively, that which is not figure.
Compare - evaluate two structures with respect to the appropriate binary distinction - same or different for two atomic structures and similar or different for two general structures.
Different - a primitive or higher level evaluation - see same, similar, identity.
Distinction - a binary crossing - alternatively the division of an object into figure and background. see Form of the Distinction
Domain - the mathematical term equivalent to "territory" in a mapping; the source "domain" of the mapping.
Equal - Equivalent under a reflexive, symmetric, and transitive relation - having the same value..
Essence - a high level abstraction - see essence.
Event - a hypothetical structure indexed by time said to be the "physical" cause of an object.
Fact - fact - a putative event or situation in the presumed external world - distinct from a "statement of fact".
False - false - one of two values that may be ascribed to statements, the other being true.
Figure - the object of attention abstracted from a composite structure.
Identity - an evaluation response in brains. See The Neurological Basis of Identity
Meaning - the auto-associative brain response to similar stimuli, consisting of external and internal sensory abstraction, recalling prior similar stimuli together with their associated prior responses and subsequent results. See meaning.
Not - the crossing of a binary distinction from a figure object to that which it is distinguished from.
Object - a reaction in brains.
Same - one of two primitive evaluations when comparing atomic structures - see identity, different.
Semantic reaction - response of a person to symbols and events in terms of meanings to the person. See Semantic Reaction.
Similar - one of two possible higher level evaluations when comparing two composite structures - see different.
Structure - a composition of one or more objects. Alternatively, a composition of one or more events. A structure can be atomic - consist of a single object or event - or composite - composed of two or more subordinate structures. A structure is said to subordinate to another structure if the later can be distinguished into figure and background. Either or both figure and background so formed are subordinate structures.
Time-binding - The practice of recording the past and writing plans for the future using language, enabling each new generation to build on the accomplishments of the past, avoid repeating the mistakes of the past, and communicate new knowledge to future generations - characterized by an exponentially increasing store of information. See Time-binding - the general notion.
True - true - one of two values that may be ascribed to statements, the other being false.
|This page was updated by Ralph Kenyon on 2009/11/16 at 10:54 and has been accessed 1817 times at 6 hits per month.|
|
OPCFW_CODE
|
https://youtu.be/jNPb_eo8BDA The DC-3 of Aeroplane Heaven has been one of the most popular among the new aircraft added in Microsoft Flight Simulator since the release of the 40th Anniversary Edition. To help the virtual pilots who may need help and guides, the aircraft designers recommended the two videos embedded here. https://youtu.be/-pEjEUbV_qw
https://youtu.be/dlBo_rkRkTM If you like closely to the advanced users performance in MSFS, you will notice some impressive camera moves, and spectacular fly-by footage. Sometimes they achieve that from a fixed point of view, other times they can also control the camera position. Thanks to Q8Pilot, you will learn basics and more advanced features of the camera controls made possible in Microsoft Flight Simulator so you can enjoy a bit more your aircraft model and the scenery around : create custom camera views, manage the drone camera and more. Watch his video tutorial posted at YouTube and embedded right above here.
https://youtu.be/Smja0oCRIJg In the above video, Navigraph explains how you can export your SimBrief flight plan to Fly The Maddog X, the MD-82 advanced simulation in MSFS. Take advantage of the efficient and free webservice, and of the Electronic Flight Bag directly at your fingertip in the 3D virtual cockpit. What are the supported files, where are the necessary options, and what are the directories do you need to select ?
https://youtu.be/-ELWChEiJYk When I watched streamers flying in MSFS, I was surprised by one of them switching between cockpit views from the menu, using the camera button in the tool bar. It does the job and gives also access to external or drone views. But, in the manner of Active Camera in the old Flight Simulators, you can also your own cockpit view presets, that you can edit and call with quick shortcuts on the keyboard. That’s the topic of this tutorial provided by Navigraph and set up in Fly The Maddog MSFS. You can follow the same procedure in any…
https://youtu.be/JNfbV78N8EM Update May 25th : all of the 6 videos have been now published. Part 5/6 covers Takeoff minimum, and the last part 6/6 is about Alternate Airport minimums. Jason of Navigraph team is doing again a nice work of explaining how to read the airport charts for beginners. He started a series of 6 videos, and we can watch now the new part 4 about the additional runway information. Previous parts were dedicated to the main info and chart identification, the second part was about the radio communication and VHF frequencies shown in the table of the chart, and…
https://youtu.be/Gl9Rrqujasg WIth the depth of systems simulation found in Fly The Maddog X in MSFS, programming the FMC needs some knowledge. Thanks to the youtuber Jonathan Beckett, the beginners will see how to program the FMC in the MD-82. Navigate through its different pages and learn more about the data to insert to prepare your flight.
https://youtu.be/QOaon9_lsHc Some of the MSFS airports published by Aerosoft support an advanced and custom functional VDGS module : Mega Airport Brussels, Paderborn, Trondheim and Cologne. This module is basically made to guide the aircraft on the parking spot with visual instruction for the parking alignement and distance to stop. If you learn how to set it up correctly with the videos embedded here, it can also display the flight destination, target off block time and the local temperature. https://youtu.be/WLm-MdWuPok https://youtu.be/RNf6_k4ydTI https://www.youtube.com/watch?v=7iSkJnUqQZ4
https://www.youtube.com/watch?v=o1Nj9kzJjwI Thanks to Jonathan Beckett, we found on YouTube his nice tutorial about Fly The Maddog MSFS, particularly well done for beginners. This is a starting guide from Cold and Dark cockpit, to show you during 16 minutes how to initiate onboard systems, and to prepare the aircraft for engines startup. Below is a much longer video, 58 minutes long, from AirborneGeek. It’s an “Ultimate Quick Start Guide” to cover more topics like an introduction to the Electronic Flight Bag, SimBrief flight plan import, the normal flows check-lists, and FMC programming. https://youtu.be/iuMrYu6TGQ0
https://youtu.be/JNfbV78N8EM About two years ago, the Navigraph community maanger published several videos to introduce and learn how to use airport charts in the context of flight simulation and I embedded them below. But the first video above is a brand new presentation that begins a series with the basics, to first understand the different parts of navigation charts layout, and the basic information printed on them. In 2020, Navigraph taught us how to use SID, STAR and ILS charts. Flying a Standard Instrument Departure procedure in PMDG 737 https://youtu.be/U7e-FQs40Jc Flying a Standard Instrument Arrival procedure in PMDG 737 https://youtu.be/IlzcyBp_M7c Fly…
https://www.youtube.com/watch?v=nK7unC-9y5Q&list=PLGEyJ3_yISBtktA6Hv-QyUszEeGJa03iV With 11 new tutorial videos about flying the DC-6 in MSFS, PMDG already open the flightschool while the aircraft add-on should be ready this year. It will be followed by the 737 NGX, and even the entire products line 747, 777 will arrive into MSFS. Nevertheless, that doesn’t mean PMDG stops P3D development as they still have “updates pending for the 737, 777, 747 and DC-6 product lines and new products still on the development agenda” thanks to the Prepar3D “enterprise licensing to commercial ventures”.
|
OPCFW_CODE
|
I removed Ubuntu from my system yesterday; I’ve already got problems with memory and decided I didn’t want it cluttering up my already sparse memory (1Gb!). One gigabyte isn’t enough? Don’t get me started…
Anyway, I removed it, and it was interesting to see what went with it:
These are good apps, but I don’t need another runtime environment cluttering up my sparse (sparse??) memory. There are a lot of other applications: the Mono folks have compiled a list, and the folks campaigning against Novell (and Mono) have a list also.
Most of these I never use (except F-Spot and Gnome Do) but I won’t miss them. Ubuntu has officially replaced F-Spot with Shotwell, and Gnome Do is not quite as good as the original Quicksilver (I’ve a Mac Mini with Quicksilver installed).
I’m already using some massive memory-abusing apps. For example, consider Google Chrome with a gazillion tabs, or NetBeans, or Gnome itself. I can’t replace NetBeans (unless I want to use the massive Eclipse instead…) but sometimes I use Midori instead of Google Chrome, or WindowMaker instead of Gnome (all very nice and highly recommended!). It also appears that the Google Chrome extension Too Many Tabs will free up memory when you “suspend” a tab; fantastic!
Try some of these lightweight items and see if you won’t have a snappier system!
When you expand OpenVMS memory, there are a number of other parameters you may wish to revisit. If you increase your memory dramatically, you will certainly have to change these SYSGEN parameters. You can also look each parameter up using HELP:
HELP SYS VMS_MAX_CACHE
(The parameter SYS is short for SYS_PARAMETERS.)
Some parameters to consider changing are the following:
- GBLPAGES. If you don’t increase this, you’ll be getting warning messages when you try to take advantage of all that memory. In short, this parameter sets the amount of memory that the kernel can keep track of; if you use too much this parameter is a limiting factor.
- GBLPAGFIL. The page file needs to be able to take all of the pages that it might be called upon to reserve; increase this parameter.
- VCC_CACHE_MAX. If you’ve not tuned your cache (XFC) then you’ll find half of your memory to be taken by the cache. This is almost certainly not what you want; modify this parameter to reduce the amount of memory your cache is allowed to take. Even so, do remember that your cache will decrease and increase dynamically in any case – but if you scale it back, then you’re not wasting memory so much.
- MAXPROCESSCNT. This sets the maximum number of process slots – in essence, the maximum process count (which is what the parameter is called, after all). If you have a lot more memory, you’ll want to use it to run more, right? That’s not any good if you use too many processes and can’t run any more.
- BALSETCNT. If you set MAXPROCESSCNT, you should set BALSETCNT to the same amount minus two – and never higher.
These changes can be made in the SYS$SYSTEM:MODPARAMS.DAT file and then use the AUTOGEN command to configure the sysetm. The MODPARAMS.DAT file uses a simple format; for our purposes, you can use something like this:
In place of
ADD_* you can also use
MIN_*. You can see more examples in
HELP AUTOGEN MODPARAMS.DAT. AUTOGEN is described in the HELP; be careful using it! You don’t want to muck up the system so bad you have to reboot or to reinstall.
Powered by ScribeFire.
How much memory is in this machine?
It would seem that answering this question ought to be easy; it is – but every system has the answer in a different place. Most put an answer of some sort into kernel messages reported by dmesg (AIX apparently does not).
Most systems have a program for system inventory which reports a variety of things, including memory.
Rather than go into great detail about each one, we’ll just put these out there for all of you to reference. Each environment has multiple commands that give available memory; each command is listed below.
Without further ado, here are a few answers to this burning question:
dmesg | grep mem
prtdiag | grep Memory
prtconf -v | grep Memory
lsattr -E1 sys0 -a realmem
dmesg | grep Physical
/opt/ignite/bin/print_manifest | grep Memory
machinfo | grep Memory
dmesg | grep Memory
grep -i memtotal /proc/meminfo
show mem /page
dmesg | grep memory
grep memory /var/run/dmesg.boot
sysctl -a | grep mem
|
OPCFW_CODE
|
Failing to push to heroku for nodejs web3 - it is trying to find<EMAIL_ADDRESS>postinstall
I had pushed a node.js react with DApp with<EMAIL_ADDRESS>yesterday (2/6/19) and it worked. Since this morning, when I try to push the same codes it comes up with
<EMAIL_ADDRESS>postinstall
C:\BaandaDev\baandadev-03\client\node_modules\web3
node angular-patch.js module.js:549
throw err;
^ Error: Cannot find module 'C:\BaandaDev\baandadev-03\client\node_modules\web3\angular-patch.js'
(Please disregard baanda ... those are my directory names but the
error is emerging from node modules)
Why is it looking for<EMAIL_ADDRESS>when I am not even asking for it?
I have reduced the version till<EMAIL_ADDRESS>and heroku still looks for beta.42 angular patch (I am not even using angular). The worst thing is ... it worked yesterday.
The only thing I can think of is, Ethereum released<EMAIL_ADDRESS>yesterday. But, it does not explain why heroku would look for a patch for something else.
Interestingly, when I clone the one that is working in heroku in my local machine using something like heroku git:clone -a baandadev03-t2 and then run npm install to re-instead it in my local machine, it throws the same problem in my local machine. However, if I npm i --save<EMAIL_ADDRESS>manually, it does deploy without a hitch.
Suspect: npm install (generic that install node-modules from package.json is broken for web3 somehow). That is why it is breaking in heroku as well as in local. But, that's a suspect only.
$ git push heroku master (and heroku should push it in and not look for modules I am not asking for).
Likely it is because of web3 npm is breaking. They (web3) have release version web1.0.0-beta.40 to 43 in 3 days. But, I also realized that generic npm install, used for generating node-modules from package.json (node.js), is breaking on web3 on both side. In heroku after push when they try to gen the node-modules. And, when I clone the version from heroku into my own laptop, it cry at the same place (while creating node module at web3). Both places are looking for angular library that it cannot find (I do not know why they would do that - a bug?). And, even when I specify much lower version of web3, it still looks for it that it never did in the past.
I have reported the bug and hope they fix it quickly. I removed all web3 reference from my application and it went into heroku nice and fine. But, that is not good when app depends on smart contracts ... right :)?
|
STACK_EXCHANGE
|
Amazon Recruiting Freshers as Software Development Engineer @ Bangalore
Company Name : Amazon
Location : Bangalore
Experience : 0 – 3 yrs.
The Seller Services (Seller Experience) division is one of the fastest growing businesses within Amazon.
There are over 2.4 million merchants, rangingstudents selling their text books to nation-wide retail chains, selling on Amazon’s international marketplaces. Collectively, these merchants are responsible for selling about 1/3rd ofitems orderedAmazon.com world-wide.
The revenue and free cash flow generated by the Seller Services business are at the same proportions.
This is an already-large business that is continuing to grow at a high rate year-over-year and we have several new initiatives that will further accelerate this trend.
The technology platform is the cornerstone for our growth.
The team is investing heavily in building a highly scalable global platform that can handle millions of transactions every month worth billions of dollars while maintaining buyer’s and seller’s trust on the platform.
The team aspires to provide sellers with state-of-the-art tools using technologies such as Machine Learning,
Natural Language Processing for managing their business.
This unique combination of technologies has enabled the team to create a niche for the team within Amazon.
The development team in Hyderabad, India has theter to build this exhaustive platform for Sellers.
We are looking for a development engineer for the Hyderabad team who is passionate about global development and building highly scalable and extensible platforms.
The engineer will need to have strong focus on engineering excellence and performance metrics.
He or she will also have very strong communication skills.
0-3 years of experience in product development working on highly scalable systems
Passion for building scalable, global, complex systems to solve problems with proven ability to deliver high quality software.
Solid understanding of Object-Oriented design and concepts.
Expert knowledge of Java, C/C++ is a must. Knowledge of Perl/Python would be an added advantage
Should be comfortable with working on Unix/Linux based operating systems
Self-directed and capable of working effectively in a highly innovative and fast-paced environment.
BS or MS in Computer Science or in a relevant Engineering discipline.
Understanding of databases solutions and previous record of having worked on high performing solutions would be a big plus
Experience in developing new frameworks and an inclination of developing a product instead of a customized application
· Candidates must demonstrate technical leadership, strong verbal and written communication skills
· Candidate must have a strong customer focus
· Technical aptitude to quickly grasp complex technical issues and communicate directly with technical teams
· Proven track record of taking ownership and driving results on technical projects
· 0-3 years of experience in product development working on highly scalable systems
· Passion for building scalable, global, complex systems to solve problems with proven ability to deliver high quality software.
· Solid understanding of Object-Oriented design and concepts.
· Experience of Test Driven Development methodology is an advantage
· Expert knowledge of Java or C/C++ is a must. Knowledge of Perl & Python would be an added advantage
· Ability to innovate and think out of box solutions
|
OPCFW_CODE
|
Having an XML database of 60,000+ car parts and accessories products, need to create/convert file into an Amazon and/or eBay compatible file like csv, tdb, etc... also to having them easy to manage and edit, synchronize with many ebay user ID and one place. Who understands and knows how to do that, please leave whats app for private chat, need this
...script basically can convert a CSV in turbolister format in magmi csv format. I need 1. To optimize memory performance. Now the script load the whole csv into an array, I want it to process che csv line by line, to avoit php memory limit problems 2. import ebay category 1 path into categories field: to get full path of ebay category we need to call
Hello I have a csv file from turbolister that I need to convert in a magmi csv to import configurable products in magento. You don't need to have skills in magmi or magento, just strong skills in csv and XML file manipulation. I have a csv file from turblister and I need to do the following operation on it. At the end we will have a new csv like specified
...software that will at scheduled intervals convert CSV files from eBay and Amazon and input them via API into our hosted inventory management software. Files will be emailed to a central email address at scheduled intervals, the software needs to get the files, convert them into a properly formatted XML file, and make the appropriate API calls to our
Convert attached xml file to use as a category selector for a form, must return final category id number to form category input, should open with main categories and then chose subcategories, sub sub categories, etc until the category is reached and then the ability to accept that cat id and input into the category form input, this should all be shown
Jax Music Supply is an online based guitar and accessories retailer. Started in 2007, we market on a variety of web based platforms including eBay, Amazon, and our website. We are in the process of upgrading our tools to allow easier automation. To that end, we need a bulk upload function for one of our marketplaces. Generally speaking, the software
...*PROJECT DETAILS* I have a menu in PSD format that I need to convert to a flash based dropdown menu WITH XML file. The XML file needs to be able to update and add new menu content, URL's for the menu and the images for each of the category items (parent level). The background for the SWF file needs to be TRANSPARENT. As you can see the link belo...
I have a PSD that I need to convert to a flash based dropdown menu. The flash file would need an external XML file that can be easily updated by the client to update the menu navigation content and the URL's. If the background for each image can also be updated via XML that would be better. The dropdown effect can be simple, nothing fancy - it needs
...script that can work with the EBAY API to extract all 'Awaiting Postage' information from 'Selling Manager' and convert this information into an simple XML file. ## Deliverables Attached is the XML file format, only the fields in red need populating with data from ebay or the current date. The extraction of data from eba...
...take the stock items and convert it to a cvs file, another gaf made a conversion tool to transfer this to a ebay turbo lister file. however i want a more basic program it works but it can edit a well which i want removed, it also dont handle ebay catergorys well. so want a program that will convert ione way and also convert turbo lister fil...
...download a text file Which is then converted to a turbo lister file? The turbo lister than picks this up and I am having to go and change all the categories before it can be loaded into eBay Which is pretty time consuming? What I need is software that will take all the data from my site or from my easy populace text file generated from my
We need someone to convert our xml file that ebay provide of our shop listings to enable upload to google base. A very quick and simple job for someone with the know how, we would like the succesful bidder to have done this type of work before and be able to provide proof of this. We consider this a " test " job as we run a succesful e-commerce
...of) clone of the eBay "Sell Item"-Form. My Problem are the Category specific Informations. I have a XML-File, where all specific Informations are listed. Let me explain by an Example: If someone wants to sell a Car, eBay asks for Number of Doors, Time of first use, HP, ... If someone wants to sell a Computer, eBay asks for HD-Size,
...encode/decode XML in Ebay's Turbo Lister CSV (Comma Separated Value) file format. When exporting listings from Turbo Lister in CSV format, there are four fields that seem to be made up of garbled information ('Attributes', 'ShippingServiceOptions', 'InternationalShippingServiceOptions', 'Ship-To Locations'). In fact these fields are enc...
...currently integrate with eBay using the following technologies: Java, XML, XML Schema, JAXB. We generate the request XML using a string template. The project goal is to convert this code to SOAP. In addition, create test cases to cover the revised functionality. The current functionality includes: list inventory item on eBay (add/modify/delete), get
...We currently have access to the eBay API's for the test environment. What I need is a program to be able to check for new feedback and automatically respond to positive feedback. The whole process is a very, very simple one; it is basically a case of using the API to gather the total number of feedbacks for a given eBay account, compare this number to
|
OPCFW_CODE
|
Plugin MIDI Filtering and Automation
Each plugin (both instruments and insert effects) contains a MIDI Routing map. The MIDI Routing map changes MIDI input events before they enter the plugin. Examples of routing and filtering include masking out MIDI input, re-channelizing it, layering across channels, limiting key ranges, transposing keys, and many advanced operations on MIDI CC events.
|•||Each plugin has a full set of MIDI Routing options|
|•||Each plugin's MIDI routing is independently configurable for each scene|
Open the plugin console for the plugin and select the MIDI Routing tab.
The MIDI Routing tab configures channel-specific MIDI input settings.
Each MIDI input port can be either enabled or disabled. If enabled, each input channel can be remapped and layered. In the above picture channel 6 is layered to channel 5 and channel 6. Each MIDI event (e.g. notes) on channel 6 is duplicated and sent to the instrument both on channel 5 and channel 6.
Right click a row in the To column to make changes to this mapping. Click the triangle icon to expand or collapse portions of the channel display.
Right click the To column on the port row to enable or disable the entire port, disable all its channels, reset channels to a 1-1 map (unity), or map all channels to channel 1.
For each MIDI input port and channel from/to pair you can independently configure:
|•||What MIDI note ranges a instrument will respond to (splits and layers configuration)|
|•||How incoming MIDI notes will be transposed (after note range filtering)|
|•||How incoming MIDI continuous controller data is remapped to different controller numbers.|
Use the Quick MIDI Routing button to quickly set up basic channel routing.
The menu item “Full Reset” resets all port mappings and then applies the selected port/channel. The menu item “Port Reset” only changes the port selected while leaving other ports unaffected.
Use the Advanced MIDI Channel Routing dialog to configure channel from/to pairs using a convenient matrix view:
Right click on a MIDI port to alter how events are filtered for this plugin.
|•||Copy/Paste enables you to copy the MIDI routing for one port to another|
|•||Copy to all ports on this plugin enables you to duplicate the configuration of one port to all ports on this plugin only.|
|•||Copy port to all scenes copies the port configuration to all other scenes for this plugin and this port|
Click Save to save the MIDI Routing of the currently selected port.
Clock Open/Load to load a MIDI Routing into the currently selected port.
Press Save to save a configuration. This includes for each MIDI input:
This enables frequently used MIDI configuration parameters to be applied to other Instrument Modules or stored for future use. When a MIDI Configuration is loaded you may optionally load or ignore specific portions of the configuration.
Tip: Each MIDI Configuration file stores information about a single MIDI port. If you save a file, it will reflect the configuration of the currently selected port (if a channel from/to pair is selected beneath it, it will still save the parent port info.) When you load a MIDI Configuration file, it is loaded into the current port. This makes it useful to save MIDI Configuration files that are “device-specific” because devices are attached to ports.
A last-used MIDI configuration is always saved with an Instrument Module. Usually, this MIDI configuration will be reloaded automatically on any future Instrument Module using the same instrument. However, if you select a MIDI configuration to be the default, it will be used instead. This ‘paperclip’ icon will be displayed with a green color if a default exists for this instrument module. Double clicking the button erases any existing default.
|
OPCFW_CODE
|
Get started with app deployment in Microsoft Intune
Updated: August 20, 2015
Applies To: Microsoft Intune
Before you manage and deploy apps in Microsoft Intune, use the information in this section to learn about:
The Apps workspace in the Microsoft Intune administrator console provides information about detected and managed apps. The following pages are available:
Apps workspace page
Lists app deployment alerts and the current status of your cloud storage.
Shows the software inventory data that Microsoft Intune detects on managed computers. Software inventory is only available for computers. Therefore, Intune does not detect or list apps for managed mobile devices. You can perform the following tasks from the Detected Software workspace:
The Apps page is where the apps that you want to deploy to your managed computers and mobile devices is uploaded, deployed, and managed on an ongoing basis. You can perform the following tasks:
The Microsoft Intune Software Publisher starts when you add or modify apps from the Microsoft Intune administrator console. From the publisher, you select a software installer type that will either upload apps (programs for computers or apps for mobile devices) to be stored in Intune cloud storage, or link to an online store or web application.
Before you begin to use the Microsoft Intune Software Publisher, you must install the full version of Microsoft .NET Framework 4.0. After you install the .NET Framework, you might have to restart your computer before the Microsoft Intune Software Publisher will open correctly. For details, see Microsoft .NET Framework 4 (Web Installer).
All apps that you deploy must be packaged and uploaded to Microsoft Intune cloud storage. Before you deploy apps, make sure there is enough storage space available to upload the app. A trial subscription of Intune includes 2 gigabytes (GB) of cloud-based storage that is used to store managed apps and updates. A paid subscription includes 20GB, with the option to purchase additional storage space at 1GB increments by using the Microsoft Intune Extra Storage Add-on. The following rules apply to purchasing additional cloud-based storage for Intune:
You cannot purchase additional storage during any Intune pre-release or trial period.
You must have an active paid subscription in order to purchase additional storage.
Only billing administrators or global administrators for your Microsoft Online Service can purchase additional storage through the Intune account portal. To add, delete, or manage these administrators, you must be a global administrator and sign in to the Intune account portal.
If you are a volume licensing customer who has purchased Intune or the Microsoft Intune Add-on through the enterprise agreement, contact your Microsoft Account Manager or Microsoft Partner for pricing information and to purchase additional storage.
In the Microsoft Intune account portal, in the left pane under Subscriptions, click Manage.
On the Billing and subscription management page, click the Intune subscription for which you want to purchase additional storage.
On the Subscription details page, by Optional add-ons, click Add more.
On the Select number of licenses page, under Optional add-ons, enter the additional number of GB of storage that you want to buy for the Microsoft Intune Extra Storage add-on. For example, if you currently have 20 GB, and you want to add another 10 GB, enter 10.
On the Review important information page, review the order summary information, and if it is correct, click Place order.
The Order confirmation page opens to display the order details. Click Finish to complete the process.
Mobile devices and computers must run a supported operating system to install apps that you deploy by using Intune.
For the complete list of supported operating systems for mobile devices, see Mobile device management capabilities in Microsoft Intune.
For the complete list of supported operating systems for managed computers, see Computer management capabilities in Microsoft Intune.
Intune lets you deploy the following software installation types:
Use the Software Installer installation type to:
Upload a signed app package to Microsoft Intune cloud storage and make the app available to users through the Microsoft Intune company portal.
Upload apps that will be deployed to computers that run the Intune computer client.
Install apps on managed mobile devices from an installation file, bypassing the app store (known as side loading).
Use the following table to help you understand the different software installer file types:
Software installer type
Windows Installer (*.exe, *.msi)
App Package for Android (*.apk file)
The App Package for Android is not available as a software installer type until you set the Mobile Device Management Authority to Microsoft Intune.
For details, see Set up Android management with Microsoft Intune.
App Package for iOS (*.ipa file)
For details, see Set up iOS management with Microsoft Intune.
Windows Phone app package (*.xap, .appx, .appxbundle)
Before you distribute a Windows Phone 8 or Windows Phone 8.1 app package, you must set the mobile device management authority to Microsoft Intune, set up users, and obtain an enterprise mobile code-signing certificate. For details, see Set up Windows Phone management with Microsoft Intune.
Windows app package (.appx, .appxbundle)
The Windows appx package for Windows RT and enrolled Windows 8.1 devices is not available until you set the mobile device management authority to Microsoft Intune, provision users, obtain a code-signing certificate, and sideloading product activation keys. For details, see Set up Windows Phone management with Microsoft Intune.
Use the External Link installation type when you have a:
URL that lets users download an app from the online store. This installation type is supported by the following device platforms:
Windows 8 and later
Windows Phone 8 and later
Link to a web-based app that runs from the web browser.
This installation type is available to all devices supported by Intune.
External links are made available to users through the Microsoft Intune company portal.
Use the Managed iOS app from the app store installation type to manage and deploy iOS apps that are free of charge from the iOS app store. You can deploy this installer type as a required install to make it mandatory on managed devices (which also makes it available to install from the mobile web portal), or deploy it as available to let users download it from the mobile web portal. You can also associate mobile application management policies with compatible apps and review their status in the administrator console. Managed iOS apps are not stored in your Intune cloud storage space.
This section provides an overview of the app deployment process in Intune.
Ensure that the app you want to deploy is supported by Intune.
Create groups of users or devices to which you can deploy the app.
Publish the app to Microsoft Intune cloud storage.
Deploy the app to mobile devices or computers using the deployment action you require.
Monitor the app deployment.
When you deploy apps, you can choose from one of the following deployment actions:
Required install – The app is installed onto the device, with no end-user intervention required.
For iOS devices that are not in supervised mode, the user must accept the app offer before it is installed. For information about using supervised mode in Microsoft Intune, see Use iOS configuration policies to manage device settings with Microsoft Intune.
For Android devices, the user must accept the app offer before it is installed.
Available install – The app is displayed in the company portal, and end-users can install it on-demand.
Uninstall – The app is uninstalled from the device.
Not applicable – The app is not displayed in the company portal, and is not installed to any devices.
Windows app package (deployed to a user group)
Windows app package (deployed to a device group
App package for mobile devices (deployed to a user group)
App package for mobile devices (deployed to a device group)
Windows Installer (deployed to a user group)
Windows Installer (deployed to a device group)
External link (deployed to a user group)
External link (deployed to a device group)
Managed iOS app from the app store (deployed to a user group)
Managed iOS app from the app store (deployed to a device group)
When two deployments, with the same deployment action are received by a device, the following rules apply:
Deployments to a device group take precedence over deployments to a user group. However, if an app is deployed to a user group with a deployment action of Available and the same app is also deployed to a device group with a deployment action of Not Applicable, the app will be made available in the company portal for users to install.
The intent of the IT admin takes precedence over the end-user.
An install actions takes precedence over an uninstall action.
If both a required install and an available install are received by a device, the actions are combined (the app is both required and available).
|
OPCFW_CODE
|
package wecare.backend.service;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import wecare.backend.model.*;
import wecare.backend.model.dto.ConsultedPatientsCount;
import wecare.backend.model.dto.DoctorDataCard;
import wecare.backend.repository.ClinicDateRepository;
import wecare.backend.repository.NurseRepository;
import wecare.backend.repository.PatientClinicProfileRepository;
import java.time.*;
import java.time.format.DateTimeFormatter;
import java.util.*;
@Service
public class NurseDashboardService {
@Autowired
private PatientClinicProfileRepository patientClinicProfileRepository;
@Autowired
private NurseRepository nurserRpo;
@Autowired
private ClinicDateRepository clinicDateRepo;
//Data cards in Doctor Dashboard
public List<DoctorDataCard> getNurseCardDetails(Integer nurseId) {
Optional<Nurse> nurse = nurserRpo.findById(nurseId); //find the nurse by id
Nurse resultNurse = nurse.get(); //store the nurse instance in result nurse
Integer clinicId = resultNurse.getClinic().getId(); //get the nurse's clinic id
System.out.println(resultNurse);
List<DoctorDataCard> nurseDataCardList = new ArrayList<>(); //this is the object array return finally
//Total Registered Patients In Clinic Card
DoctorDataCard patientCountObject = new DoctorDataCard(); //make an instance to store the patient count in the clinic
Integer patientCountInClinic = patientClinicProfileRepository.getPatientCountInClinic(clinicId);
patientCountObject.setName("Total Registered Patients In Clinic");
patientCountObject.setValue(Integer.toString(patientCountInClinic));
nurseDataCardList.add(patientCountObject); //push the instance to the object array
//Patients Registered In Last Week
DoctorDataCard patientCountLastweekObject = new DoctorDataCard();
LocalDate toDay = new Date().toInstant().atZone(ZoneId.systemDefault()).toLocalDate(); //convert the current date to a simple format
LocalDate weekBeforeToday = LocalDate.now().minusDays(7); //get the date week before today
Integer patientCountLastweek = patientClinicProfileRepository.getPatientCountInLastweek(clinicId, toDay, weekBeforeToday); //get the last week registered patient count to the clinic
patientCountLastweekObject.setName("Patients Registered In Last Week");
patientCountLastweekObject.setValue(Integer.toString(patientCountLastweek));
nurseDataCardList.add(patientCountLastweekObject);
//Next clinic Date Card
List<NurseSchedule> nurseSchedule = resultNurse.getNurseSchedule(); //get doctor schedule details of the doctor
ArrayList scheduleDays = new ArrayList(); //make an array to store the schedule days
for (int k = 0; k < nurseSchedule.size(); k++) {
scheduleDays.add(nurseSchedule.get(k).getClinicSchedule().getDay().toUpperCase()); //[MONDAY,WEDNESSDAY,FRIDAY]
}
DoctorDataCard nextClinicDateObject = new DoctorDataCard(); //make an instance of datacard
LocalDate checkDate = toDay; //generate today date
outerloop:
//this is use for break nested for loop
for (int j = 1; j < 7; j++) {
LocalDate nextDate = checkDate.plusDays(j); //get the tomorrow date
DayOfWeek day = nextDate.getDayOfWeek(); //get the day from tomorrow date - ( 2021-9-1)=>WEDNESSDAY
for (int i = 0; i < scheduleDays.size(); i++) {
if (day.name().equals(scheduleDays.get(i))) { //check if the next day is included in the schedule array
nextClinicDateObject.setName("Next clinic Date");
DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-LL-dd");
String formattedString = nextDate.format(formatter);
nextClinicDateObject.setValue(formattedString); //convert date into string (because doctor data card values are not in same data type)
nurseDataCardList.add(nextClinicDateObject); //add the instance to the array which will be return at the end
//get number of patients in next clinic
DoctorDataCard patientCountInNextClinicObject = new DoctorDataCard();
ZoneId systemTimeZone = ZoneId.systemDefault();
ZonedDateTime zoneDateTime = nextDate.atStartOfDay(systemTimeZone);
Date utilDate = Date.from(zoneDateTime.toInstant());
Integer noOfPatientInNextClinic = clinicDateRepo.findFirstByClinicSchedule_ClinicIdAndDate(clinicId, utilDate).getNoPatients();
patientCountInNextClinicObject.setName("Patients in Next Clinic");
patientCountInNextClinicObject.setValue(Integer.toString(noOfPatientInNextClinic));
nurseDataCardList.add(patientCountInNextClinicObject);
break outerloop;
}
}
}
return nurseDataCardList;
}
//Consulted Patients data
public ArrayList<ConsultedPatientsCount> getConsultedPatientsData(Integer nurseId) {
ArrayList<ConsultedPatientsCount> consultedPatients = new ArrayList<>(); // this is the array which return finally
Optional<Nurse> nurse = nurserRpo.findById(nurseId);
Nurse resultNurse = nurse.get();
Integer clinicId = resultNurse.getClinic().getId(); //get the clinic id
List<NurseSchedule> nurseSchedule = resultNurse.getNurseSchedule(); //get nurse schedule details of the nurse
ArrayList scheduleDays = new ArrayList(); //make an array to store the schedule days
for (int k = 0; k < nurseSchedule.size(); k++) {
scheduleDays.add(nurseSchedule.get(k).getClinicSchedule().getDay().toUpperCase()); //[MONDAY,WEDNESSDAY,FRIDAY]
}
LocalDate toDay = new Date().toInstant().atZone(ZoneId.systemDefault()).toLocalDate(); //convert the current date to a simple format
LocalDate MonthBeforeToday = LocalDate.now().minusDays(30); //get the date month before today
for (int i = 0; i < 30; i++) {
LocalDate nextDate = MonthBeforeToday.plusDays(i);
DayOfWeek dayOfWeekNextDay = nextDate.getDayOfWeek(); //get the next from day which past 30 days- ( 2021-9-1)=>WEDNESSDAY
for (int j = 0; j < scheduleDays.size(); j++) {
if (dayOfWeekNextDay.name().equals(scheduleDays.get(j))) { //check if the next day is included in the schedule array
//convert LocalDate to Date
ZoneId systemTimeZone = ZoneId.systemDefault();
ZonedDateTime zoneDateTime = nextDate.atStartOfDay(systemTimeZone);
Date utilDate = Date.from(zoneDateTime.toInstant()); //utilDate means Date type of nextClinicDate
//get the consulted patients of the perticular date
Integer consultedCount=clinicDateRepo.getConsultedPatients(clinicId,utilDate);
ConsultedPatientsCount consultedPatientsCountObject= new ConsultedPatientsCount();
consultedPatientsCountObject.setClinicDate(nextDate);
consultedPatientsCountObject.setCount(consultedCount);
consultedPatients.add(consultedPatientsCountObject);
}
}
}
return consultedPatients;
}
}
|
STACK_EDU
|
Feedback on the Hackers Handbook series.
There's been reaction to the start of the Hackers Handbook already. Some of it's expectedly dismissive ('we are the most secure operating system in the universe') but some of it - eg at Mac4Ever - raises good points.
Breaking the Chain
'I find the issue of the virus is more and more recurring', writes Rompod. 'I don't know what's reality but I am beginning more and more to be on my guard against these new tricks.'
'But they say it because it is true', writes Fuzzi. 'To be able to propagate a virus needs a good base install of machines for the Windows viruses, I mean the real ones which can propagate and which arrive have to send copies of themselves to other machines without needing permissions. It's enough that only one machine is Mac or a Linux PC and the chain is broken. So worms won't be propagating quickly tomorrow on Linux or the Mac. Whereas Windows with its superb park of installed machines is a privileged target.'
And that's true - sort of: on 5 May 2000 when the Love Bug whacked the world almost all personal computers ran Windows and the great majority of them ran Outlook as well.
But not all were running Windows and far from all Windows users had Outlook installed - and yet the Love Bug did a good job of propagation anyway. As the Love Bug propagated itself to up to fifty new machines for each corrupted machine things built up rapidly, no matter a few 'chains' were 'broken'.
It comes down to the percentages. With a 95% demographic only a few machines won't be affected; ceteris paribus and with a 5% demographic only a few machines will be affected. It all comes down to how tightly knit the 'Mac community' is. What's the percentage of OS X users in the typical OS X address book? That's the critical issue.
As Charlie Miller said, it's going to take a bigger demographic for this to get interesting, for the worm authors to see a point in it. Currently they're working with a market demographic of 95% and that gives them a good saturation and good reason to keep concentrating on Windows to the exclusion of OS X.
But as #1) Apple are currently outselling PC OEMs; and #2) it's relatively simple to create an OS X worm things will change rapidly when the market reaches that tipping point.
VD is a bit troubled. 'I don't understand anything', he writes. 'Disk Utility reads the files in /Library/Receipts to determine the correct permissions. Thus if I am an admin I can indeed corrupt file permissions by putting tricks in /Library/Receipts. But if I am an admin I can also type 'sudo chmod' and enter my password to corrupt file permissions. Thus I wouldn't call this a security flaw.'
Poor VD's a bit confused: he forgets that in the one case he needs the password and fortuitously has it whilst in the other case the worm doesn't have the password and thanks to Apple doesn't need it either.
Arnaud de Brescia sails out into dangerous waters.
'Why do we get these trolls when it comes to OS X security? I tested all the POCs available at the MOAB site on OS X 10.4.10 with QuickTime 7.2 and the Security Update 2007-007 - and my system's invulnerable to all of them!'
If CVEs are not fixed by Apple and still open then there are security holes. de Brescia simply lacks the chops to see them. As regards MOAB #15 it's definitely wide open. MOAB #15 also has its CVE, has recently again 'encore une fois' been recognised by Apple as being 'a known issue', and very much will own any OS X 10.4.10 QuickTime 7.2 Security Update 2007-007 system out there today.
Perhaps part of the task of assessing the efficacy of the MOAB exploits is grasping the point behind them. You can't just run one and expect your computer to explode like a ginormous firework.
The Hackers Handbook — Foreword
|
OPCFW_CODE
|
Introduction: 10 Steps to Making Shrinky Circuits
Shrinky Circuit is a way to rapidly prototype circuit boards without the conventional PCB /chemical etching procedure involved. More Instructables on this method to come later.
Step 1: (Optional) Color
Color the Shrinky Dinks with colored pencil. Color on a single side should be enough if you are using the translucent shrinky dinks. Color both sides if you are using the white shrinky dinks. Note: that the color will look a lot darker than the colors before shrinking (See image for comparison).
Step 2: Circuit Design and Testing
It is often helpful to test your circuit and components on a breadboard before you insert component into the substrate because once the circuit is shrinked and you find that the circuit is breaking because of a faulty component, there's no going back.
Step 3: Trace, Then Cut Out the Pattern Along the Outer Traced Pattern
Print / trace pattern of your circuit design and the outline for the circuit shape with a pen.
Notes for surface mount LED (Optional) :
If you are using a surface mount LED instead of a through hole component, then you don't need to dig the holes on the component ends. Also, when tracing circuit pattern, the space that you leave for the LED should be 3 times the size of your actual LED to accommodate for shrinking. Using the through hole component enables more flexibility since the wires can bend.
It may be helpful to draw how the LED should be arranged by marking where the tick of the LED (negative end) should be faceing. In this case, they should all be facing in the same direction
(Image Source: phenoptix)
Step 4: Dig Holes for Through-hole Components
Using an exacto knife, dig a hole through where the components embeds into the circuit (i.e. where the lines connects to the components) . Make sure you remove the excessive plastic around the hole so that it doesn't clutter up during heating.
Step 5: Conductive Tracing
Trace the conductive line for the circuit with conductive pen leaving blank space for the surface mounts LED. Give it some time to dry. For powerpads and component holes, draw a round circle around the hole so that during heating a solder blob is formed around the component.
Step 6: Insert Components
Step 7: Heating
Oven bake (Preheat 5 minutes at 275F) or use a heat gun to shrink the substrate. Flatten the substrate before cooling. Generally, oven baking creates a flatter substrate than heat gun due to a more uniform heating. But heat gun can be used to generate more complex substrate shape because you can work the substance while heating.
Step 8: (Optional) Add a Switch
Place a conductive stripe of wire, conductive thread/ tape, aluminum or any conductive thing across the area denoted by the "Switch" component. Then the circuit is done!
Step 9: Test With Power Supply
Test with power supply on the powerpads. You could also insert a battery holder in the two powerpads before shrinking as a power supply.
Step 10: (Optional) Debugging
If your circuit is not working, it could be that the conductive pattern is not working somewhere in the circuit. Test sections on circuits with a multimeter for conductivity. If you find some region that is disconnected, you can patch with some conductive epoxy. Using the conductive epoxy, connect the LED to both sides of the conductive pattern. Leave the circuit sitting for 24hr for the Epoxy to dry up.
|
OPCFW_CODE
|
h a l f b a k e r y
Oh yeah? Well, eureka too.
add, search, annotate, link, view, overview, recent, by name, random
news, help, about, links, report a problem
or get an account
I bought a few "few gig" hard drives at a garage sale the other day for 25 cents each. I know modern hard drives are 100 GB or more, but that would still be a useful amount of space, if you could magically add it to your existing storage. Why can't we build machines with tons of cables coming out with
adaptors of all sizes, and just plug in every old hard drive we can dig out of the garbage? The system would automatically take care of compatibility, combining them into one virtual drive, RAID-type redundancy to take care of drive failures, etc. It would save money for the user and prevent a lot of hardware from being thrown out and the raw materials wasted. The idea could be extended to any type of hardware. Huge banks of 286s working in parallel to achieve the functionality of a web server or something. Each one running constant checks of their own hard drives to watch for failure while the other hard drives are being accessed.
I don't know. This is probably completely unrealistic. I just want to spend $6.25 at garage sales for 25 4 GB laptop hard drives and be able to use them all in one desktop.
A project that uses cheap computers and networks them together to make a supercomputer using linux. [Worldgineer, Oct 04 2004]
(?) Floppy Disk Raid
If this is possible, then perhaps a generic x-usb converter would allow your idea to get baked. [zen_tom, Nov 28 2004]
Network Attached Storage [BunsenHoneydew, Dec 19 2006]
NAS Linux distros
Direct link to subsection of above [BunsenHoneydew, Dec 19 2006]
||Well, an old 6MHz 286 could run at 0.9MIPs and the better Core 2 Duos run up at 20,000, so you would need 22.2k old 286 processors and a lot of wire and electricity to do anyhthing useful.
||On the other hand, a cheap/free 486 tower with a network card, a floppy linux distro and a few IDE cards could make a cheap NAS. [link] The motherboard IDE would support 2 to 4 drives, and each IDE/ATA card another 2 or 4. And you can throw in a SCSI card (7 drives per) or two, and set them all up as one or more RAID volumes.
||You'll spend more on power and burn more carbon though.
|
OPCFW_CODE
|
PATH being overridden
I'm currently running the agent in a Docker container and modifying the PATH environment variable in my Dockerfile follows:
ENV PATH "$GOPATH/bin:/usr/local/go/bin:$PATH"
If I /bin/bash in to the container, I can verify that the PATH is set correctly. However, the actual build tasks themselves seem to be using a different PATH and overriding what my image sets for PATH. The overriding path is as follows as viewed from the Agent Capabilities listing:
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
This requires me to use the fully-qualified path to any executales I run in a Build task.
I'm using v2.99.0-0428 of the agent but this also occurred with v0.7.
Thanks
Agent or tasks doesn't set or override path. Are you running interactively or as a service?
Interactively
Ignore PATH in the capabilities. That does not represent run time. It's config time and we actually shouldn't be registering PATH as a cap
A more interesting test is to add a command line task and run env or echo $PATH. Either that or a shell script task that does the same
So here's the PATH that gets displayed when I run env in a command line task:
2016-05-01T20:24:53.1095510Z PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
And here is the PATH when I run env directly from within the same container that is executing the build:
PATH=/opt/buildagent/go/bin:/usr/local/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
So I'm not sure where the first PATH is coming from...which is what is overriding my desired PATH behavior.
I just verified my build using same approach as @anweiss and the PATH output in build process does match my PATH setting.
I am running agent as a service on OS X
Any suggestions to have the correct PATH value picked up by build process?
Hey @danbridgellc, so does your PATH output in the build process indeed match that of your intended PATH setting? Your last statement seems to contradict this...not sure if you meant to say "does not match"...
sorry you are correct - it DOES NOT match! sorry for typo
so digging into things, I just found that the agent uses a .plist file (at least when it is set to run as a service) and it has Environment variable section...I set the PATH env variable to see if it affected the build and it in fact does! So while not ideal, I can set the PATH in that .plist file and my build now executes completely!
The path to the .plist file for me is $HOME/Library/LaunchAgents
Hope that might help for now...
@danbridgellc - that's correct. when you say yes to svc, we simply create osx service plist and call launchctl to start. That plist has the end. That's why I asked @anweiss if running as svc but he said interactive.
In node agent, we would snapshot your current env into the plist. Not perfect since it doesn't get updates. I will make that change in this agent and output to console when configuring as a svc where to change.
But, that still doesn't explain what @anweiss is seeing. Investigating ...
I've been trying to figure out how to set the path as well since the Xamarin.iOS build step can't find nuget. Adding a Command Line step env outputs PATH=/usr/bin:/bin:/usr/sbin:/sbin. I'm running as a service.
@dbeattie71 - yes, that is how to manage the path when running as a svc. You then need to stop the service (./svc.sh stop ... wait for red in admin console) and then start it (./svc.sh start). Did that work for you?
Yes it did, thanks.
Cool. I'll use this issue to populate the initial plist with the current env to avoid that in most cases.
@anweiss - qq - you mentioned docker. Do you have the issue if you run interactive outside of docker (from plain cli, no service). Trying to tease the issues apart here. thx
@anweiss I could not make a repro using the info from your first post. I tested with an interactive agent (not running as a service) built from the latest code on master and run it on Ubuntu 14.04. I made a build with two tasks: a "Command line" and a "Shell script". In the task scripts I called "env" that prints an environment identical with the agent environment.
We need more info to narrow down. Is it possible Docker is relevant to the issue? Maybe you can try just plain Ubuntu to see if the issue will go away.
hey @bryanmacfarlane and @stiliev ... after a bit more digging, I found that when running the agent in interactive mode on the host itself, it picks up the updated PATH variable without issue. So it seems to be specific to how the agent reads the PATH from within a Docker container.
A docker inspect shows the PATH as I intend it. And more so, if I display the PATH from within a Bash shell in the container, it displays the PATH correctly. However, when the agent runs as part of a build task and displays the PATH, it is different than what docker inspect shows.
To summarize:
PATH has not been overridden for interactive agents running on OSX, Linux, Windows.
we added ability to set the PATH for OSX, Linux agents running as a service with PRs https://github.com/Microsoft/vsts-agent/pull/118 https://github.com/Microsoft/vsts-agent/pull/99
There is an issue with PATH overridden when agent is running in Docker. We do not test or officially support Docker for now, so I'm not able to track this issue down.
Just to clarify - it's not that we don't support Docker. It's that we aren't providing a Docker solution. You are free to create one. Whats clear here is we aren't overriding the PATH and we're persisting in the plist correctly.
|
GITHUB_ARCHIVE
|
The Basics of Python Vector Database
Introduction to Python Vector Database
Python vector database refers to a powerful tool that enables efficient storage, retrieval, and manipulation of vector data in Python. With its array of features and versatility, it has revolutionized the way developers handle geometric and spatial data. Whether you are working on geo-analytics, machine learning, or any project involving vectors, understanding and harnessing the potential of a Python vector database is crucial.
Under the hood, a Python vector database utilizes data structures that efficiently organize and index vector data, providing lightning-fast retrieval and query capabilities. It allows you to perform complex spatial operations, such as distance calculations, spatial joins, and nearest neighbor searches, with ease. Let’s delve deeper into the intricacies of Python vector database technology.
Key Features and Benefits
Python vector database offers a multitude of features that make it an indispensable asset for developers working with vector data. Here are some key features and benefits to look out for:
- Efficient Storage: Python vector database structures optimize the storage of vector data, minimizing memory footprint while ensuring fast access to information.
- Fast Retrieval: Thanks to its clever indexing mechanisms, a Python vector database enables speedy retrieval of vector entities based on various criteria, such as location, attributes, and geometric queries.
- Spatial Operations: Python vector database systems provide built-in functionalities to perform spatial operations, offering capabilities like distance calculations, intersection checks, and topological analysis, which simplify complex spatial analysis tasks.
- Flexibility: With its compatibility with various data formats like Shapefiles, GeoJSON, and PostGIS, Python vector database can seamlessly integrate with existing geospatial workflows and accommodate diverse data sources.
- Scalability: Python vector database systems are designed to handle large-scale vector datasets efficiently, making them suitable for applications ranging from small projects to enterprise-level solutions.
The Applications and Integration of Python Vector Database
Geospatial Data Analysis
In the realm of geospatial data analysis, a Python vector database serves as a fundamental tool for extracting insight and understanding patterns from spatial data. Using Python libraries like GeoPandas and PySAL together with a vector database, you can perform spatial analysis, visualize results, and gain valuable insights from even the most intricate geospatial datasets.
Geospatial data analysts and scientists can leverage Python vector databases to tackle complex spatial problems, such as land-use planning, urban growth modeling, transportation network analysis, and natural resource management. The ability to handle and process large volumes of spatial data efficiently in a Python vector database empowers analysts with unprecedented capabilities.
Machine Learning and AI
A Python vector database can seamlessly integrate with machine learning and artificial intelligence workflows, enabling developers and data scientists to enhance their models with spatial context and analyze complex relationships between features. By combining traditional machine learning techniques with the power of spatial data, Python vector databases become valuable assets for applications like image classification, object recognition, and anomaly detection.
Whether you are building a recommendation system, autonomous vehicle technology, or predictive maintenance models, incorporating a Python vector database helps in training, validation, and inferencing of machine learning models enriched with geospatial attributes. It unlocks a world of possibilities by fusing domain knowledge with machine learning techniques.
FAQs about Python Vector Database
Q: What is a Python vector database?
A: A Python vector database is a specialized database system designed to efficiently store, query, and manipulate geometric and spatial vector data within the Python programming language.
Q: Which Python libraries are commonly used for working with vector databases?
A: There are several popular Python libraries for vector database operations, including GeoPandas, PySAL, and Fiona. These libraries provide a wide range of functionalities for data manipulation, visualization, and spatial analysis.
Q: Can Python vector databases handle large-scale datasets?
A: Yes, Python vector databases are designed to handle large-scale vector datasets efficiently. Their indexing mechanisms and optimized storage structures allow for fast retrieval and query capabilities, even with vast amounts of data.
Q: Can a Python vector database be used for real-time spatial analysis?
A: Yes, some Python vector databases offer real-time capabilities for spatial analysis. These databases leverage efficient indexing techniques and parallel processing to enable speedy analysis and query execution, making them suitable for real-time applications.
Q: How can Python vector databases be integrated with existing GIS workflows?
A: Python vector databases often support popular data formats used in GIS, such as Shapefiles, GeoJSON, and PostGIS. This compatibility allows seamless integration with existing GIS workflows, enabling users to leverage the power of a vector database within their established spatial data pipelines.
Q: Are there any commercial or open-source options for Python vector databases?
A: Yes, there are both commercial and open-source options available for Python vector databases. Some popular open-source choices include PostGIS, SQLite/Spatialite, and MongoDB, while commercial offerings include Oracle Spatial and Microsoft SQL Server with spatial extensions.
Conclusion: Unlock the Potential of Python Vector Database
Python vector databases have revolutionized the way developers handle geospatial and vector data. The efficient storage, retrieval, and manipulation capabilities of these databases coupled with their integration with Python libraries have opened up endless possibilities in fields like geospatial analysis and machine learning.
To harness the power of Python vector databases effectively, explore the multitude of available libraries, experiment with the provided functionalities, and keep up with the latest advancements in the field. Never miss an opportunity to unlock the potential of Python vector databases and propel your projects to new heights.
For more insights on spatial data analysis, advanced GIS techniques, and data-driven decision-making, check out our other informative articles:
|
OPCFW_CODE
|
[Sandbox] IO Flow
Application contact emails
<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>Project Summary
IO Flow is a Cloud-Native, Microservices enabled workflow engine that facilitates creation, execution, monitoring & optimisation of complex business processes..
Project Description
IO Flow presents a remarkable opportunity for CNCF's prestigious sandbox program as it embodies cloud-native innovation and fulfils the demand for efficient workflow management solutions. This cutting-edge, microservices-enabled workflow engine is designed to significantly impact the cloud-native ecosystem by enhancing the creation, execution, monitoring, and optimisation of complex business processes.
Built with a clear focus on openness and industry standards, IO Flow seamlessly integrates with existing CNCF projects through interoperability libraries. Its compatibility with industry standards like MACH (Microservices-based, API-first, Cloud-native, and Headless) ensures seamless communication between various cloud-native tools, fostering a cohesive environment where projects complement each other's functionalities. Moreover, IO Flow adheres to OpenAPI 3.0, promoting a standardised approach to API definition, while embracing BPMN and OCEL to highlight its commitment to industry recognised standards and open collaboration within the cloud-native ecosystem.
IO Flow takes a novel approach to address the unfulfilled needs of workflow orchestration and management by offering a comprehensive feature set that aligns perfectly with CNCF values. With real-time process monitoring and detailed execution logs, it embodies CNCF's focus on observability and data-driven insights. The platform's modular and reusable workflows demonstrate commitment to extensibility and interoperability, supporting CNCF's goal of building a cohesive and collaborative cloud-native landscape. IO Flow's dynamic UI for user tasks enhances the user experience, aligning with CNCF's emphasis on improving user-centric design. The pre-built connector ecosystem further contributes to interoperability, making IO Flow a robust solution that empowers organisations to succeed in cloud-native environments.
IO Flow's clear roadmap includes workflow mining capabilities, visual workflow mining, KPI dashboards, benchmarking, workflow variant detection, conformance analysis, and workflow graphs/models. By integrating these features with its workflow management system, IO Flow goes beyond mere monitoring, facilitating seamless end-to-end workflow management and optimising workflows effectively. With its dedication to innovation and alignment with CNCF values, IO Flow emerges as a standout candidate for CNCF's sandbox program, destined to make a meaningful contribution to the cloud-native community.
Org repo URL (provide if all repos under the org are in scope of the application)
https://github.com/iauroSystems
Project repo URL in scope of application
https://github.com/iauroSystems/io-flow-core
Additional repos in scope of the application
No response
Website URL
https://www.iauro.com/io-flow
Roadmap
https://github.com/orgs/iauroSystems/projects/1
Roadmap context
No response
Contributing Guide
https://github.com/iauroSystems/io-flow-core/blob/master/CONTRIBUTING.md
Code of Conduct (CoC)
https://github.com/iauroSystems/io-flow-core/blob/master/CODE_OF_CONDUCT.md
Adopters
No response
Contributing or Sponsoring Org
https://www.iauro.com/
Maintainers file
https://github.com/orgs/iauroSystems/people
IP Policy
[X] If the project is accepted, I agree the project will follow the CNCF IP Policy
Trademark and accounts
[X] If the project is accepted, I agree to donate all project trademarks and accounts to the CNCF
Why CNCF?
We are eager to contribute IO Flow to the CNCF because of the exceptional value it offers to our project and the broader cloud-native ecosystem. Joining the CNCF provides us with access to a diverse and knowledgeable community, fostering collaboration, knowledge sharing, and technical advancements. The CNCF's emphasis on openness & interoperability aligns perfectly with our project's core principles, making it an ideal fit for seamless integration and collaboration.
By being part of the CNCF, IO Flow gains exposure to industry leaders, experts, and potential contributors, enabling us to refine and strengthen our project through valuable feedback and insights. The CNCF's sandbox program offers a valuable platform for showcasing IO Flow's capabilities, driving adoption, and encouraging community engagement.
The CNCF's sandbox program provides a launching pad for IO Flow, allowing us to demonstrate its unique features and foster meaningful connections within the cloud-native community. Overall, joining the CNCF will strengthen our project and enable us to make a valuable contribution to the cloud-native ecosystem.
Benefit to the Landscape
IO Flow brings significant benefits to the CNCF landscape, presenting a versatile workflow management solution with clear differentiators from existing projects like Argo Workflows. While Argo is specifically designed for Kubernetes, IO Flow extends beyond Kubernetes to cater to various cloud-native environments. Its adoption of BPMN for standardised workflow modelling sets it apart, enhancing compatibility and communication between cloud-native tools. With comprehensive features, real-time monitoring, dynamic UI, and scalability, IO Flow empowers organisations with actionable insights and improved user experiences. Its contribution enriches the ecosystem, optimising cloud-native workflows and empowering organisations in their cloud-native journey.
Furthermore, IO Flow's comprehensive roadmap includes workflow mining capabilities such as visual workflow mining, KPI dashboards, benchmarking, workflow variant detection, conformance analysis, and workflow graphs/models. By seamlessly integrating these features with its workflow management system, IO Flow offers seamless end-to-end workflow management and optimisation. This unique differentiator positions IO Flow as a powerful and innovative solution to address workflow challenges within the CNCF landscape, complementing and advancing cloud-native practices and fostering collaboration within the community.
Cloud Native 'Fit'
No response
Cloud Native 'Integration'
No response
Cloud Native Overlap
No response
Similar projects
https://github.com/camunda
https://github.com/Netflix/conductor
https://github.com/kissflow
Landscape
No
Business Product or Service to Project separation
N/A
Project presentations
No response
Project champions
No response
Additional information
No response
Thanks for this submission @mayur-yambal! I see parallels with Argo Workflows (as you mention) as well as Serverless Workflow.
Since this project enables users to develop and deliver workflow-style processes and applications I think it fits TAG App Delivery like those other two projects.
Could you present IO Flow at an upcoming TAG App Delivery general meeting? Our next opening is likely Sept 16, here's our running agenda/notes doc: https://docs.google.com/document/d/1OykvqvhSG4AxEdmDMXilrupsX2n1qCSJUWwTc3I7AOs/edit
I'll shortly open a tracking issue in our TAG repo too.
cc @thschue
Hi Josh,
Thanks for the update.
Argo Workflows and serverless workflow look similar, but Argo workflows focuses more on Kubernetes flows. And serverless workflow is based on serverless functions.
IO Flow is microservices based process automation targeting any business process orchestration.
Happy to present IO Flow in TAG App Delivery general meeting. Looking forward to joining.
Thanks,
Mayur Yambal
Chief Platform Officer
+91<PHONE_NUMBER> | LinkedInhttps://www.linkedin.com/in/mayuryambal/https://www.linkedin.com/in/yogesh-iauro/
[https://isignimages.s3.ap-south-1.amazonaws.com/Logo.png]
iauro systems Pvt. ltd.
[https://isignimages.s3.ap-south-1.amazonaws.com/Web.png] https://www.iauro.com/
www.iauro.com
[https://isignimages.s3.ap-south-1.amazonaws.com/mail.png]
@.***
[https://isignimages.s3.ap-south-1.amazonaws.com/fb.png]https://www.facebook.com/iauro [https://isignimages.s3.ap-south-1.amazonaws.com/twitter.png] https://twitter.com/iauro [https://isignimages.s3.ap-south-1.amazonaws.com/linkedIn.png] https://www.linkedin.com/company/iauro-systems-pvt-ltd [https://isignimages.s3.ap-south-1.amazonaws.com/Insta.png] https://www.instagram.com/iauro_systems/
This e-mail is confidential. It may also be legally privileged. If you are not the addressee, you may not copy, forward, disclose or use any part of it. If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return e-mail. Internet communications cannot be guaranteed to be timely, secure, error or virus-free. The sender does not accept liability for any errors or omissions. The management of iauro systems Pvt. Ltd. do not accept any liability for mishandling of email account.
[cid:b5854bc7-2410-4a2f-93ea-c114ab81cb21]
From: Josh Gavant @.>
Sent: Thursday, August 10, 2023 8:35 PM
To: cncf/sandbox @.>
Cc: Mayur @.>; Mention @.>
Subject: Re: [cncf/sandbox] [Sandbox] IO Flow (Issue #55)
Thanks for this submission @mayur-yambalhttps://github.com/mayur-yambal! I see parallels with Argo Workflowshttps://argoproj.github.io/argo-workflows/ (as you mention) as well as Serverless Workflowhttps://serverlessworkflow.io/.
Since this project enables users to develop and deliver workflow-style processes and applications I think it fits TAG App Delivery like those other two projects.
Could you present IO Flow at an upcoming TAG App Delivery general meeting? Our next opening is likely Sept 16, here's our running agenda/notes doc: https://docs.google.com/document/d/1OykvqvhSG4AxEdmDMXilrupsX2n1qCSJUWwTc3I7AOs/edit
I'll shortly open a tracking issue in our TAG repo too.
cc @thschuehttps://github.com/thschue
—
Reply to this email directly, view it on GitHubhttps://github.com/cncf/sandbox/issues/55#issuecomment-1673403970, or unsubscribehttps://github.com/notifications/unsubscribe-auth/AAE3GMQJNG2WTNLKV6XEKCLXUT2E3ANCNFSM6AAAAAA3HZSINM.
You are receiving this because you were mentioned.Message ID: @.***>
Thank you @mayur-yambal for presenting to TAG App Delivery! Our writeup, the recording and the deck are all now at https://github.com/cncf/tag-app-delivery/issues/447#issuecomment-1789301258
The IOFlow project follows cloud-native principals and is seeking to enable some cloud-related use cases as well as general business process use cases.
thank you for presenting this project for inclusion. The CNCF TOC has decided that this project should re apply in 6 to 12 months. Once the project has a bit more momentum and adoption.
Closing, project can reapply for June review
|
GITHUB_ARCHIVE
|
Windows 8 offers many advantages compared to Windows XP, Windows Vista and even Windows 7. However, to take full advantage of all the new features in Windows 8, the hardware you run it on needs to be equipped with specific hardware.
The list below details the minimum hardware requirements for Windows 8-powered desktops, notebooks and tablets in detail:
By now, you’ve probably heard of Windows 8 and Windows RT. Windows 8 is the familiar x86-based Operating System you’ve come to love and live with in the past years. Windows RT is a new breed of Windows, specifically designed to run on ARM-based processors.
At first glance, Windows 8 minimum requirements are equal to these of Windows 7. When you look closer, though, Windows 8 demands certain processor-specific technologies. First of all, the processor needs to support the Never Execute bit (NX-bit). It also requires SSE2.
NX is the limiting factor here. named Execute Disable in the world of Intel and is available in its processors since the 64bit-capable 90nm Prescott-based Pentium 4 processors (February 2004). The technology, dubbed Enhanced Virus Protection (EVP by AMD is available since the Opteron (April 2003) and Athlon64 (September 2003).
Coincidentally, both Intel and AMD introduced NX in processors at the same time as their 64bit capabilities in mainstream processors. With the exception of some netbooks, all prospective Windows 8 machines are able to run the 64bit versions of Windows 8.
Windows 8 requires a minimum of 1GB of RAM. This requirement is equal to Windows 7, but, like back in the early days of Windows 7, when you want a Windows 8 machine to run smoothly when you use demanding programs, I recommend 2GB RAM.
When you’re running more demanding programs, even on rigs with 2 GB RAM, you’re likely to run into a performance bottleneck. When Windows needs to allocate more RAM than is physically available, it will use the page file on the hard disk. Since disk storage is slower than RAM, this significantly hits performance. Adding RAM solves this problem.
Also, ReadyBoost, a feature that has been around since Windows Vista, can be used. Instead of using the page file on disk to expand RAM, first a file on a flash drive will be used. Flash drives are most commonly faster than disk storage. When using USB media, make sure it’s at least 256MB in size, USB 2.0 compatible and plugged into an USB 2.0 socket.
Windows 8 is primarily designed for wide screens. Many of the New User Interface elements work particularly well on 16:9 aspect ratio and 16:10 aspect ratio screens. You can install Windows 8 on machines with screens with 1024 x 768 resolutions. These machines will run Windows 8 and will display Windows 8 Apps.
However, it’s a better idea to install Windows 8 on machines with screens with 1366 x 768 resolutions (and higher). When the screen offers at least 1366 x 768 pixels, you can use Windows Snap. With Windows Snap, apps (and the desktop) can be snapped to the side of the screen, allowing multi-tasking between Windows 8 apps and between Windows 8 apps and desktop applications.
Many tablets, that seem prime candidates for Windows 8 from a processor and RAM point of view, are unable to offer Windows Snap, due to their 1024 x 768 and 1280 x 800 resolutions and might offer a reduced Windows 8 experience.
Windows Snap can be enabled on lower resolutions using the Registry information here.
While you might think you can finally reuse your beloved PCI-based S3 Trio64 video card, because it will be able to output a 1024 x 768 resolution, the video card will need to be compatible with DirectX 9.0 or higher. It will need to be Direct3D 9-capable and its manufacturer will have to support its card with a WDDM 1.0+ driver.
Many Nvidia cards are Direct3D 9-capable since the 6xxx series in 2004. ATI's DirectX 9.0 technology was introduced in 2002, released as Radeon 9500–9800, X300–X600, and X1050. As a rule of thumb, you can use your graphic card when it’s newer than 2004. Integrated graphics, like the Intel GMA 950, 3000, x3000, 3100 and 500 families, supported Direct3D-9 from 2006.
When you plan to multi-boot or downgrade the machine to Windows 7, the aforementioned video cards will only make the system ‘Compatible with Windows 7’ and you will need to live without Aero Glass and Flip 3D on Windows 7 (features removed from Windows 8). To enable Aero Glass and Flip 3D, you’ll need a DirectX 10+-capable video card with WDDM 1.1+ drivers.
Everything higher spec’d will not benefit you in a significant way when you plan to merely use the Windows 8 interface. When you plan to play 3D games, however, a more sophisticated video card will make a good Christmas gift.
Hard Disk Space
Microsoft recommends 16GB of free hard disk space for 32bit (x86) installations of Windows 8. For 64bit (x64) installations of Windows 8, a minimum of 20GB is specified. When you upgrade to Windows 8 from a previous version of Windows, you’ll need 7768 MB of free space on the system disk (the disk where the Windows folder lives).
However, depriving Windows 8 from disk space from the get-go, might become a sour decision after a while. It’s fine when you limit the disk space for test machines, but when you’re a few months down the line, you’ll no longer be able to download and install updates, due to low disk space.
Also, by default, Windows will create a hibernation file (when the system supports low power states and a page file. Providing a 20GB disk to Windows on a machine with 8GB RAM, will cause problems.
Still in doubt?
when, after reading the above five points, you’re still in doubt whether you system will be able to run Windows 8, use the free Microsoft Windows 8 Upgrade Advisor.
This tool can be used to scan your hardware, applications and connected devices to see if they'll work with Windows 8. It provides a full human-readable compatibility report at the end. Also, it will check your hardware to see if it supports certain Windows 8 features, like Windows Snap, SecureBoot and multitouch.
You can also use this tool to buy, download, and install Windows 8.
|
OPCFW_CODE
|
This document contains official content from the BMC Software Knowledge Base. It is automatically updated when the knowledge article is modified.
BMC Client Management
Any version of BCM
I need to clean some space on my master and/or relays.
I'm running out of space on some of my master or may relay, how can I free some space?
I want to avoid BCM to use too much space on my master and relays over time.
Some datas are not cleaned up automatically from BCM.
A- Clean old versions of the packages you have published:
This would apply to the master. When you publish a package from your packager it'll create a new folder containing this package in the folder ../master/data/Vision64database/packages. The package will be stored in a sub folder named by the checksum corresponding to this version of the package.
Every time you republish a new version of an existing package it will create a new package in sub folder named by its corresponding new checksum. If you fix or update your packages (regularly or not, several times or not) it can quickly take a lot of space over time because the old versions of the package are not deleted automatically. This is because some devices might still be assigned to the package with its older version.
These older versions of your packages can be deleted by simply going the menu "Tools" in the upper left of the console then "Clean-up Old Packages".
Warning: devices that were already assigned to the package or an operational rule (OR) containing the package before you republished the package will still request the older version of the package. Therefore you'll need to reassign all these devices that have still not executed the package to this package/the OR that contains it.
B- Clean published packages:
This would apply to the master. If you have been using the application for quite a long time, there's a chance you have published quite a few packages that you do not need anymore. You should probably go to the node "Packages" in the console, review the packages there and delete the ones you do not need any more. Do not forget to unassign them properly prior deleting them, as described in the KA 000124835.
C- Clean packages from your package factory:
As explained in this document you can send back any published package to any package creator, so there's no need to keep the package in the package factory where you built it as explained in the KA 000142935.
Do not forget to, at least, delete the package from there as well when you'll delete its publish version from the "Packages" node.
D- Clean packages that have not ben requested for a while:
This would apply to relays. The filestore module (which handles file transfers between the master, the relay(s) and the clients) can be set to delete packages when they havn't been requested by its children for a while.
Don't worry: if a child of this relay requests this package after it has been cleaned-up from the relay filestore, the relay will simply request it again from the master, then the client will be able to downloaded from the relay once the relay will have completed the download from the master again.
To enable this, or to check that it's already set to clean them regularly then go to the relay ..\config\filestore.ini edit it and check the parameter "PackageTTL=". If it's set to "0" then the packages are never deleted from the ../data/filestore/downstream of your relay. Set it to the value that you thing would be acceptable for a package to be deleted from the relay filestore, then restart the service of the agent right way for the setting to be taken into account.
Warning: if you have a poor bandwidth between the master and the relay or between a level 2 relay and its level 1 relay etc you might want to set a big value for this parameter, in order to avoid situations where the relay(s) will frequently redownload packages.
E- Clean patches:
This would apply to the master. The size used by the downloaded patches for XP, a an example, can be very big for customer who started to use the product at a time they still had a lot of Windows XP devices to manage. As all devices have now been upgraded to Windows 7 the space they use on the drive is even more of an issue as they've became useless.
Starting from 12.6 it is now possible to do this automatically at start of the service of the agent. Before this version no built-in method ws implemented yet. See the KA 000114912 for details.
F- Clean the old update manager update files:
This would apply to the master. When the update manager will have downloaded a new update for the security products management and/or for the software catalog the zip files from the previous downloads will never be deleted. This can take a lot of space after a while. See the KA 000120555 for details on how to clean these.
Note: don't worry about the fact that it requires you delete the downstream sub folder and the filestore.sqlite: on the master we only create the folders corresponding to the package names and do not copy the files there as they're already stored in ../master/data/vision64database/packages. Only the patch kb and the update manager updates are stored there, that's why you will have to redownload them after having made this cleanup.
G- Clean the OSD projects and the drivers you do not use anymore:
You should review your OSD projects, images and driver files. Even the drivers can take a lot of space if you imported every drivers you had for a device, without filtering at start. Images can take a lot of space as well and they might be stored on the osd manager itself.
If this did not free enough space you still can:
- divert this how-to and set the repositories on a new/other disk of your master/relay, instead of the actual folder.
- create symbolic links to another disk from the ../data/filestore, ../data/vision64database/packages, ../data/vision64database/patches folders.
See the KA 000129468 for details.
|
OPCFW_CODE
|
Have you ever decided to make a change in a design project, and found yourself working your way through a screen full of artboards updating Every. Single. Button?
Chances are you could have saved yourself a lot of time and frustration – and got a better result – by using the Symbol function in Sketch. In Part 4 of our series of Sketch tips, we’ll give you the lowdown on how to get the most out of symbols.
30. Use symbols for repeated elements
When you’re planning to use symbols in Sketch, the first thing to ask yourself is: which elements are repeated throughout the design? Symbols are a bit like rubber stamps: once you have made one, you can reuse it over and over again. And the best thing about symbols in Sketch is that when you edit the design of the master copy, all the instances of that symbol will automatically update. This can make iterative design processes considerably more efficient.
31. Create a symbol
Symbols are especially handy when using a design pattern – for example, cards in responsive web design. In the example here, the card consists of a circular image, description text, a call to action, and a background fill. Once you’ve arranged these elements how you want them, to transform them into a symbol, simply select them by clicking and dragging a marquee, and then click “Create Symbol” in the Toolbar.
32. Use overrides to customise instances of a symbol
Particularly when using symbols to help with a design pattern like cards, you might want to change the text or images that appear in each instance of a symbol. This can be done using Sketch’s overrides feature. To set overrides, first select an instance of a symbol in your workspace. In the Inspector on the right hand side of the Sketch window, you’ll now see a section called “Overrides”. In this example, there are 4 text fields – just edit the text to change what appears in that instance.
33. Rename layers to be override prompts
To change the prompts that appear when entering overrides, double-click an instance of the symbol. This will take you into master copy view for that symbol. (You can also get to master copies of all your symbols by going to the “Symbols” page in the top-left of the Sketch window.) From here, simply rename the layers within your symbol.
34. Prevent overrides by locking layers
To make sure that no overrides can be applied to a particular element within a symbol, enter the master copy view, and then activate the padlock icon in the layer list. Now, when you select an instance, that layer won’t appear as an override option.
35. Use nested symbols to improve consistency
Design patterns work on multiple levels. For example, we might want to define a single “button” symbol to use across a whole project. An instance of that button symbol can be nested within another symbol, so that if the button is updated, it will automatically update in all instances of symbols that contain it.
36. Symbols can be resized
Instances of symbols can be resized like any other object – but note that changing the size of the master symbol will cause all resized instances of that symbol to snap back to their original dimensions.
37. Detach an instance from its symbol
To detach an instance from its parent master symbol, right click on the instance and select detach from symbol. This will convert the instance back into normal layers. This can be useful if, for example, you want to start working on designing a new symbol based on the same pattern.
Thanks for stopping by! If you’re a Sketch beginner, why not let us know in the comments how you’re getting on?
More Sketch Tips
- Sketch Tips Part 1: Objects, Layers, and Artboards
- Sketch Tips Part 2: Editing and Exporting
- Sketch Tips Part 3: Composition, Light and Shadow
If you’d like to learn the fundamentals of contemporary design, Designlab offers a Design 101 course that combines online lectures, curated resources, hands-on exercises and expert mentor support. Find out more about Design 101.
As a Designlab student, you can also get 50% off the price of a Sketch license. Find out more about student perks.
|
OPCFW_CODE
|
Introducing full guest access in Microsoft Teams revolutionized the whole concept of team collaboration, which is even more important these days when the need of remote work tools has increased. Now, you can invite anyone with an email to join your team, collaborate with you, and even create channels on their own. As great as that is, you need to be cautious when giving people outside your company access to your content. That’s why we’ve prepared this Ultimate admin guide to Microsoft Teams guest users for you.
Who can be a guest user in Microsoft Teams?
This year, Microsoft launched a full guest access in Microsoft Teams. This is a huge improvement in the sense of collaboration, meaning that you don’t need to have a Microsoft account to be invited as a guest user anymore. You can invite:
- Anyone with an Office 365 subscription;
- Anyone with any type of email address, such as Outlook or Gmail.
What can Microsoft Teams guest users do?
So, what are guest users allowed to do? The following table lists the features available to guest users, compared to authenticated Teams users:
Microsoft Teams guest users capabilities
As you can see, some features are not available to guest users, but those that are, are sufficient for basic collaboration. You can even invite guest users to your team meetings via a link. That means no more entering email accounts or signing in – just a simple click and you’re ready to go. When they accept the invitation, guest users are placed in a lobby where they wait for an authenticated participant to admit them. This is a security step before final acceptance in a meeting.
However, there are some limitations to meeting features for guest users. Guest participants don’t have access to Files, Chats or Activity. They can only participate in audio conversations, without the option to send instant messages or send and receive files. Guests cannot share their camera or screen, but they can view other members’ shared screens. The options of this feature are still in development, so we can expect to have more options soon.
UPDATE: All the above-mentioned options of meetings are now available to guest users, but for now, only in the desktop app.
Setting up guest users’ access for Microsoft Teams
Before you can add guest users to your teams, an Office 365 global admin must enable the guest option. According to Microsoft documentation, an admin can set the guest access option on four levels of authorization inside the Office 365 tenant:
- Azure Active Directory (AAD): Guest access in Microsoft Teams relies on the Azure AD business-to-business (B2B) platform. Controls the guest experience at the directory, tenant, and application level.
- SharePoint Online and OneDrive for Business: Controls the guest experience in SharePoint Online, OneDrive for Business, Office 365 Groups, and Microsoft Teams.
- Office 365 Groups: Controls the guest experience in Office 365 Groups and Microsoft Teams.
- Microsoft Teams: Controls Microsoft Teams only.
UPDATE: To enable guest access on the Microsoft Teams level, an admin must:
Sign in to the Microsoft Teams admin portal (https://admin.teams.microsoft.com)
In the navigation menu, choose Org-wide settings and select Guest access
Click on the toggle next to Allow guest access in Microsoft Teams
It takes 2-24 hours for changes to be effective. So, if you see a message “Contact your administrator” when you try to add a guest to your team, it’s likely that the settings haven’t become effective yet.
Dear reader, this is the functionality of our former product, SysKit Security Manager. Check out our new cloud-based Office 365 governance solution, SysKit Point, to monitor user activity, manage permissions, make reports, and govern your users and resources.
In AAD, a global admin can choose, on a global level, who will be able to invite guest users to an organization:
- Directory admins and users in the guest inviter role;
- AAD members;
Inviting guest users to Microsoft Teams
According to Microsoft docs, an Office 365 global admin can add a new guest user to the organization in a couple of ways:
- Through the Microsoft Teams desktop or the web clients, if a global admin is also an owner of a team. This is a more intuitive and faster approach since the admin is already in the team to which he wants to invite guest users.
- Through Azure Active Directory B2B collaboration. The global admin can invite and authorize a set of external users by uploading a comma-separated values (CSV) file with up to 2,000 lines to the B2B collaboration portal.
Adding guest users through Azure AD
If global sharing settings allow, a team owner or member can invite guest users, too. They can do it in a couple of ways:
- Through the Microsoft Teams desktop or web application;
- Through the Azure AD Application Access Panel, if a global admin has delegated this option to group or application owners.
Adding guest users inside a team
Depending on the applied external sharing settings, it’s possible that your global AAD admin needs to invite the guest user to the organization before a team owner or member can invite users to the team.
Viewing guest users in Microsoft Teams
Every member can view other members of their Team, including guest members, by clicking the Manage team option.
Manage Microsoft Teams guest users
UPDATE: A global admin can view all the guest users in all the Teams in the tenant. However, they can only see guests that are added as members. Meaning that if a user shares a file directly to people outside the organization, they are not listed as guests. So, it’s not exactly a polished way of tracking your guest users. With SysKit Point, you are able to view all Microsoft Teams guests in your tenant, no matter how they were added to your Teams.
Restricting guest users
You can restrict guest access in Microsoft Teams by using Windows PowerShell. You have three options at your disposal:
- Allow or block guest access to all teams and Office 365 groups;
- Allow users to add guests to all teams and Office 365 groups;
- Allow or block guest users from a specific team or Office 365 group.
In addition to those three options, you can allow or block guest users based on their domain. It is the same procedure that you need to follow when allowing or blocking guest users in Office 365 Groups. The downfall is that this option is only available to those with Premium AAD license.
Webinar Microsoft Teams Behind the Scenes – Recording
If you’re still not sure what Microsoft Teams can bring you, and how it affects your Office 365 administration, this is the right webinar for you! Apply for the recording here.
What we’ll cover:
- Microsoft Teams permissions governance
- Microsoft Teams settings
- Microsoft Teams guest access and how to control it
SysKit Point— A Centralized Office 365 & Teams Reporting Tool
SysKit Point brings Microsoft Teams reporting and management. It helps you:
- Reach an optimal level of productivity if you use Microsoft Teams for remote work.
- Discover Teams in your tenant and associated Office 365 Groups.
- Find out who your Team Owners, Team Members, and Guest Members are.
- Check Teams’ related audit logs for a custom time period.
- Remove guest users and sharing links from Teams with one click.
|
OPCFW_CODE
|
Get answers to cloud and mobile programming questions that have been bugging you
Get to know your fellow Autodesk cloud and mobile programmers and members of the Autodesk cloud and mobile engineering team
Ask your programming questions of our panel of hardcore cloud and mobile experts from our software development teams. If you are writing solutions that are based on these technologies or you are just about to start and want to know more, then this is the perfect forum to get to know the people who create the APIs and services you work with and your fellow programmers who use those APIs. Come and ask questions, add your expertise to the discussion, or just listen and learn.
Programmers with intermediate or advanced knowledge of programming for cloud and mobile who like to talk about programming and new Autodesk technologies in that area
Cyrille has been with Autodesk since 1993 focusing on providing programming support, consulting, training and evangelism to external developers. He has worked for Autodesk in a number of countries: he started his career in the Switzerland and has - so far - working from Switzerland, the United States and France. He and his family have now settled back in Brittany the home land of Cyrille's wife. Cyrille's current position is Manager of the ADN Sparks (or ADN M&E), the worldwide team of API gurus providing technical services through the Autodesk Developer Network.
Chris Andrews has been a technology evangelist and proponent of standards-based enterprise architectures and processes for over ten years, frequently working in the municipal and utilities industries. As Senior Product Line Manager for the Media and Entertainment Cloud Services at Autodesk, Chris is focused on gathering customer requirements and delivering solutions that will help customers in the film, tv, and game industries. Chris' work has taken him from requirements gathering at the Kennedy Space Center to architecting and building geospatial SaaS applications at startup companies in San Francisco. Chris has actively published on 3D technology and enterprise software architecture. In his spare time, Chris also participates as a board member and activist for a local medical-related nonprofit organization.
Philippe has a master's degree in Computer Sciences. He carried his studies in Paris at I.S.E.P and in USA, at Colorado School of Mines. He started his career as software engineer for a French company where he participated to the implementation of a simulator for the French Navy combat frigate Horizon. He joined Autodesk 7 years ago where he works as developer consultant for Autodesk Developer Network. He supports various products APIs such as AutoCAD®, AutoCAD Mechanical®, and Inventor®. He also focuses on Cloud and Mobile technologies. He likes to travel, meet developers from around the world to work with them around programming, CAD and manufacturing challenging topics. During his free time, Philippe enjoys doing sport, especially swimming, running, snowboarding or trekking in Swiss mountains, where he is living now.
Ron Meldiner is the SWD manager of the AutoCAD 360 Mobile dev team and has been a part of the AutoCAD 360 team for the last 2 years. Prior to his time in Autodesk, Ron served in the Israeli army for six years as a software engineer and commanded of an R&D team that developed military intelligence analysis solutions. Ron holds a B.Sc. in Software Engineering and a M.Sc. in Computer Science.
Adam Nagy joined Autodesk back in 2005 and has been providing programming support, consulting, training and evangelism to external developers. He started his career in Budapest working for a Civil Engineering CAD software company, then worked for Autodesk in Prague for 3 years and now lives in South England, UK.At the moment focusing on the manufacturing products, plus cloud and mobile related technologies.Adam has a degree in Software Engineering and has been working in that area since even before leaving college.
Stephen Preston is Senior Manager for the Worldwide Developer Technical Services team - the team responsible for evangelizing and supporting the APIs for Autodesk cloud and desktop platforms. Stephen started his career as a scientist, and has a D.Phil. in Atomic and Laser Physics from the University of Oxford.
Doug Redmond has been a software engineer for Autodesk for over 13 years. In that time, he has worked on four separate product lines: Streamline, Vault and Productstream Professional and PLM 360. He has worked on Vault for over 8 years, developing features and overseeing the APIs. Doug is the author of "It's All Just Ones and Zeros" which is a blog on Vault API development. Check it out at http://justonesandzeros.typepad.com/
Gopinath is a member of the Autodesk Developer Technical Services Team. He has more than nine years of experience developing and supporting AutoCAD® APIs, including ObjectARX®, Microsoft® .NET, VBA and LISP. Gopinath also has several years of experience in software development on other CAD platforms, including MicroStation®, SolidWorks®, and CATIA® mainly using C++ and technologies such as MFC and COM. Gopinath was also involved in the development of Web-based applications for Autodesk® MapGuide® and AutoCAD Map 3D. Currently Gopinath is working with AEC products (Revit, ACA) and cloud based solutions inside Autodesk.Gopinath has master's degrees in Civil Engineering and Software Systems.
Kean has been with Autodesk since 1995, working for most of that time in a variety of roles—and in a number of different countries—for the Autodesk Developer Network organization. Kean's current role is Software Architect for the AutoCAD family of products, and he continues to write regular posts for his popular development-oriented blog, "Through the Interface" (http://blogs.autodesk.com/through-the-interface). Kean currently lives and works in Switzerland.
|
OPCFW_CODE
|
Hi, I'm looking for a finnish HR recruiting professional to do job application instructions for young workers. More detailed instructions in Finnish bellow. Eli haluaisin luoda nuorille työnhaku tietopankin. Tämä ensimmäinen projekti olisi ohjeet asiakaspalvelu aloihin hakemiseen. Ohjeet yleisimpiin haastattelu ja hakemus kysymyksiin vastaamiseen ja malli vastaukset. Ets...
data entry jobs nnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn
as discussed? gkyfkyfkmhdcng ngxdsnjtsjytdjydc gdjdj hdkmyhdcmhd hfkyfkyhf hydjydkjyfdk kmjyhdkydfkmyhhfkm,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,hvcfyhdjyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
To help us test 3d files in Unity according to attached checklist, we need a tool inside Unity Editor to check files in a folder for quality assurance and display the flaws, so our 3d modeler can fix those. Skills required: Unity 3d C# Gamedevelopment
Looking for part time models for an adult website (launching soon). This is a work-from-home based project. PM us for more information. Looking for open minded and cheerful models. With or without experience can apply.
This about the long Personal call Conversations with our clients who have enrolled with us. This is based on different issues and topics which actually is about the life. Some Skills Required 1. Voice Modulation 2. Consumer Care 3. Comfortable Attitude 4. Different Topic Conversation
Hi, There are 300 emails that i have received in my inbox for my enquiry about guest posting cost on their blogs. You have to read through the emails and prepare the following excel table :- 1) Sitename 2) Price Per Guest blog 3) Site email (The from email address) 4) Site domain authority (You can check this on the internet) I preferred new freelancers. (Bonus: European, American)
I need a logo designed.
Hello, I saw your profile and I would like to have a medieval figurine recreated using 3D-modelling. I just started building my business which will specialize in bronze art, for that I need a reliable partner who can design models for me. If you are interested I can send you some photos of a small candlestick.
Looking for body parts models for an adult toys virtual shop. Products will be delivered to you via registered post. You will need to model with it (no facial shot) and take nice (professional-like) photos. Simple and easy. And you get to keep the product. Models with experience or without experience can apply. Private message with your interest or any query. Send in your proposal. Thank you.
Hi, we have a project to be developed on AIML based health tech.. wherein, we want to combine several medical ML related codes/application on one health-tech based platform The projects are like skin cancer detection, malaria through cell images, ECG readers and others. We want to have around 20+ models built. Few of the codes can be referred to GitHub/kaggle, for others, python-based programs m...
|
OPCFW_CODE
|
In versions prior to 4.3 you could include a Resource in the Tomcat server.xml and then an entry in the entityEngine.xml file and access the data source through ConnectionFactory.getConnection( dataSourceName ). Per the notes at http://confluence.atlassian.com/display/JIRA/Plugin+Developer+Notes+for+JIRA+4.3#PluginDeveloperNotesforJIRA43-AccessingdelegatorsconnectionsanddatasourcesinOfBiz this is no longer supported. I cannot determine how our plugin should access a database outside of the JIRA database. Please help!
Yeah, sure - firstly in your server.xml, add a Resource
<Context path="/jira" docBase="/src/jira50/classes/artifacts/jira" workDir="/src/jira50/target/work" reloadable="false" useHttpOnly="true"> <Resource name="UserTransaction" auth="Container" type="javax.transaction.UserTransaction" factory="org.objectweb.jotm.UserTransactionFactory" jotm.timeout="60"/> <Resource auth="Container" driverClassName="org.postgresql.Driver" name="jdbc/otherDS" username="jirauser" password="jirauser" maxActive="20" type="javax.sql.DataSource" url="jdbc:postgresql://localhost:5432/otherDB" /> <Manager pathname=""/> </Context>
Then in your entityengine.xml simply create a datasource element that references this resource, you can see I simply ignored the dire warnings :-) As we're not going to get ofbiz to manage the connection it is important to set all the checks to false.
DATASOURCE You should no longer define a datasource in this file, the database is now configured through the UI at setup time. The only time you would want to configure it here is when you migrate from an older version and need to point the new installation at an existing db. This is considered a legacy method and will not work if dbconfig.xml exists in the home directory. --> <datasource name="otherDS" field-type-name="postgres72" schema-name="public" helper-class="org.ofbiz.core.entity.GenericHelperDAO" check-on-start="false" use-foreign-keys="false" use-foreign-key-indices="false" check-fks-on-start="false" check-fk-indices-on-start="false" add-missing-on-start="false" check-indices-on-start="false"> <jndi-jdbc jndi-server-name="default" jndi-name="java:comp/env/jdbc/otherDS"/> </datasource>
Now fro your Java code you can obtain a JDBC connection using
Officially moving forward, we want plugin developers to use Active Objects to access databases, but this doesn't help you a great deal, as the default behaviour here is to use the JIRA database. If you want to use an external database you can still add it to a Resource section, and you still have access to the entityengine.xml, and although the documentation no longer suggests that this works, ConnectionFactory is still available to you. There are some problems with this approach though, the main JIRA database is no longer managed through the resource pool but via direct JDBC, so I'm a little unsure if you could get transactions to span from the JIRA db to your db.
In this case I don't need to worry about transactions. I am reading an external database to pull in data to set some custom JIRA fields so the other database is not updated in my plugin, it is read only. Could you possibly elaborate a little more on the solution of using the ConnectionFactory or how I would get another data source from the entityengine.xml? Its not very clear to me. Thanks!
In a world of dark-scrum, faux-scrum, and scrum-butt, the question still remains: What is scrum and how do you do it “right?” That’s the question we set out to answer. I'm Max, I've been teaching c...
Connect with like-minded Atlassian users at free events near you!Find a group
Connect with like-minded Atlassian users at free events near you!
Unfortunately there are no AUG chapters near you at the moment.Start an AUG
You're one step closer to meeting fellow Atlassian users at your local meet up. Learn more about AUGs
|
OPCFW_CODE
|
Finding most remote spot in Eastern United States?
I have this hunch that I could travel no more than 50 miles from the most remote spot in the Eastern United States (east of the Mississippi River), in the direction of the nearest road, and find a road.
Definitions: Most Remote: Spot furthest from a road. Road: Google Maps definition of a road.
How could I prove or disprove this claim (i.e where is the most remote spot in the Eastern US)?
How is 'remoteness' defined? http://www.business.otago.ac.nz/sirc/conferences/1999/23_Dunne.pdf (population, road type, islands) Remoteness Classification.
Unfortunately, the above link is no longer working.
A fast and informative way is to create a distance grid based on the roads. This is usually done in a projected coordinate system, which necessarily introduces some error, but by choosing a good coordinate system the error will not be too great (and can be corrected).
The following example defines a "road" as a US Interstate or US or state highway of comparable magnitude. These roads are shown as red polylines. It uses a Lambert Conformal Conic projection. Although its metric distortion can readily be corrected in terms of latitude, that's not really necessary in this example because the distortion is less than 0.6% except in Florida, where it grows to 2.3%: good enough for this illustration.
The distances are color coded from dark cyan (short) through yellow (long) and hillshaded to emphasize the local maxima. A glance shows the greatest distances are attained in central Wisconsin and the North Carolina coast. The GIS tells me the maximum distances attained are 194 km and 180 km, respectively. (The maximum attained in Michigan is 120 km, less even than the maximum in central Mississippi, 137 km.)
Using any raster GIS (such as ArcGIS, GRASS, Manifold, etc.) one can perform a similar computation using any roads layer desired (such as Census TIGER streets features). Straightforward post-processing will find all local maxima of the distance grid (seen as peaks on this map), thereby identifying all points that locally are as far from a road as you can get. Very simple post-processing will identify all points exceeding a distance threshold such as 50 miles (about 80 km).
A variant uses a "costdistance" calculation, instead of Euclidean distance (as a proxy for spherical distance), to determine points that are (say) a maximum travel time from the nearest road. This is not an onerous task: typical computation times are a few seconds (at most) at the 1 km resolution used here.
Nice suggestion - I think this would be a lot quicker than buffering also.
@celenius You're right. Buffering works but it's less informative and is not flexible enough to answer the questions a distance map invites us to ask, such as "where are the almost remotest points" and "how do I adjust the calculation to preclude travel over large bodies of water," etc.
This is a wonderful place to start. I have no knowledge of GIS applications so some of the technical jargon lost me. (SE Math sent me here), but this response has pointed me in the right direction. From here, I'll get a Delorme map of Wisconsin, Mississippi, and North Carolina and use a compass and colored pencils.
How could I prove or disprove this claim?
Take the road network (TIGER data?) and buffer it with 50 mile radius. You'll see if any land masses are not within the buffer zones.
Where is the most remote spot in Eastern US (spot furthest from a road?)
Iteratively increase the buffer radius until you've narrowed it down.
|
STACK_EXCHANGE
|
In this post I will discuss 2 issues you might encounter when configuring an SSL certificate on Azure Web App.
- The certificate does not show in the drop-down.
- Browsing to your website over SSL / HTTPS you receive a security warning and the Azure Website wildcard certificate is delivered.
##The certificate does not show in the drop-down
You might encounter an issue after uploading an SSL certificate and when attempting to configure SSL, the certificate doesn’t show up in the CHOOSE A CERTIFICATE drop-down in the SSL BINDINGS section. Similar to that shown in Figure 1.
Figure 1, SSL certificate on Azure Web App does not show in drop-down
Firstly, you must have a custom domain linked to the Azure Web App in order to configure an SSL BINDING. Looking at Figure 1, I indeed have that. Secondly, the domain for which the certificate is created for, must match the custom domain. I look at the details of my certificate and see this is the case, as shown in Figure 2.
Figure 2, an SSL certificate for Azure Web App
So why doesn’t it show in the drop-down? One common reason is that the domain to which the certificate is for is not configured for the Azure Web App. Reconfirm that the custom domain name in which you have bound to your Azure Website exists in the SSL certificate you have uploaded. For example, as shown in Figure 2, open the certificate, click on the Details tab and look at the domain the certificate is for in the Subject Field, does it match the custom domain you have bound to your Azure Website? Additionally, you may have a certificate which support Subject Alternative Names (SAN). Scroll down to that field and confirm that your custom domain bound to your Azure Web App exists in the list.
A second common reason is missing or not supported intermediate certificate. Here is a very nice overview of this, which I also provide a link to below. Point is, when you export your certificate for upload to an Azure Web App, make sure you include and intermediate certificate.
The next issue discussed here has to do with wildcard certificates. I learned what I needed to know here. The point here is that if you get a wildcard certificate, for example like, *.contoso.com, it does not mean that it works for infinity as you move further away from the TLD.
Actually, the ultimate answer is that there are a lot of reasons for this, but what I can recommend is that you create a test certificate and upload it and then test again. By doing this you will be certain that there is no issue with the platform, rather the issue lies with the certificate itself and its ability or supportability on the Azure Web App platform.
I wrote some instructions on how to create a certificate using MAKECERT here. I have looked here, which discusses using a more current method for creating a certificate, it looks good, but have not personally used it yet. If after creating the test certificate, you upload it and you do see the certificate in the drop-down, Figure 3, there is something in your official certificate that is not supported or causing some problems.
Figure 3, SSL certificate on Azure Web App shows up in drop-down
You might want to check these locations for some hints as to why your certificate is not working.
You might be able to save some time (some days) if you perform this action as then you’d know better which support team to contact, if required.
Security warning and wildcard certificate
If you access your Azure Web App URL (https://??.azurewebsites.net) using HTTPS you will receive the Azure Web App wildcard certificate. If you have a custom domain without a successfully configured SSL certificate, accessing the custom domain using HTTPS would result in that shown in Figure 4.
Figure 4, Certificate Error – Azure Web App – *.azurewebsites.net
This is expected behavior if your certificate is not configured correctly or invalid. For example, you may have configured SNI based SSL which requires ‘modern’ browsers. If this is the case, try using IP Based SSL to see if the issue remains.
In regards to Wildcard certificates, something which I mentioned earlier, a common reason is a misunderstanding of the wildcard certificate requirements and limitiations. I learned what I needed to know here. The point is that if you get a wildcard certificate, for example like, *.contoso.com, it does not mean that it works for infinity as you move further upwards and away from the TLD. For example, a wildcard certificate in this format *.contoso.com will work for all subdomains, like admin.contoso.com, my.contoso.com, but will not work with ssl.admin.contoso.com or billing.my.contoso.com, you would need a wildcard certificate for *.admin.contoso.com and *.my.contoso.com, to support both the subdomain names.
|
OPCFW_CODE
|
Join top executives in San Francisco on July 11-12, to hear how leaders are integrating and optimizing AI investments for success. Learn More
The landscape for generative AI for code generation got a bit more crowded today with the launch of the new StarCoder large language model (LLM).
StarCoder is part of the BigCode Project, a joint effort of ServiceNow and Hugging Face. BigCode was originally announced in September 2022 as an effort to build out an open community around code generation tools for AI. The StarCoder LLM is a 15 billion parameter model that has been trained on source code that was permissively licensed and available on GitHub.
The model has been trained on more than 80 programming languages, although it has a particular strength with the popular Python programming language that is widely used for data science and machine learning (ML).
Market heating up
The effort to build an open generative AI code generation tool brings new competition to OpenAI’s Codex, which powers the GitHub co-pilot service, as well as efforts from other vendors including Amazon’s CodeWhisper tool. Both OpenAI and Amazon tools are based on proprietary code, whereas StarCoder is being made available under an Open Responsible AI Licenses (OpenRAIL) license.
Join us in San Francisco on July 11-12, where top executives will share how they have integrated and optimized AI investments for success and avoided common pitfalls.
“There are powerful code models out there, but they are all closed source, nobody knows exactly how to train them,” Leandro von Werra, ML engineer at Hugging Face and co‑lead of BigCode, told VentureBeat.
Von Werra added that the idea behind BigCode and StarCoder is to build powerful code generation models in the open. While the effort is led by Hugging Face and Service now, he emphasized that there is an active community of approximately 600 people in the community that are contributing to the project’s success.
BigCode is spiritual successor of BigScience
The BigCode effort isn’t the first time that HuggingFace has helped to build a community to open up AI development.
Von Werra called BigCode the ‘spiritual successor’ of the BigScience effort, which got started in 2021. In 2022, the BigScience Large Open-science Open-access Multilingual Language Model (BLOOM) was released, providing a multi-language text generation model intended to be an open alternative to OpenAI’s GPT-3.
With StarCoder, the project is providing a fully-featured code generation tool that spans 80 languages. Harm de Vries, lead of the LLM lab at ServiceNow Research and co‑lead of BigCode, explained to VentureBeat that StarCoder can be used in a variety of scenarios. For example, he demonstrated how StarCoder can be used as a coding assistant, providing direction on how to modify existing code or create new code.
The StarCoder LLM can run on its own as a text to code generation tool and it can also be integrated via a plugin to be used with popular development tools including Microsoft VS Code. Von Werra noted that StarCoder can also understand and make code changes. For example, a user can use a text prompt such as ‘I want to fix the bug in this function’ and the LLM will do just that.
Why explainable AI needs an open license
A critical aspect of StarCoder and the BigCode effort in general is that the technologies are all available under an open license.
A key challenge for organizations deploying AI today is the need for explainable AI, where it is possible to understand how and why a model made certain choices and decisions. A related challenge is the need to ensure that AI is used responsibly and doesn’t cause harm to people via toxic content or malware. To help solve those thorny issues, BigCode is using OpenRail licenses and for StarCoder in particular, the Code Open RAIL‑M license.
“We know these models are very powerful and we want to make sure that they’re used for good use cases and not for use cases which will have bad implications,” said De Vries.
The Code Open RAIL‑M license allows users to see the code inside the model with a restrictions intended to prevent code from being misused — such as using it to generate ransomware or a social engineering attack.
“It’s completely open like an open source license,” said De Vries. “It just comes with the restrictions that make sure we stick to our responsible AI principles.”
VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.
|
OPCFW_CODE
|
Dictionary with custom data-type values, unable to update the values
I am trying to iterate over a DataTable and separate out whether an opportunity resulted in a sale or not - but there are multiple avenues for the opportunity to happen, and I want to organise my output by user.
I have a dictionary, the key being the user and the value field being a custom data structure. When adding keys/values initially things seem to behaving normally, but when updating values that already exist in the dictionary, the changes are not kept. My custom data type appears to be updating it's value internally, but when the function call exits, the value of the dictionary remains unchanged.
Data structures being used are:
Public Structure ColleagueSlogs
Public Call As LoggedItems
Public Email As LoggedItems
End Structure
Public Structure LoggedItems
Public SalesLogged As Integer
Public Sub IncrementSales()
Me.SalesLogged += 1
End Sub
Public NonSalesLogged As Integer
Public Sub IncrementNonSales()
Me.NonSalesLogged += 1
End Sub
End Structure
The calling function, slightly simplified for clarity:
Protected Function SortData(ByVal data As DataTable) As Dictionary(Of String, ColleagueSlogs)
Dim tmpDict As New Dictionary(Of String, ColleagueSlogs)
For Each result As DataRow In data.Rows
Dim tmpName As String = result.Item("UserID")
If tmpDict.ContainsKey(tmpName) Then ''This block does not update correctly
'this exists - increment the relevant variable
Select Case result.Item("Origin")
Case "Call"
Select Case result.Item("Sale")
Case "Yes"
tmpDict(tmpName).Call.IncrementSales()
Case "No"
tmpDict(tmpName).Call.IncrementNonSales()
End Select
Case "Email"
Select Case result.Item("Sale")
Case "Yes"
tmpDict(tmpName).Email.IncrementSales()
Case "No"
tmpDict(tmpName).Email.IncrementNonSales()
End Select
End Select
Else ''This block works as expected
'create data structure, increment the relevant var and add it to dict
Dim tmpSlogs As New ColleagueSlogs
Select Case result.Item("Origin")
Case "Call"
Select Case result.Item("Sale")
Case "Yes"
tmpSlogs.Call.IncrementSales()
Case "No"
tmpSlogs.Call.IncrementNonSales()
End Select
Case "Email"
Select Case result.Item("Sale")
Case "Yes"
tmpSlogs.Email.IncrementSales()
Case "No"
tmpSlogs.Email.IncrementNonSales()
End Select
End Select
tmpDict.Add(tmpName, tmpSlogs)
End If
Next
Return tmpDict
End Function
tmpDict(tmpName).Call.SalesLogged += 1 is a value and cannot be the target of an assignment - Is there a way to prevent the values of the dictionary from behaving like ReadOnly values?Would the issue lie in the definition of my custom data types? Or should I be looking for a different approach to the problem altogether?
possible duplicate of Modify Struct variable in a Dictionary
|
STACK_EXCHANGE
|
It is critical to know the electron dose when doing minimal dose imaging of frozen, hydrated biological samples in order to minimize the radiation damage caused by each exposure. There are also times when one needs to control the total electron dose that a specimen receives during data collection for electron tomography (especially cryo electron tomography) or when performing radiation damage studies, It is also generally a good idea to have some estimate of the electron dose that any specimen receives, since it is easy to not see damage in hard materials until and unless one is specifically looking for it. Depending upon how a particular electron microscope has been calibrated, there should be a variety of ways for a user to estimate the dose that a particular sample experiences.
Estimating the Electron Dose in TEM
For example, a calibration procedure using a Faraday cup was used to measure the beam current in the Electron Microscopy Center's JEOL JEM 3200FS. The beam current was systematically varied (using all possible combinations of the different condenser apertures and spot sizes - in the plot to the left, spots sizes from 1 through 5 are color coded and occur in groups of four as the condenser aperture grows smaller and smaller) and measured with the Faraday cup. Knowing the beam current for these combinations then allows the user to predict the electron dose a specimen receives by estimating how much of the beam actually interacts with the specimen. For example, it should be possible to estimate the fraction of the beam that is captured by a CCD camera when the electron beam is spread to cover the large phosphor screen on any given TEM. Since the Faraday cup provides the total electron dose for any combination of spot size and condenser aperture, knowing this fraction of the total screen area would allow a user to estimate the total number of electrons that interacted with the specimen that is recorded in any image. If the specimen of interest (say a metallic nano-particle) only occupied a fraction of the recorded image, it would also be possible to determine what fraction of the beam interacted with a particular particle.
Although such Faraday cup measurements can be used to determine the strength of the electron beam in a given image (provided that the imaging conditions were carefully controlled), a more important question is the electron dose per unit area that the specimen experiences. One can extrapolate from total beam current to electron dose per unit area provided the area illuminated by the electron beam is known (and most CCD cameras will have a calibrated pixel size at all available magnifications). If one knows the total electron dose in an entire image, it should be possible to divide by the area of the image and generate a dose per unit area (measured in anything from coulombs per square centimeter to electrons per square Ångstrom). This is fundamentally just an extension of what was described in the preceding paragraph, and again depends on knowing exactly how an image was acquired and calculating afterwards what dose the specimen received.
A more practical approach is to use the Faraday cup readings and to record simultaneously images with a particular camera (e.g., the Gatan UltraScan 4000 on the EMC's 3200FS). Such images must be recorded without a specimen in the electron beam path (so no scattering from the sample occurs) and such that the entire beam only covers a portion of the CCD frame (so that the total beam current is recorded in the image, meaning that the Faraday cup's reading can be directly related to counts in the image). Since the total beam current, the pixel size at any given magnification and the exposure time of these images are known, it is possible to determine the number of electrons that interacted with the CCD and to determine the average counts per electron that a particular camera produces. For the Gatan UltraScan 4000 on the EMC's 3200FS, that number is about 6.7. Once this value is known, it is then possible to determine the electron dose simply by recording any image where there is nothing in the beam path, and relating the average counts in such an image to the electrons per pixel (and then per nm2 or Å2).
Similar calibrations are often built into various pieces of software such as Gatan's DigitalMicrograph (DM) and serialEM (the tomography program from the Boulder Laboratory for 3-D Electron Microscopy of Cells). For example, DigitalMicrograph can be configured to display "calibrated" images, where image dimensions and distances between features are measured in µm or nm and pixel values are reported in electrons. When DM displays uncalibrated images, distances are reported in pixels and pixel values are reported in CCD counts. When DM was installed on the EMC's 3200FS, calibrations were done to determine pixel sizes at the different magnifications and a number to convert counts into electrons. However, that calibration of counts per electron was ~12 whereas recent work with a Faraday cup indicates that the value should be ~7. In addition, the values for electrons/pixel that DM generates reflect the number of electrons that actually interact with the CCD and not what was experienced by the specimen. In other words, electrons that were scattered out of the final image or removed from the image by the energy filter cannot appear in the CCD record but were part of the incident electron dose that the specimen received. That means that a dose estimated from the "electrons" that appear in the image will always be an underestimate of the incident dose (and in commonly encountered situations, this can be a significant underestimate).
serialEM also makes use of such calibrations. Pixel size calibration is an explicit part of the software installation and is straightforward enough that in most circumstances, serialEM pixel sizes are likely to be more trustworthy than DM pixel sizes. serialEM also creates and uses its own gain and dark references, and is able to correlate the settings of the electron microscope with these reference images and with a conversion factor between counts and electrons to estimate the dose of the incident electron beam during tilt series acquisition. This estimate is not based on the counts present in acquired images, but rather is truly an estimate of the incident electron beam used for all images recorded while a tilt series is being acquired.
The phosphor screens on most TEM's (if present) are often coupled to some sort of current reading. On the 3200FS, both the large phosphor screen (160 mm diameter) and the smaller focusing screen (25 mm diameter) give output (measured in pA/cm2) about the strength of the beam. The large screen directly reads the current produced by electrons that hit it, while the focusing screen generates a calibrated output. Because of this, the two readings will be different (and the large screen reading will generally be more accurate). Keep in mind that the pA/cm2 reading is only accurate if the entire large (or small) screen is illuminated by the electron beam. In other words, when the large screen measures the current of the electron beam that hits it, the output screen current value is calculated assuming that the current is evenly distributed across the entire screen (and simply takes the measured current and divides by the area of the large screen: ~201 cm2).
Also, similar to the way DigitalMicrograph reports electrons per pixel in the image and not the incident electron beam as it hits the specimen, measurements made on the phosphor screens are reporting electrons after they have passed through the specimen, post-specimen apertures and the energy filter. With these warning in mind, it is possible to use the screen current readings to help maintain consistency of the electron beam strength across a series of images from the same grid, and to provide a measure of day-to-day consistency while examining similar samples.
Finally, starting at the end of the summer of 2014, David Morgan (as part of his daily alignment procedure for the 3200FS using a standard replica diffraction grating or waffle grid) has been recording the reading from the large phosphor screen when the magnification is set to 200,000x and the beam covers the entire screen. This is an easy way to monitor significant changes in the strength of the electron beam over time (with the caveat that subtle changes due to the exact composition of the waffle grid illuminated by the electron beam will definitely cause fluctuations in the readings even when the actual beam current has not changed).
|
OPCFW_CODE
|
Low resource usage but low query performance.
Hi, I am having the similar situation. Low cpu/mem/io usage but really low performance.it's my table create sql:
CREATE TABLE met430_base ( created_date Date DEFAULT today(), created_at DateTime DEFAULT now(), id Int32, time String, uuid String, dep String, sevice String, idc String, cpuload1 Float64, cpuload2 Float64, cpuload3 Float64, cpuload4 Float64, cpuload5 Float64, cpuload6 Float64, cpuload7 Float64, cpuload8 Float64, diskusage1 Float64, diskusage2 Float64, diskusage3 Float64, diskusage4 Float64, diskusage5 Float64, diskusage6 Float64, diskusage7 Float64, diskusage8 Float64, memory1 Float64, memory2 Float64, memory3 Float64, memory4 Float64, memory5 Float64, memory6 Float64, memory7 Float64, memory8 Float64, network1 Float64, network2 Float64, network3 Float64, network4 Float64, network5 Float64, network6 Float64, network7 Float64, network8 Float64, temp1 Float64, temp2 Float64, temp3 Float64, temp4 Float64, temp5 Float64, temp6 Float64, temp7 Float64, temp8 Float64, fopen1 Float64, fopen2 Float64, fopen3 Float64, fopen4 Float64, fopen5 Float64, fopen6 Float64, fopen7 Float64, fopen8 Float64) ENGINE = MergeTree(created_date, (uuid, created_at), 8192)
CREATE TABLE met430 as met430_base ENGINE = Distributed(logs, mdb, met430_base, rand())
8 nodes: 80 x Intel(R) Xeon(R) Gold 6138 CPU @ 2.00GHz MEM:300GiB
2 clickhouse-servers on 1 node.
clusters: sharding:8. repica:2
dataset: met430 with:22 billions rows,
query:
select idc,sum(cpuload18) ,count() from mdb.met430 group by idc limit 1
it took at least 80s with almost no resource consumption.
I wonder if there is some settings or configures to make clickhouse-server use resource as much as possible to improve the query performance and reduce time cost.
thanks.
Hello
We had a similar issue where queries were slow without resource usage (CPU and IO ). In our context the issue was related to a bad partitioning schema leading to too many files on nodes. In this case CH was spending all the time opening/closing files ( and i 'm just guessing but the file system was saturated).
Do a simple math , number of col * number of part ( which need to be retreived from the system.part as partition can be split ). If it reach millions you may face the same issue.
Also, a good KPI is the amount of system-cpu-time which was large compare to normal workload.
I hope that help
Can you show the output from cl-client with processed stats for one shard
select idc,sum(cpuload18) ,count() from mdb. met430_base group by idc limit 1
If you have huge number of idc, huge results need to be transferred (so it maybe a network is a bottleneck) to a node query-initiator and then the initiator needs to do final aggregation.
@den-crane @TH-HA
Thanks for your reply and advice. I tried to modify my sort key, and I really works for me.
Best regards and happy new year.
@seven7777777 I met the same problem, I have a table just like met430_base? so, how did you modify sort key on this met430_base table? thanks.
|
GITHUB_ARCHIVE
|
My email is this 👉 [email protected]
👇 Skip to what you want to see...
Who this document is for: people who need accessible and engaging content written about their projects or products. My strengths lie in wrapping complex concepts in a friendly and conversational tone. If that's what you need, please let's have a meeting. 🤝
If you want to hire me for game writing or other kinds of fiction, you're better off visiting my website 👩💻
What do I do?
What am I like?
I have been writing about tech policy for five years now. The last three years have been freelance, which means I’ve had the opportunity to work with loads of different amazing clients including Careful Industries, Hattusia, and AWO.
I’ve worked closely with Hattusia to produce lots of different kinds of reports:
I was recently the communications consultant for data rights agency AWO. For them I devised a strategy consisting of a monthly newsletter on algorithm governance, an overhaul of website copy, and a new publishing pipeline for their blog and case studies. I also wrote comprehensive guidelines and templates (including a writing style guide) so that AWO team members would be more self-sufficient when writing about completed projects.
Throughout 2022 I also worked with Careful Industries to produce reports for their clients. This work has been extremely varied, spanning from briefings about the value of community technology, to reports on the history of taxonomies and classification, and how these biases still exist in data sets today.
In 2022 I consulted with Generative Engineering to devise a content strategy. They were building a platform which would enable people and communities to procedurally design & build critical infrastructure at a local level. I devised a content strategy that was novel, exciting, and would clearly explain what their product could do. This work only reached the proposal stage, because Generative have since been acquired.
Alix Dunn and The Relay: I've been supporting Alix Dunn with her content strategy for two years now. I have helped her clarify her ideas and edit copy for The Relay, her monthly newsletter on moral imagination and technology. Alongside producing this newsletter, I have also partnered with her to structure content strategy for her courses on facilitation.
In 2021 I edited a comprehensive report commissioned by Policy Link which outlines the ways in which Big Tech business practices further entrench racial inequity in the USA. This report presented a unique challenge: we sought to describe, in depth, all the facets of the ‘Big Tech business model’ in a way that was accessible, and pair this with relevant policy recommendations.
In 2020, I consulted as a data strategist for Turn2Us, a charity who support those experiencing financial hardship. Most of Turn2Us’s services are digital, and at this time did not have any kind of data strategy or internal policies in place. I conducted external and internal interviews, synthesised the results, and wrote a report consisting of both short-term and long-term recommendations for an effective data strategy.
I worked full-time at Metomic as their content lead for just over a year. I wrote and maintained their blog (now archived due their pivot) and Medium publication. The idea with this blog was to breathe some life into subjects that are notoriously unsexy within tech establishments: data privacy, infosec, and regulations.
🗝 Some of the key parts of my job at Metomic:
Before joining Metomic I worked on a game called Packs, with a small company called One Hundred Emoji Limited (💯). Packs is a cute and engaging pattern-matching game — and I wrote the tutorials and other in-game copy. I also handled the Twitter side of things with the help of their mascot, Bork.
This list is highly 'curated' to ease your boredom...
Alix Dunn: writer and facilitation expert.
Alice Thwaite: founder, Hattusia
Rachel Coldicutt: founder, Careful Industries
Rich Vibert: co-founder, Metomic
|
OPCFW_CODE
|
Some years ago, a coworker introduced me to using Mediawiki as a documentation tool and I got immediately hooked on it.
The biggest pro is that it’s very easy to use and install (millions of people already use it on Wikipedia) and you can use it for almost anything.
IT documentation is always changing and constantly needs to be updated, the harder it is to update – the less updated it will be, unfortunately.
I’ll start by listing some of the pro’s with using Mediawiki for your IT documentation:
- Open source software, no cost for application, possible to install using L.A.M.P. or on a Microsoft platform using Screwturn Wiki.
- One dedicated platform for documentation, you’re not forced to share environment (and search results) with e.g. other Sharpoint sites.
- Fast and easy to create pages, update or rollback changes.
Use standardized wikipedia code to create pages, or feel free to use the awesome WYSIWYG editing tool.
- Built in version control for pages and LDAP / Microsoft Active Directory support. Restricted edit use for the IT department or give read permissions for everyone else.
- Only text makes searches very useful and you always find what you are looking for. If you can’t find it someone probably haven’t documented it yet. No more having to download and open Word documents to find the correct documentation.
- For offline use – Install a 3:rd party plugin for exporting to offline pdf-files.
The flexibility of using it for documentation however requires you to set some ground rules not unlike those used by Wikipedia.
Here are some rules I’ve found useful when implementing mediawiki:
- One page per server, application or area of documentation. Never split it up in several pages.
- Use descriptive page names and avoid names that can have multiple meanings.
- Create templates (also stored in the wiki) for Servers and Applications that can be used when creating new pages.
- Use headers to create a hierarchy for your page. Very useful when linking in to larger pages.
- Use capital letters for Server names, makes them easier to identify.
- Use the server & application pages for logging recent changes. Type in what you did and when you did it to make troubleshooting easier.
- Force users to search before they create a page, to avoid duplicates with similar names.
- Only allow upload of images (like screenshots or graphs) to the wiki. Never allow pdf, word or excel files to be uploaded. The wiki should not be a document store.
- Assign one or more mediawiki evangelists who help out with the initial design of the wiki, they can also help out with questions from other users.
Software requirements for installing could be Ubuntu (or your favorite Linux distribution), MySQL, Apache and Mediawiki.
Use mysqldump to backup your MySQL database to local disk and have your backup software do a backup of the files on the server. That way you can easily restore your files to a new server when needed.
Good luck with your wiki!
|
OPCFW_CODE
|
To dig deep about comp. engineering we need to have a good fundamental on computer arithmetic.
One of the the fundamental is performing arithmetic operation on numbers, but it’s not “just numbers” that I’d like to review today. It’s unusual number, as I might say.
Ok…there are a number of different bases, or radices. Most of us use the decimal positional numeral system, i.e. base 10 (decimal) for our everyday jobs. When it comes to computers most people use the binary, the hexadecimal or even the octal numeral system (you need to read, Why it’s easier for computer use binary instead of decimal like human). However, there are a number of different “unusual” bases.
For example, there are negative bases. An example is the negadecimal positional numeral system, that is using the base -10. Converting a number from base -10 to base 10 is as simple as:
But why use such a base? It’s very simple, you can represent any number you want, positive or negative, without using a sign. For example:
The conversion from decimal to negadecimal is pretty simple. You continuously divide by -10 and keep the remainder as you would do with any other positional numeral system. For example:
So . Converting a positive number is done the same way too.
So . As you can see, there is no need for a sign symbol (pretty neat huh?). And when using the negabinary numeral system there is no problem with signed and unsigned integers since there is no need for a sign bit!
But a negative base isn’t the only non-standard base. You can use complex numbers as bases too. This way there is no need to use a real and an imaginary part to represent a complex number. An example of such a base is where of course . A number can then have the form
Using this base you can represent any complex you want without using the symbol (God bless the inventor of this technique).
Converting from this base to decimal is pretty simple, however the reverse is a little bit difficult. What you do for the convertion is divide continuously with as usual. The remainder will always be or . So, if the quotient is then:
That means that if and are both odd or even, then , otherwise . Then we continue the division of the quotient as usual.
Now let’s calculate the value of 2.
2 has both the real and imaginary part even, so .
The real and imaginary part are both odd, so again.
Since the real part is even and the imaginary is odd, . So, we can divide by the number minus 1 and the remainder will be 0.
Now, the real part is odd and the imaginary is even. So again . We divide by so,
We now stop since . So we have . Pretty cool, now anyone can code it on NetBeans!
That’s something that I dig deep on the library of engineering at UDE for the last few weeks…hehehe….geeks die hard!
Finally on Monday, I went to the front of Prof. Auer’s Comp. Arithmetic class and solve some problems on the board and voila got bonus point as well.
~Semangat Anak Medan.
Studentenwerk, Kammerstrasse 206-208, 47057 Duisburg, Germany.
|
OPCFW_CODE
|
Microsoft’s Visual Studio 2008 and .Net Framework 3.5 have been released to manufacturing and are now available for MSDN (Microsoft Developer Network) subscribers to download, the company announced Monday.
.Net Framework 3.5, meanwhile, builds incrementally on the new features added in .Net Framework 3.0, including feature sets in Windows Workflow Foundation (WF), Windows Communication Foundation (WCF), Windows Presentation Foundation (WPF) and Windows CardSpace. Version 3.5 also contains a number of new features to avoid breaking changes, Microsoft said.
250 New Features
“Visual Studio 2008 delivers over 250 new features, makes improvements to existing features including performance work on many areas, and we’ve made significant enhancements to every version of Visual Studio 2008, from the Express Editions to Visual Studio Team System,” said S. “Soma” Somasegar, corporate vice president of the developer division at Microsoft.
“In Visual Studio Team System in particular, I’m pleased with the progress we made in scalability and performance for Team Foundation Server (TFS),” he added.
Although the products are now available for download, they won’t be officially released until February.
Web 2.0 Functionality
Visual Studio 2008 delivers improved language and data features, such as Language Integrated Query (LINQ), that make it easier for individual programmers to build solutions that analyze and act on information. It also provides developers with the ability to build applications that target the .Net Framework 2.0, 3.0 or 3.5, supporting a wide variety of projects in the same environment, Microsoft said.
New tools speed the creation of connected applications on the latest platforms including the Web, Windows Vista, Office 2007, SQL Server 2008 and Windows Server 2008, while other additions help improve collaboration in development teams, including tools that help integrate database professionals and graphic designers into the development process.
Together, Visual Studio and the .Net Framework reduce the need for common plumbing code, reducing development time and enabling developers to concentrate on solving business problems, Microsoft said.
Hitting a ‘Sore Spot’
“I think the LINQ feature is really the No. 1 new feature in Visual Studio,” Greg DeMichillie, lead analyst at Directions on Microsoft, told TechNewsWorld.
All programmers write programs that access data, using either Visual Basic or C# with SQL, DeMichillie noted. LINQ makes it easier to write code that queries databases, and so is a particularly important addition because it “really hits a sore spot for developers,” he explained. “Virtually 100 percent of customers will end up using that feature.”
Life Cycle Management
.Net Framework 3.5 will also help visual designers create and manage graphically rich Web 2.0 software, Melinda Ballou, program director with IDC, told TechNewsWorld. In addition, Visual Studio 2008’s TFS makes improvements in both performance and version control, she said.
“My main concern about Web 2.0 development is the need to do better quality testing,” Ballou added. Web 2.0 applications, because of their complexity and incorporation of diverse types of data, require different steps in managing the life cycle, she noted.
“I look forward to companies such as Microsoft and others providing effective life cycle management support for both Web 2.0 and service oriented architecture (SOA)-based software,” Ballou said.
|
OPCFW_CODE
|
Let's see what DALL-E is and how this artificial intelligence capable of generating images from text works. It is one of the AIs that started this revolution of generating images, together with others such as Stable Diffusion and MidJourney.
We will try to simplify our explanation, so that you don't need to have technical knowledge or understand technical terms to get an idea of how it works. And when we're done, we'll also tell you how you can try DALL-E and use it on your own to generate images.
What is DALL-E
DALL-E is an artificial intelligence system created by OpenAI, the same creators of ChatGPT. In this case, it is an AI that generates images from text, so you only have to describe what you want it to draw, and it will generate the image from nothing.
Developers at Routinehub can now make use of the DALL-E API to work magic.
This artificial intelligence is based on GPT-3, a language model trained with millions of parameters. This means that it is able to understand what you are asking it with natural language, since it has been trained to distinguish the formulas we use when we express ourselves and want to ask or ask something.
In addition to this, DALL-E has also been trained through a gigantic library of works of art and photographs. Thanks to this, when you ask it to draw a celebrity, DALL-E will know who you are referring to, and it will draw what this person looks like by performing an action that it will also know how to interpret and draw.
In addition to this, this artificial intelligence system is also capable of combining concepts, styles and attributes for an image. So, if you explain that you want to see a certain thing, specifying details or even artistic style, the AI will try to combine everything in the image.
DALL-E is a model that is constantly evolving. Its first version was presented in 2021, and in 2022 OpenAI presented DALL-E 2, which is the current version. And eventually it will bring out a DALL-E 3, which will be more capable and will generate better images through our texts.
How DALL-E works
FALL-E uses what is called a diffusion model, which are those artificial intelligence systems capable of creating images out of nothing. In this creation process, it learns from the latent structures of the data to train itself to eliminate the Gaussian noise of blurred images, which are those small distortions that can be generated in this type of AIs.
Its creation process is the same as other similar AIs, and can be summarized in three steps. First, it encodes and understands the text you have written in the prompt or request. In this way it tries to know what you mean, and tries to distinguish the different features, characteristics and styles that you have asked it to draw.
Then, DALL-E creates image information from this request, and finally uses a decoder that paints the image from that text. In short, it first understands what you ask it to do, then it thinks what elements it will have depending on your request, and finally it draws the picture.
Each time you ask it to draw something the result changes, since it processes it again from scratch. Therefore, you can make a request until it finally draws what you want to see.
How to use DALL-E
OpenAI, which is the developer of this AI, has an official website where you can use DALL-E. The only thing you need is to have registered with OpenAI, the same as you need to use ChatGPT. Therefore, you can use both AIs with the same account. The DALL-E website is Openai.com/dall-e-2, where you have to login or register.
Once you have identified yourself, you will enter the DALL-E page, where you will be asked to acquire credits, if you registered before April 6, 2023, you will have 5 free credits. In it, you have a bar where you must write what you want it to draw, something you can do both in English and Spanish.
And that's it, when you type something and click on the Generate button, the AI will take a few seconds and present you with 4 images that represent what you have asked it to draw. Then, you can ask it for more different things or add more details to your request to refine, although you can also ask it to generate images again from the same promtp.
|
OPCFW_CODE
|
I've spent two days trying to figure out why Maple won't allocate more than 512 MB of memory on my Mac. I've checked all the forums here an on Apple and it seems that this is a problem which has been around for at least two years, without being properly resolved. Previous posts can be found here:
I've discussed with technical support from both Apple and Maple, and it seems we have narrowed the problem to a larger issue with the Maple script than was realised in the previous 2 forums.
Essentially, the problem is this: there is a file, `.../bin.APPLE_UNIVERSAL_OSX/mserver_ulimit' which can be altered so that the upper limit to the amount of memory Maple can use may be any arbitrary number up to `unlimited', for which the `datalimit' inside Maple is set to infinity. In the current distribution of Maple 13, and possibly in earlier releases, this value is set to unlimited, by default. However, kernelopts(datalimit) returns 409600 (kibibytes). The reason this happens is that there is a 512000 kibibyte maximum that is externally set in the maple script. The mserver_ulimit file actually offers a list of values which are submitted to be ulimit, and the value is set at the largest number below that cutoff. In the currently distributed code, the value which meets this criterion is 409600, but can be manually altered to anything up to 512000. Alternatively, all the values less than `unlimited' may be deleted from mserver_ulimit, so that after it is re-executed the datalimit is set to that global upper limit---512000.
The file which contains the error is, I think, `.../bin/maple'. Please check this by opening this executable file in a text editor and searching for `ulimit'. I'm not a programmer, but is seems pretty obvious to me that code like
if [ "$DLIMIT" = 6144 ]
ulimit -d 512000 1>/dev/null 2>&1 || ulimit -d 409600 1>/dev/null 2>&1 || ulimit -d 1>/dev/null 2>&1 307200
if [ $DLIMIT != "unlimited" ]; then
if [ $DLIMIT -lt 512000 ]; then
ulimit -d 512000 2>/dev/null || ulimit -d 409600 2>/dev/null || ulimit -d 307200 2>/dev/null || echo "Unable to set the required datalimit. Maple may not function properly."
if [ $CUSTOMHEAP -ne 0 ]; then
JAVAHEAP=`expr $DLIMIT / 1024 - 50`
must be the culprit. The first part is the code for running Maple on Mac OS. My friend at technical support is going to talk to a developer to see if this can not be fixed so that ulimit can be set to unlimited, as for Windows and Linux platforms.
If anyone has any further insight, or can offer a solution, or if a replacement file is written and made available which could fix this problem, please post it! There are forums covering this issue on both Apple.com and MaplePrimes, so I know there are other Mac users who have been having this issue for more than 2 years.
|
OPCFW_CODE
|
jQuery.browser is deprecated, but how do you use .support?
On my web page, I have this CSS:
tr:hover {
background-color: #f0f;
}
Which works nicely in all browsers except for good old IE. I know that I can just write some jQuery to add and remove a class on mouse over/out, but I'd prefer not to handicap (albeit ever so slightly) all the other browsers which support :hover properly - so I want to only apply this JS behaviour for the browsers which don't support the pure CSS solution natively.
Of course, we all know that $.browser is deprecated, and we all know that browser sniffing is a bad thing, and every other question on SO has a raft of answers along the lines of "you're not supposed to check for the browser, check for the feature", and that's all well and good in the magical fairy land where these people live, but the rest of us need to get our sites working and looking ok across IE6 and other browsers.
$.support looks like this for IE6 & 7:
leadingWhitespace: false
tbody: false
objectAll: false
htmlSerialize: false
style: false
hrefNormalized: false
opacity: false
cssFloat: false
scriptEval: false
noCloneEvent: false
boxModel: true
How on earth am I supposed to use these properties to determine whether tr:hover will work?
Yes I know that in this example, it's fairly innocuous and I could probably get away with either not giving IE users that feature, or by simulating it across all browsers, but that's not the point. How are you supposed to stop using $.browser when $.support doesn't come close to replacing it?
Simple answer - in some circumstances you can't, this is one of those circumstances where I would argue that you should use whatever means necessary to get the job done. Many popular plugins ( datepickers and modal scripts ) need to do this ( eg iframe shim ) or the script wouldn't work properly for a specific ( albeit old ) browser.
However instead of sniffing the userAgent as $.browser does, I would object detect for IE6 or use CC's.
<!--[if IE 6]><script src="/js/ie6.js"></script><![endif]-->
You could also feed a general IE js file and then branch out inside of it based on the version.
+1, because this probably is more reliable than user-agent sniffing, but it doesn't address the problem which is that you want your code to be forward compatible: if IE9 comes out and supports feature X, using an IE-only conditional comment won't work. (I know you could do [if IE<9] or something, but that's still sniffing..
The advantage though of CCs are they can't be spoofed, or well at least not without hacking your registry. I wish jQuery didn't deprecate the $.browser. Must of been a long list of emails to Mr Resig: "You're doing it wrong!"
|
STACK_EXCHANGE
|
We — Manton Reece and Brent Simmons — have noticed that JSON has
become the developers’ choice for APIs, and that developers will
often go out of their way to avoid XML. JSON is simpler to read
and write, and it’s less prone to bugs.
So we developed JSON Feed, a format similar to RSS and Atom but in
JSON. It reflects the lessons learned from our years of work
reading and publishing feeds.
Sure, but the API also exposed at the top level feedparser.parse('http://example.com/feed.json').namespaces, and 100% of the behavior of feedparser is in FeedParserMixin which is 100% attached to XML. I mean I understand the spirit in which this comment of offered but it's hard to imagine feedparser changing enough that it would support json feeds without basically being rewritten, even the test framework of feedparser is all setup around XML
Might be interesting, and I'd be happy to generate this for my own stuff if it sees much usage, though one gets a little nervous about fragmentation. I feel like I remember an RSS-as-JSON spec a while back that didn't really get any uptake. On a related note, I'm still waiting for someone smarter than me to take the off-the-cuff thought of "sites could publish (and feedreaders consume) .git" and do something interesting with it.
It's hard to escape the media pronouncements that iPhones are now boring again after Samsung unveiled its latest Galaxy S8, Apple's Mac business is being overshadowed by more exciting Surface Windows PCs from Microsoft and that Apple Watch is a disappointing dud. But all of those media narratives are wrong, here's why.
The United States Senate continues the war against their own users. One Hackernews suspects some kind of massive federal conspiracy to censor comments on reddit.com. Another suddenly realizes that people might disagree about things for reasons other than ignorance, and becomes distressed. The rest of the comments are people arguing about technical methods to work around the user-tracking they implement in their day jobs.
Google continues the war against their own users. The XMPP Memorial Society trades barbs about whose fault it is that a misdesigned overengineered shitshow of a protocol failed to gain traction amongst non-erlang enthusiasts. Every single messaging platform in current existence is held up as Obviously The Future. Hackernews tries to figure out what Google's master plan is, and why Google is working so hard to make it look like aimless poorly-managed floundering. IRCv3 continues to be a retarded pile of solutions to the wrong problems.
The United States House of Representatives continues the war against their own users. Hackernews is outraged, presumably because the rules will now enable other companies to compete with Google in the lucrative Fuck Everybody's Privacy market sector. The entire comment thread is just Hackernews arguing about political shit and deciding which elected officials are betraying the American people. Not a single goddamn Hackernews makes the obvious connection to the shit they do at work all day for a living. The tacit consensus: Hackernews isn't bad for creating the tools of surveillance capitalism; Congress is bad for letting people use them.
Some academics figure out how to make shit in pictures look like shit in other pictures. One Hackernews notices that the machine learning papers have largely stopped relying on mathematics or any other scientific endeavor; the others are ready with reassurances that someone will get around to formal research sooner or later. All this stuff is super worthwhile in the meantime because we can just keep passing around training sets verbatim and treating them as infallible, just like we do with node.js libraries! Both the machine learning community and the web development community are completely free of charlatans! Scout's honor!
HELLO FRIENDS. I am announcing this everywhere because I'm very excited about
it. I released a new zine today! Read it here! Read all my zine things at jvns.ca/zines!
This zine is about some of my favorite Linux debugging tools, especially tools that I don't think are as well-known as they should be. It covers strace, opensnoop/eBPF, and dstat! netcat, netstat, tcpdump, wireshark, and ngrep! And there's a whole section on perf because perf is the best.
If you don't know what any of those tools I just mentioned are -- PERFECT. You
are who this zine is for!!! Read it and find out why I love them! Also, a lot
of these tools happen to work on OS X :)
I've been really delighted to see that a ton of people have enjoyed & learned
something new from this zine, whether they just started using Linux (!!!) or
have been debugging on Linux for 10 years.
As usual, there are 3 versions. If you print it, you can print as many as you
want! Give them to your friends! Teach them about tcpdump!
Due to popular demand, I am pleased to announce the launch of the Rails Tutorial screencats! The art of web development procrastination has never been cuter:
Although they’re not nearly as adorable as the screencats (for obvious reasons), I’m also pleased to announce the launch of the Ruby on Rails Tutorial screencasts, updated for Rails 5!
The Rails Tutorial screencasts are the most up-to-date resource for learning web development with Ruby on Rails. They are available for free via the Learn Enough Society, as well as being available for purchase as direct downloads. Those links include a 20% launch discount that expires in a week, so get them while it lasts!
The best way to get the new screencasts is via the Learn Enough Society, which includes all 15+ hours as integrated streaming video:
The Learn Enough Society also includes text and video for the three Developer Fundamentals tutorials (Command Line, Text Editor, Git), as well as immediate access to new tutorials as they’re released.
The Rails Tutorial screencasts are the ideal complement to the Rails Tutorial book, allowing you to see exactly how web applications are built in practice. There are video lessons corresponding to each chapter of the book, totaling over 15 hours of content. You can view a full sample lesson here.
As with the 3rd edition of the tutorial, the new 4th edition covers every major aspect of web development:
Creating both static and dynamic pages with Rails templates
Data modeling with a full database back-end
Creating a working signup page from scratch
Building a custom login and authentication system
Activating accounts and resetting passwords
Sending email in Rails, both locally and in production
Advanced data modeling to create a mini Twitter-like application
Coverage of software best practices, including test-driven development and version control
Emphasis on strong security throughout
Deploying to production early and often
Those familiar with the previous edition will find the following main differences in the Rails 5 version:
14 lessons instead of 12, due not to new material but to the two longest lessons being split in two (much more manageable)
Full compatibility with Rails 5, including the use of the rails command in place of rake
A shift toward integration-style testing for controllers, together with a new convention for passing parameters in tests
Because Rails 4.2 and Rails 5.0 are so similar, the new edition of the screencasts did not need to be created from scratch. Instead, the minor diffs mentioned above are highlighted as text notes in the videos themselves. The result is that it is immediately apparent which parts of the Rails framework have changed between versions.
Remember, all 15+ hours of the Rails Tutorial screencasts are available both via the Learn Enough Society and as direct purchase. Those links include a 20% launch discount* that expires in a week, so get them while it lasts!
* Note: The discount applies to any Rails Tutorial purchase or to the first month of the Learn Enough Society.
|
OPCFW_CODE
|
I would break this question to multiple pieces and try to answer each piece.
1) Is the design methodology for opamps (or say analog circuits) that people used in deep-sub-um technologies any different than what people should use in latest nm technologies?
Not really. The physics did not change and the back-of-envelop calculations won’t change either. You will still consider second-order effects the same way you used to do it before.
In recent technologies you have more complex effects due to layout and technology restrictions, but it is very hard to use hand calculations for them. So, you would still understand their root cause, but you would rely on the foundry providing accurate model for these effects.
2) What are the challenges for analog (or opamp) design in these deep technologies?
There are some challenges and limitations due to the technology itself. This includes the layout-dependent effects mentioned before, the discrete dimensions designers should use, the poor intrinsic gain as you move from one technology to another, the limitations on the supply range you can use if your design requires high-voltage .. and so on. One key here is to make sure that you know exactly the region covered by your models and to be extra careful if your innovation is based on biasing devices in regions not covered by the model. While you can see the simulation results working, you will never get the performance if the model does not support your use case.
Another type of challenges is due to the fact that your opamp is in a nm technology for a reason. It won’t scale much in area is its fellow digital blocks. In this case, since the chip is almost dominant by digital circuits then you want to make sure the integration does not cause failure to your analog design. There can be further restrictions on the supply, extra guard-rings needed to isolate from digital noise, and many others that can be discovered when you work on full-chip solutions.
3) What methodology should you use?
Again, it is not really different from before. You should understand the limitations of models, the limitations of device choices, do the most simple and intuitive method for hand calculation, understand the devices limitations and try to do some characterization for each device in simulation by itself to get parameters such as gm/Id vs. Id, Ids vs. Vgs and Vds, and other simple relations with different device geometries. Understand when saturation occurs per device width. Check the definition of threshold voltage according to the foundry … etc. Extract the parameters to use them in your hand calculations to get the most accuracy out of it. Once you feel you are comfortable with your knowledge about the technology go ahead and do a simple design, say a differential amplifier. Try cascoding and see what gain do you gain out of it. See if this matches your previous device characterization for output impedance and intrinsic gain.
Finally, go ahead and do your design in small steps to understand how each piece work, then assemble parts together and run simulations. It will match your intuition and your understanding of physics and you will know at which direction you need to tweak current, device size, loading …etc
You are not done yet, you still have to check how PVT variations affects you. Later to make your design robust you need to consider reliability and aging effects.
This answer is not complete and it is not meant to be complete. Analog design comes from the understanding of basics and the amount of experience you build from different design and from always thinking about why or why not it may work. So, don’t be overwhelmed. Just get started and build strong basic and intuition. You will get there.
|
OPCFW_CODE
|