#!/bin/bash
if [ -z "$1" ]; then
echo ""
echo "script to use Urho3D's PackageTool on whole Folder"
echo ""
echo "USAGE: ./package_folder.sh folder_with_folders_to_be_packed [outputfolder]"
echo ""
echo "if no output folder specified, then the result is placed in the input-folder"
echo ""
exit 1
fi
# Directory this script resides in (PackageTool is expected next to it)
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
# Resolve input and output folders to absolute paths so relative arguments work
IN="$( cd "$1" && pwd )" || exit 1
if [ -z "$2" ]; then
    OUT="$IN"
else
    OUT="$( cd "$2" && pwd )" || exit 1
fi
cd "$IN" || exit 1
for d in */ ; do
    FOLDERNAME="${d%/}"
    "$DIR/PackageTool" "$IN/$FOLDERNAME" "$OUT/$FOLDERNAME.pak" -c
done
|
STACK_EDU
|
Date: Mon, 10 Aug 2015 00:14:38 +0300
From: Solar Designer <solar@...nwall.com>
To: François Labrèche <f.labreche@...il.com>
Cc: oss-security@...ts.openwall.com, Olivier Bilodeau <olivier@...tomlesspit.org>
Subject: Re: CVE request - simple-php-captcha - captcha bypass vulnerability

On Sun, Aug 09, 2015 at 03:50:10PM -0400, François Labrèche wrote:
> We found a captcha bypass vulnerability in an open source captcha
> software, made by Cory LaViska for A Beautiful Site. Here is the github
> repository: https://github.com/claviska/simple-php-captcha.
>
> We opened an issue on github
> <https://github.com/claviska/simple-php-captcha/issues/16>, and the
> vulnerability has been fixed. They never did any release so we don't
> think the fix will be released in any form. Simply advising users to
> update to git master's should suffice.
>
> The simple-php-captcha.php file had a vulnerability enabling a client to
> generate the captcha response automatically, effectively bypassing the
> captcha.
>
> Since the microtime() function was used both in the initial seed for the
> captcha and in the captcha url path sent to the client, it was possible
> to generate the captcha result automatically by running the same code
> client-side.

And you think removing the srand(microtime() * 100) fixes this? Well, it does appear to fix the most straightforward and easiest attack, and captchas are bypassable in general, but does this raise the bar high enough for the "fixed" version not to be CVE-worthy? Or are you going to be requesting a second CVE ID for it then?

The "fixed" code relies on PHP's automatic seeding for rand() (which is typically dependent on system time anyway, adding only a process id to the mix), and, what's probably worse, it uses rand() so many times that it leaks its tiny internal state via properties of the captcha that are easy for a computer to analyze.

While figuring out the captcha text might require OCR, figuring out the text length, font size, x and y position, and colors is easier. OCR isn't rocket science, but it's the intended level of "security" of this captcha, while being able to infer the text through even easier analysis of "metadata" is a captcha bypass, somewhat similar to (but moderately trickier than) your initial finding.

Alexander
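The original attack hinges on a general fact: a PRNG seeded with a value the client can observe or guess is fully reproducible. A minimal Python sketch of the idea (illustrative only; the vulnerable code is PHP using srand()/rand(), and make_captcha here is a hypothetical stand-in):

```python
import random
import time

def make_captcha(seed):
    """Server side: derive the captcha code from a PRNG seeded with the clock.
    Mirrors the srand(microtime() * 100) followed by rand() calls pattern."""
    rng = random.Random(seed)
    return "".join(rng.choice("ABCDEFGHJKLMNPQRSTUVWXYZ") for _ in range(5))

# Server generates a captcha; the seed leaks (e.g. via the image URL).
leaked_seed = int(time.time() * 100)
server_code = make_captcha(leaked_seed)

# Attacker runs the same deterministic code with the leaked seed.
attacker_code = make_captcha(leaked_seed)
assert attacker_code == server_code  # captcha bypassed without any OCR
```

The fix must break this symmetry, e.g. by seeding from a cryptographically secure source that is never exposed to the client.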
|
OPCFW_CODE
|
Pocket Hunting Dimension, Chapter 1258: Build A Cultivation Sacred Ground
Translator: Dragon Boat Translation  Editor: Dragon Boat Translation
Immediately, a tremendous frost flow surged.
The elders became all the more suspicious.
If the Human Race could absorb all these treasures, what level would they attain?
He guided them to the entrance of the alternate dimension.
“Yeah!” The other elders nodded.
The space boundary of the alternate dimension was cracking!
Elder Nangong and the rest were speechless.
Was there even more?
This was ridiculous!
Most of them were cosmic cloud state items, but there were quite a few cosmic realm state items. There were only one or two cosmic system state resources.
Elder Nangong looked at Lu Ze vigilantly. “What are you planning to take out this time? You can keep it for now. Just the items here are enough.”
Lu Ze was astonished too. He had believed this space was big enough.
Elder Nangong had resolved never to give Lu Ze a chance to keep acting cool!
He moved to the door.
At this moment, Lu Ze’s voice sounded. “Lin Ling, reinforce the space!”
They had only seen a small portion, and they were already this amazed.
The elders soon glanced across all the things in the storage ring.
Elder Nangong looked inside and asked, “Ze, what is that thing?”
“Medium-class dao enlightenment jewel?! There are that many of these here?”
Xu Bingbai grinned and said, “Okay, no need to flatter us. Take out what you wanted to show us!”
By then, planetary state prodigies could come in to learn the Ice God Skill.
Nangong Jing smiled. “Grandpa, Ze will take out something important. You guys should accept it. It will be very significant to the Human Race.”
They finally understood why Lu Ze hadn’t taken it out until now.
He had to expand this by at least ten times; only then could the cosmic system state handle the very boundary of the frost flow.
Elder Nangong rolled his eyes. “The things we have looked at alone are more than enough. We can look at the rest later.”
|
OPCFW_CODE
|
What are the factors affecting expected difficulty and what is the maximum value it can take over the period of 30 days?
Let's take for example a day-1 difficulty of 300MM. What is the mechanism that determines the rise in difficulty?
What is the maximum value it can go starting from that number and expecting the biggest possible rise over 30 days?
You should have searched first. While there is a limit of x4 on the increase in every adjustment period, there is no limit on what it can be in X days if the hashrate rises rapidly enough.
It has some similarity, but i am asking about a specific time period. So if it takes 30 days to generate the required amount of blocks then difficulty will not be adjusted and stay 300MM?
I think the question is more about the maximal increase than about the underlying difficulty; still, it is deducible from the linked question.
The difficulty is not limited by a specific time frame, but depends solely on the amount of hashing power available in the network. The difficulty is updated every 2016 blocks, increasing or decreasing by up to a factor of four. While the algorithm strives to set the difficulty such that it will always take about fourteen days for 2016 blocks, it could happen that the hashing power grows much faster than that:
Scenario: Difficulty has been just set to 300MM, and somebody adds a crazy mining rig which is 20x as fast as the whole network
The network is now 21x faster than what the difficulty tries to balance
2016 Blocks are mined in 16h instead of 14d
Difficulty quadruples to 1200MM, the next difficulty reset is reached after another 64h (2d 16h)
Difficulty quadruples again to 4800MM, the next difficulty reset is reached after 256h (10d 16h)
Difficulty finally catches up with the hashing power added and only increases by 21/16.
In this scenario it would take the network (16 + 64 + 256)/24 = 14 days to increase the difficulty by a factor of 21. Imagine that our great engineer figures out how to build one of those super-rigs every day at first and then keeps accelerating. Theoretically the difficulty could increase by a factor of four every hour for all 30 days, which would be a factor of 4^720 after 720 hourly adjustments.
In other words, there is no maximum value for the difficulty in the next 30 days.
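The scenario above can be checked with a short simulation (a sketch under idealized assumptions: constant hashrate after the jump, and block times exactly proportional to the difficulty mismatch):

```python
def retarget_days(initial_difficulty, hash_multiple, target_days=14.0, cap=4.0):
    """Days until difficulty catches up after the network suddenly becomes
    `hash_multiple` times faster than the current difficulty assumes.
    Returns (total_days, final_difficulty)."""
    difficulty = initial_difficulty
    ratio = hash_multiple                # blocks arrive `ratio` times too fast
    days = 0.0
    while ratio > 1.0:
        days += target_days / ratio      # real length of this 2016-block period
        step = min(ratio, cap)           # per-period adjustment is capped at 4x
        difficulty *= step
        ratio /= step
    return days, difficulty

days, final = retarget_days(300e6, 21.0)
print(days, final / 1e6)   # ≈ 14.0 days, difficulty ≈ 6300 MM
```

This reproduces the three periods from the answer: 16h at 21x, 64h at 5.25x, 256h at 1.3125x, for 14 days total and a final difficulty of 300MM × 21 = 6300MM.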
thanks for the response. I am trying to make sense of how the difficulty evolves, which seems to be rising exponentially...
|
STACK_EXCHANGE
|
After watching the season premiere of the Game of Thrones (GOT) final season a few weeks ago, I flew to Atlanta for EMPOWER 2019 which is VMware’s event for Partners. It kicked off with a happy hour and demo station presentations. I supported the Edge and IoT demo station where we did some fun Raspberry Pi demos and had some cool giveaways.
One fun theme we had was that IoT is Coming and if you use Pulse IoT Center, “you’ll know things”. GOT fans will get the reference; if not, search “I drink and I know things” in your favorite browser. Pulse IoT Center is a tool for managing your IoT and Edge environment. See this post for more detail on how it compares to vCenter and Workspace ONE.
I was really impressed with the number of partners who had a good understanding of Edge and IoT and were already working with customers on their overall strategy. In one case the partner was planning for State and Local schools to implement IoT device management for video surveillance cameras and gunshot detection sensors. In another case, they were looking to bring operational efficiency to their customers’ manufacturing floors, which were being refreshed with new ruggedized gateways and wireless sensors.
Many of the partners were excited to see the Raspberry Pi demo which went like this.
At the booth we had a monitor with a browser and one AstroPi:
This is a Raspberry Pi 3B+ with an add-on SenseHAT that monitors temperature, humidity, barometric pressure, yaw, pitch, roll, and has a joystick and LED display. We logged into the Pulse IoT Center dashboard which is based on the standard VMware Clarity HTML5 UI. So the look and feel and navigation should be familiar to administrators of other VMware products.
This is our recently released Pulse IoT Center 2.0. Then we clicked on “Devices” to show list of Devices under management:
Notice that this lists “Gateway” Device Types and “Thing” Device Types, and that the Raspberry Pi is a Gateway and the SenseHAT is a Thing. The difference is that a Gateway can run our Pulse IoT Center Agent (IOTC Agent) and a Thing cannot. However, a Thing can be managed via the IOTC Agent running on the Gateway it’s attached to. In this case, the SenseHAT Thing is physically attached to the Raspberry Pi Gateway. In other cases, Things may connect via Bluetooth, Zigbee, Modbus, or some other IoT protocol that both the Thing and Gateway can support.
Clicking on the Raspberry Pi Gateway you get this basic information:
Clicking on “Properties” you get more detailed information:
This is a good way to find the IP Address of the device, uptime, os-release, status of SSH, or any custom information specific to that device like the location of the physical gateway.
Clicking on “Metrics” shows CPU, Memory, etc. about the Raspberry Pi gateway.
Clicking on “Connected Devices” shows the Things connected to the Gateway.
In this case, there’s only one Thing, the SenseHAT. Some gateways could have many Things connected physically or wirelessly. If you click on the “SenseHat” Thing and then “Metrics” you can see what the Raspberry Pi has been collecting from the SenseHAT.
OK, now for the fun part. If you go back to the Raspberry Pi Device view and click on the three little dots on the right you can click on “Commands” to get to the command console.
Once in the Command Console you can click on “SEND COMMAND” to get this list of predefined commands as well as some commands we added:
For the demo, we want to turn on the SenseHAT LED display so we select “SenseHatDisplay On” and then click “SEND COMMAND”.
The Pulse IoT Center Agent running on the Gateway will check in with its Pulse IoT Center every 5 minutes by default. For the purposes of the demo, we shortened this to 5 seconds. When it checks in, it will ask whether there is a command or campaign to run. In this case, it will see that there is a command pending and will execute it, turning on the LED display.
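The check-in mechanism is essentially a polling loop. A rough sketch of the idea (hypothetical names and a simulated server queue, not the actual IOTC Agent code):

```python
import time

def run_agent(pending, handlers, interval=5.0, max_polls=3):
    """Agent check-in loop: every `interval` seconds, ask the server for a
    pending command and execute it on the device. Names are hypothetical."""
    executed = []
    for _ in range(max_polls):
        command = pending.pop(0) if pending else None   # simulated server queue
        if command in handlers:
            executed.append(handlers[command]())        # run it on the device
        time.sleep(interval)
    return executed

# Demo: one queued command, like selecting "SenseHatDisplay On" in the console
queue = ["SenseHatDisplayOn"]
result = run_agent(queue, {"SenseHatDisplayOn": lambda: "LED on"}, interval=0.01)
print(result)   # ['LED on']
```

Shortening the demo interval from 5 minutes to 5 seconds is just the `interval` knob here; the trade-off is responsiveness versus network and battery cost on the device.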
If the device is in a remote location, the status of the command can be monitored:
This is an example of sending a command to a single device. Pulse IoT Center is capable of running Campaigns which will perform commands on multiple devices. We can address that topic in another post. And, this is just one of the many examples of how Pulse IoT Center can be used to manage IoT devices.
|
OPCFW_CODE
|
How to find which coordinate or pixel (x,y) contains which colour intensity?
Commented: Guillaume on 4 Jul 2016
I am writing a program where I am able to find the RGB values in the image using the below code
R = a(:,:,1); % Red color plane
G = a(:,:,2); % Green color plane
B = a(:,:,3); % Blue color plane
Now how I can find which coordinate or pixel (x,y) contains which type of colour intensity.
Muhammad Usman Saleem on 30 Jun 2016
Edited: Muhammad Usman Saleem on 30 Jun 2016
Image Analyst on 30 Jun 2016
To interactively see the RGB values, use impixelinfo():
hp = impixelinfo();
Also, you're using size incorrectly:
[r,c] = size(a);
c is the product of the number of columns times the number of color channels, so is basically three times the number of columns and that's why you get an index out of bounds error. See Steve's blog for more info:
Don't call your image "a" - that is not a very descriptive name, and it seems likely you might use "a" again later in the code, blowing away your image because you forgot you had used it. Call it rgbImage instead. It's much more descriptive. To use it correctly, do
[rows, columns, numberOfColorChannels] = size(rgbImage);
There are a few things in your code that show a lack of understanding of how images are represented and of how MATLAB works. I would suggest grabbing a book on image processing from your favorite library:
[rR, cR, zR] = size(R);
[rG, cG, zG] = size(G);
[rB, cB, zB] = size(B);
R, G, and B are the three colour planes of your images. The z* is always going to be 1, there's no point asking for it. The size of the colour planes is going to be the same as the size of the image, so rR == rG == rB == r, same for c*. In other words, the above three queries are completely unnecessary. You've already got the information.
valueR = double(something uint);
%then simply print value
There's absolutely no point in converting to double. The exact same number will be printed as if you hadn't bothered.
value = double(rgbImage(i,j));
Note that rgbImage is an r x c x 3 matrix. You haven't specified the 3rd dimension index in the above, so due to the way MATLAB indexing works, it's simply 1. Therefore that loop is only going over the red pixels, same as your first loop.
Note that to make it easier to spot bugs, I would move the initialisation of totalsum to 0 just before the loop starts.
A much simpler way to save your pixels to text files would be:
rgbImage = imread('C:\Users\Desktop\Documents\MATLAB\example.jpg');
[height, width, ~] = size(rgbImage); %height and width are more meaningful than r and c.
RedChannel = rgbImage(:, :, 1);
GreenChannel = rgbImage(:, :, 2);
BlueChannel = rgbImage(:, :, 3);
[rows, columns] = ndgrid(1:height, 1:width);
%because you save by rows and matlab is column based, we need to transpose all the arrays before reshaping them into one column
%it can then be written as one matrix
rows = reshape(rows.', [], 1);
columns = reshape(columns.', [], 1);
dlmwrite('rColor.txt', [rows, columns, reshape(RedChannel.', [], 1)], ' ');
dlmwrite('gColor.txt', [rows, columns, reshape(GreenChannel.', [], 1)], ' ');
dlmwrite('bColor.txt', [rows, columns, reshape(BlueChannel.', [], 1)], ' ');
I have no idea what you're trying to do with your last loop, but I'm certain you don't need a loop.
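For readers working outside MATLAB, the same per-channel [row, col, value] export can be sketched in Python with NumPy (an illustrative translation, not part of the original answer; the tiny demo array stands in for imread):

```python
import numpy as np

def channel_tables(rgb):
    """Return, per colour channel, an (N, 3) array of [row, col, value]
    in row-major order, matching the MATLAB transpose-then-reshape trick."""
    height, width, _ = rgb.shape
    # 1-based coordinates, like MATLAB's ndgrid(1:height, 1:width)
    rows, cols = np.meshgrid(np.arange(1, height + 1),
                             np.arange(1, width + 1), indexing="ij")
    return [np.column_stack([rows.ravel(), cols.ravel(), rgb[:, :, c].ravel()])
            for c in range(3)]

# Tiny 2x2 demo image instead of imread('example.jpg')
demo = np.arange(12, dtype=np.uint8).reshape(2, 2, 3)
red, green, blue = channel_tables(demo)
# Each table can then be written out with np.savetxt('rColor.txt', red, fmt='%d')
```

NumPy's default C (row-major) order already matches the "save by rows" goal, so no explicit transpose is needed.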
Guillaume on 4 Jul 2016
"I want to reduce the size of the image" Physical size (i.e. imresize the image)?, the memory footprint but not size, maybe by reducing the number colours and converting to indexed with rgb2ind?
"I am trying to get the high color density values" What does that mean? What is the density of a colour?
"either reduce it or remove it" What does it refer to?
" high color density values/bits" Again what does that mean? Why are you suddenly talking about bits?
|
OPCFW_CODE
|
I think you misunderstood my solution. It addresses your problem completely (detecting whether a block is player-placed or world-generated).
A tree is made out of 20 "original" wood blocks (i.e. trees that the world generates).
Each "original" wood block will drop 1 "normal" wood, and maybe a special drop (sap, bark, etc.)
The "normal" wood only drops itself.
Ergo, you would always get 20 wood, but you can’t then take those and break them continuously to farm sap and berries.
You can then take this one step further: instead of having a different item, you have a flag on each item which denotes whether it is original or not. This is harder to implement (databases, world formats, etc. might need modification), but there should be very little that can go wrong doing it this way, and it will be trivially transferable to other blocks as well.
This problem is solvable, but the ease of a solution heavily depends on how boundless is coded. Seeing this solution, it is probably just a stop gap measure until they can implement something better.
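The flag-based variant could be modelled like this (a hypothetical sketch, not the game's actual code; the 25% special-drop chance is an assumption):

```python
from dataclasses import dataclass
import random

@dataclass
class WoodBlock:
    original: bool = False   # True only for world-generated blocks

def break_block(block, rng=random):
    """Breaking any wood yields 1 wood; only original blocks may also
    yield a special secondary drop (sap, bark, ...)."""
    drops = [WoodBlock(original=False)]          # wood you get back is never original
    if block.original and rng.random() < 0.25:   # assumed 25% special-drop chance
        drops.append("sap")
    return drops

# A re-placed block can be broken forever but never yields sap:
placed = WoodBlock(original=False)
assert all("sap" not in break_block(placed) for _ in range(100))
```

You always recover the wood itself, but the secondary resources can only ever come from world-generated trees, which removes the place-and-rebreak farming loop.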
I too would prefer this. Or an option to voluntarily give up secondary resources if you’re hunting for just the block type. I can see why a defense against macros is needed but the current proposal is a solution which punishes everyone with (again) needing more time to achieve the same thing, in order to prevent a subset of players abusing the systems. I think there’s another way to solve this, some suggestions here are good!
Same for me. I’m also concerned that the landscape is going to be far more decimated than it already gets in populated areas because players are having to cut down all the trees, instead of just some, to actually get the blocks they need.
Hmm, interesting. The ones in my inventory and some shop stands I looked at definitely started as balanced but changed to charger. I’ll check on a character I haven’t spent skills on yet and do it in a shop, see if I get the same as you.
Edit: I get a mix of results: (No skills)
In inventory - Diamond = Heavy Charger / Sapphire & Ruby = Balanced Charger
In a shop stand - Gold = Heavy Repeater / Diamond = Heavy Charger / Iron = Heavy / Copper = Swift
Maybe old items kept old names? If so I guess wouldn’t be a problem at launch as they’ll be gone.
When you switch a tool in your hand mid animation, the visuals don’t change but the tool does under the hood.
Take an Axe, punch wood, switch axe for a hammer (e.g. with mouse wheel) while the animation of the axe is still running.
Boom: you got an axe that has the properties of a hammer! (e.g. is effective against stone)
(Not really, because it’s just a visual bug … but, you know …)
I think the resource distribution isn’t working right. I can’t find coal in caves (just some soft coal once in a while), found a lot of ancient tech remnants on Solum really fast (around 200 in under 30 min), tech components at altitude 53, and gold and silver ores really high in altitude.
Also, all light sources seem bugged (torches, lanterns, light cube, and gleam) such that if you have one equipped and emitting light, then simply unequip it (q then click without dragging) the light persists until you switch items for that hand.
|
OPCFW_CODE
|
You must manage event data changes only for Business Intelligence (BI) reports.
Time points refer to frequencies (for example, every hour, every 2 days, every Saturday) that are defined and used by various jobs and schedulers. You can create time points there so that they can be used by reports, which then run at those frequencies.
An event signals that a particular situation has occurred in the system, and specific background processing that is waiting for this event must be activated accordingly. An event data change is associated with a process chain, which is a sequence of processes waiting in the background for an event. You configure an event data change by creating a variant of a process chain, defining a schedule, and activating it.
To create a variant for a process chain, and define the schedule, proceed as follows:
Go to transaction RSPC.
Choose Create to create a new process chain.
Enter a name for the process chain and choose Continue. The Insert a Start Process dialog is displayed. Create a process variant as the start process of the chain.
Choose Create and enter a name and description for the variant.
Choose Change Selections.
Choose Date/Time to schedule date and time.
Choose Periodic Job.
Choose Period Values. Select the schedule you want to use (hourly, daily, weekly, monthly, other periods).
Save your settings.
Click the Process Types icon to load all available process types.
Under Load Process and Post Processing, select Trigger Event Data change (for broadcaster) to insert the variant of the process chain.
Click the Create icon to create a new event and give it a name and a description. The InfoCube (InfoProvider) for which you created the event data change is displayed.
Press F4 to select the InfoCube (InfoProvider).
Save your settings. This inserts the newly created event data change in the process chain.
Choose Start Process and move the pointer over the event data change.
Click the Activate icon, and then the Activate and Schedule icon. Select the relevant application server to activate and schedule.
You can test whether the event data change is functional. Testing involves modifying data in an InfoCube.
To check whether the event data change is functional, proceed as follows:
Go to transaction RSRD_START.
Select the name of the InfoCube for which you have defined the event data change.
Click the Execute icon with ‘P_ONLINE’ checked.
To define time points and data change events, proceed as follows:
In transaction SPRO open the SAP Reference IMG and navigate to: and click the Activity icon.
Click Refresh to retrieve the latest list of system time points and data change events. The Status column shows the status of items that have been updated or deleted.
Check the Active checkbox for time points and data change events that should be used to configure reports. These time points and events are now available in the activity Manage Reports and their Properties.
|
OPCFW_CODE
|
# |**********************************************************************
# * Project : BB200
# * Program name : module_5b.py
# * Author : Geir V. Hagen (geha0002)
# * Date created : 2018-04
# * Purpose : Generate private, public key and address
# |**********************************************************************
# IMPORTS
from pycoin import ecdsa as secp, encoding
import hashlib
import binascii
import codecs
import os
# |*************************************************
# Method : getDoubleSha256()
# Author : Geir V. Hagen (geha0002)
# Date : 2018-04-09
# Purpose: Support function for getting a
# double 256 hash of the in-value.
# |*************************************************
def getDoubleSha256(value):
return hashlib.sha256(hashlib.sha256(binascii.unhexlify(value)).digest()).hexdigest()
# |*************************************************
# Method : getHash160()
# Author : Geir V. Hagen (geha0002)
# Date : 2018-04-09
# Purpose: Support function for getting a
# 160 hash of the in-value.
# |*************************************************
def getHash160(value):
return hashlib.new('ripemd160',hashlib.sha256(binascii.unhexlify(value)).digest()).hexdigest()
# |*************************************************
# Method : GeneratePrivPubAddressData()
# Author : Geir V. Hagen (geha0002)
# Date : 2018-04-08
# Purpose: Generate the "start" point for
# calculating the private and public keys
# and the Bitcoin addresses.
# |*************************************************
def generatePrivPubAddressData():
#--------------------------[ ECDSA ]------------------------------#
# Creating the gPoint (also known as G, in the formula P = k * G)
gPoint = secp.generator_secp256k1
# Randomize a string of n random bytes for getting the Secret Exponent (also known as k, in the formula P = k * G)
rand = codecs.encode(os.urandom(32), 'hex').decode()
secretExponent = int('0x' + rand, 0)
#secretExponent = 10 # Uses 10 for test, to see that everything matches up!
# Calculate the public key (point) (also known as the P, in the formula P = k * G)
publicKeyPoint = secretExponent * gPoint
#-----------------------------------------------------------------#
#-------------------------[ PRIVATE KEY ]-------------------------#
secretExponentHexified = '%064x' % secretExponent
# 80 = mainnet, 01 = compressed public key should be generated
data = '80' + secretExponentHexified + '01'
# 4 bytes, 8 hex
checkSum = getDoubleSha256(data)[:8]
data = data + checkSum
wif = encoding.b2a_base58(binascii.unhexlify(data))
#-----------------------------------------------------------------#
#-------------------------[ PUBLIC KEY ]--------------------------#
# This encoding is standardized by SEC, Standards for Efficient Cryptography Group (SECG).
# Uncompressed public key has the prefix 0x04
x = '%064x' % publicKeyPoint.x()
y = '%064x' % publicKeyPoint.y()
uncompressedPublicKey = '04' + x + y
#print('Public key, uncompressed', uncompressedPublicKey)
# Compressed public key has the prefix 02 if y is even, 03 if y is odd
compressedPublicKey = ('02' if publicKeyPoint.y() % 2 == 0 else '03') + x
#print('Public key, compressed', compressedPublicKey)
#-----------------------------------------------------------------#
#-------------------------[ BITCOIN ADDRESS ]---------------------#
# Add the version byte 00 and get the hash160 of the publicKey from the uncompressed publicKey
uncompressedH160WithVersion = '00' + getHash160(uncompressedPublicKey)
# Add a 4-byte checksum from double SHA-256 hashing
checkSum = getDoubleSha256(uncompressedH160WithVersion)[:8]
uncompressedH160WithVersion = uncompressedH160WithVersion + checkSum
# Convert to base58
bitCoinAddressUncompressed = encoding.b2a_base58(binascii.unhexlify(uncompressedH160WithVersion))
#print('Bitcoin address (uncomp):', bitCoinAddressUncompressed)
# Add the version byte 00 and get the hash160 of the compressed publicKey
compressedH160WithVersion = '00' + getHash160(compressedPublicKey)
# Add a 4-byte checksum from double SHA-256 hashing
checkSum = getDoubleSha256(compressedH160WithVersion)[:8]
compressedH160WithVersion = compressedH160WithVersion + checkSum
# Convert to base58
bitCoinAddressCompressed = encoding.b2a_base58(binascii.unhexlify(compressedH160WithVersion))
privPubAddressData = (secretExponent, secretExponentHexified, wif, uncompressedPublicKey,
compressedPublicKey, bitCoinAddressUncompressed, bitCoinAddressCompressed)
return privPubAddressData
#-----------------------------------------------------------------#
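The Base58Check step that pycoin's `encoding.b2a_base58` performs for the WIF and address strings above can be reproduced with only the standard library (an illustrative sketch of the encoding, not a replacement for the module):

```python
import hashlib

ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58check(payload: bytes) -> str:
    """Append a 4-byte double-SHA256 checksum, then encode in Base58."""
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    data = payload + checksum
    n = int.from_bytes(data, "big")
    out = ""
    while n > 0:
        n, rem = divmod(n, 58)
        out = ALPHABET[rem] + out
    # Each leading zero byte is encoded as a leading '1'
    pad = len(data) - len(data.lstrip(b"\x00"))
    return "1" * pad + out

# A version-0x00 payload (mainnet address) always starts with '1'
addr = base58check(bytes.fromhex("00") + bytes(20))
print(addr[0])   # 1
```

This is why mainnet addresses begin with '1': the 0x00 version byte is a leading zero byte, which Base58Check maps to the character '1'.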
|
STACK_EDU
|
Medical device software and Software-as-a-Medical-Device (SaMD) is an exploding industry, and StarFish Medical is at the forefront of this new and exciting market. Do you enjoy absorbing new technologies and understanding how complex systems interact? Do you want to work with distributed computing, cloud, AI, REST APIs, and embedded devices? Does the idea of deep dives into cryptography, human-device interfaces, or 3D rendering excite you?
If you like the idea of turning cutting-edge technology into marketable Software Medical Devices, then StarFish has the opportunity of a lifetime waiting for you. We are seeking a dynamic Intermediate Software Developer to join our Software Development Team. This position is based in Toronto, Ontario or Victoria, British Columbia, where you will be working in an environment filled with both engineering excellence and truly meaningful employee engagement and transparency.
Desirable Technical Skills
We are looking for some or all of the following skill sets:
- 3-5 years in a high level language like C#, Java, C++ or VB.net
- Experience using scripting languages such as Python, PHP, Lua, Node.js
- Cybersecurity, cryptography, and encryption experience. Understanding of physical security is desirable
- UX/UI, user interface or website implementation.
- Cross platform skills (e.g. Two or more of Windows, Linux, MacOS, BSD)
- “Full stack” web development. Various stacks will be considered
- Database experience. RDBMS or NoSQL
- 2D and 3D model rendering
There is no set technology stack, so we require flexible individuals who enjoy learning and absorbing new technologies and applying them. We sometimes work on high-pressure projects where mistakes can be costly, so the individual will require fortitude and determination. We also have a great deal of fun making prototypes and ideas into reality. If you see yourself in this description, then we are interested in hearing from you!
StarFish Medical offers:
- The opportunity to work on cutting edge technology
- Satisfaction of helping others through medical device technology
- An organization with strong core values
- A team oriented/collaborative environment
- An award winning company culture & tight knit team
- Profit sharing
- Competitive compensation
- Excellent benefits package
- Monthly All-hands meetings
- Active social committee
- 50% BC Transit cost sharing in Victoria, and TTC/Transit subsidy in Toronto
- Galloping Goose access and shower facilities for midday runs or biking to work in Victoria
Key Responsibilities:
- Help brainstorm, architect, and engineer complex software applications and systems for medical devices
- Set up, configure, and maintain development environments
- Implement software controls, standards, and processes
- Write and document software on various platforms
- Help maintain Medical Device Design History Files
- Help develop Detailed Design Specifications and Verification Plans
- Contribute to formal Design Reviews and Source Code Reviews
- Help develop software tasks and estimates for customer proposals
- Collaborate with other team members, disciplines, departments, and external development partners
- Research, source, evaluate, and apply new technologies, APIs, libraries, and standards for medical device software
- Mentor and/or share knowledge with others
- Perform other related duties, as required
Qualifications:
- Degree or Technical Certification in Computer Science/Engineering, Physics, or equivalent.
- Previous experience in an intermediate software position
- Mandatory: Excellent communication skills, both written and oral.
- 5 Years’ experience developing software for commercial products.
- Eligible to work in Canada.
- Experience developing firmware in a highly-regulated industry, such as: aerospace, medical or automotive.
- Experience working within a Medical Quality Management System (e.g. ISO13485, IEC60601-1, IEC62304)
- Experience in Fault Tolerance and Design for Testing.
How To Apply:
Qualified Intermediate Software Engineer applicants are encouraged to apply through the brand new StarFish Medical Job Portal with a resume and cover letter that clearly indicates how your education and experience meet the requirements of this position.
We thank all candidates who apply; however after initial acknowledgement, only those selected for further consideration will be contacted.
|
OPCFW_CODE
|
Indeed, significant efforts have been directed toward developing data sets that pose vision and language challenges (Antol et al. 2015; Chen et al. 2015). A key focus in existing resources has been diverse and realistic visual stimuli. For example, the Visual QA (VQA) data set (Antol et al. 2015) includes 265K COCO images (Lin et al. 2014), which contain dozens of object categories and over a million object instances. Questions were collected via crowdsourcing by asking workers to write questions given these images. While the collected questions are often challenging, answering them requires relatively rudimentary reasoning beyond the complex grounding problem. Understanding how well proposed approaches handle complex reasoning, including resolving numerical quantities, comparing sets, and reasoning about negated properties, remains an open question.
We address this challenge with the Cornell Natural Language Visual Reasoning (NLVR) data set (Suhr et al. 2017; Zhou, Suhr, and Artzi 2017). NLVR focuses on the problem of understanding complex, linguistically diverse natural language statements that require significant reasoning skills to understand. We design a simple task: given an image and a statement, the system must decide if the statement is true with regard to the image. Similar to VQA, and unlike caption generation, this binary classification task allows for straightforward evaluation. Figure 2 shows two examples from our data.
We use synthetic images to control the visual
input during data collection. Each image shows an
environment divided into three boxes. Each box contains various objects, either scattered about or
stacked on one another. We use a small set of objects
with few properties. This restriction enables us to
simplify the recognition problem, and instead focus
on reasoning about sets, counts, and spatial relations.
The grouping into three sets is designed to support
descriptions that contain set-theoretic language and
The key challenge is collecting natural language
descriptions that take advantage of the full complexity of the image, rather than focusing on simple
properties, such as the existence of one object or
another. The images support rich descriptions that
include comparisons of sets, descriptions of spatial
relations, counting of objects, and comparison of
their properties. But how do we design a scalable
process to collect such language?
Collecting the Data
We use crowdsourcing to collect descriptions from nonexperts. The key challenge is defining a task that will require the complexity of reasoning we aim to reflect. If we display a single image, workers will easily complete the task with sentences that contain simple references (for example, "there is a yellow triangle"). A key observation that underlies our process design is that discriminating between similar images is significantly harder and requires more complex reasoning. Furthermore, if instead of discriminating between images, the worker is asked to discriminate between sets of images, the task becomes more complex, and therefore requires the language to capture even finer distinctions.
These observations are at the foundation of a simple, yet surprisingly effective, data collection process. We generate four images to collect a description. We first generate two images separately by randomly sampling the number of objects and their properties. For each of the two images, we generate an additional image by shuffling the objects across the image. This gives us two pairs. The first pair includes the ini-
Figure 1. An Example Observation and Instruction Given to a Household Assistance Robot: "Take four of the larger plates from the middle shelf and put them on the table."
|
OPCFW_CODE
|
Is it convenient to change an app built with Django 1.7 to Django 1.3?
I have developed a Django app with Django 1.7 and Python 2.7 and want to deploy it on PythonAnywhere's free version, but PythonAnywhere does not support it.
It only supports Django 1.3 with Python 2.7. What changes do I have to make to run my code there with Django 1.3?
Alternatively, if anybody has any other option for deploying a Django 1.7 app, please suggest it.
I have also deployed my Django 1.7 app with the third option selected (Python 2.7, Django 1.7); the output is: link to my deployed app
Heroku offers a free tier...
Yes, I had tried Heroku, but it's not working; there was some problem while deploying. Its link is: https://stackoverflow.com/questions/28739598/error-in-deploying-django-website-on-heroku
PythonAnywhere dev here- you can actually install the version of Django that you want using a virtualenv!
The commands to run from bash are just
mkvirtualenv Django17
pip install django==1.7
And then making sure that you set your virtualenv path correctly in your webapps tab! (in this case your path would be /home/your-user-name/.virtualenvs/Django17/)
ie. You would have to set the virtualenv path as shown in the picture above
ps: on an unrelated issue to 1.7 vs 1.3, the reason that you are seeing the Django welcome page of the hello world/congrats on your first webapp variety is because that is the sample webapp that we have made for you.
You would need to correctly set up the paths to point to your source-code for your actual website to be displayed.
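"Correctly set up the paths" in practice usually means editing the web app's WSGI file so it points at your own project. The sketch below is a rough illustration only; the project name and directories are hypothetical placeholders, not taken from the question:

```python
# Hypothetical WSGI configuration sketch for a Django project on a
# PythonAnywhere-style host. All paths and the project name below are
# placeholders -- substitute your own.
import os
import sys

# Make the project importable.
project_home = '/home/your-user-name/myproject'
if project_home not in sys.path:
    sys.path.insert(0, project_home)

# Tell Django which settings module to use.
os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings'

# Hand control to Django's WSGI application.
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
```

With the virtualenv path set in the web app tab and a WSGI file along these lines, the sample "congrats" page should be replaced by your own site.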
It is possible to install any version of Django in Pythonanywhere.
There is a link in Pythonanywhere wiki that provides detailed instructions to do it: Wiki
You can use Heroku; it works with whatever Django or Python version you want, it has a free plan, and you can make as many apps as you want.
As for your question about downgrading from Django 1.7 to 1.3: there are too many things to consider, and we don't know what you do in your project.
Another option is Docker; I have experimented with this.
You can even use Amazon: if you make an account you can get a 12-month free trial, 750 hours/month for free with one machine.
But I had tried Heroku, and it's not working; there was some problem while deploying. Its link is: https://stackoverflow.com/questions/28739598/error-in-deploying-django-website-on-heroku
The answer to your question is: You should follow the excellent release notes, but reverse them.
Each release note will tell you what’s new in each version, and will also describe any backwards-incompatible changes made in that version.
https://docs.djangoproject.com/en/1.7/releases/
But it is probably better find an alternative to pythonanywhere.
|
STACK_EXCHANGE
|
All discussion related to the Crimson Dark webcomic (at crimsondark.com)
Well, you know I'm keen on skill systems instead of class & level systems. But if you're really keen on the level system (to be honest I don't think it works outside of a specific subset of fantasy), the new kids on the block are all playing D&D 5E. Certainly a lot easier on the referee, I can tell you.
This - and the later discussion of Traveller and the like - sound like a better idea than anything d20 based. Although I'd be careful of trying to operate people and ships under the same rules.
AaronLee wrote: ↑Sun Nov 22, 2009 8:42 am
I actually hacked apart what I knew from a number of systems to make a vehicle and character based RPG. You basically had "lifespan brackets" that ran from birth to clinical death/mind upload (depending on what you chose.) The brackets lasted varying lengths and, after you went through the development/childhood stages, you took career brackets that gave you a paying job/skills/connections. Whatever your latest one was determined where you started the game.
The entire system was skill rather than class based, you could even buy skills and quirks if you wanted to go super-custom.
I'm certainly going to throw my hat in against class and level mechanics - class is, perhaps, tolerable as a decent enough way of packaging skills so that you don't accidentally forget to pick up a core competency for your character, but has a tendency to devolve into all sorts of silliness like tiering, cRPG derived "combat roles" and niching. Level strikes me as somewhere between annoying and toxic - in large part due to the power curve effect.
Lifepath generation has always intrigued me - although it's wretchedly hard to write, point buy has its merits as well as long as you don't become combat obsessed and assume that characters built on equal points are equally effective in a fight...
You might also look at a BRP style "improvement by use" process.
David wrote: ↑Sun Oct 09, 2016 1:36 pm
So, as I mentioned in my Patreon video, I'm in the very early stages of developing a CD RPG
I had very little tabletop RPing experience during CD's first run, but my time at BioWare introduced me to the wonderful world of Pathfinder, run by a truly amazing GM. Since then I've also played D&D and Dark Heresy, and I'm GMing my own pathfinder campaign with some friends.
I'm looking at a d20 system for the CD RPG, because I like my polyhedral dice, and I'm currently exploring ways to make ship-to-ship combat as deep and as rewarding as personal combat, without forcing players to specialise in one or the other.
Have you looked at Stars Without Number (Revised Edition) by Kevin Crawford? Check it out. There is a free version available through DriveThruRPG. Its roots are in D&D, but with some really nice focus on game playability and GM tools.
|
OPCFW_CODE
|
process targets
in the order they appear in the targets array
this allows looping any set of ops
they should be kept at order 0 to avoid them being reprocessed in the main queue
if you can edit the queue while you're in it... it's just as easy as pushing those entities to the end :pleading_face:
b̵̮̕y̶͉͉͗́̕ ̶̨̣̆̏s̷͙̦̈̊ö̸̬̫́ľ̷̲̭̩͈̔́͠o̴̽ͅm̴̡̟͒̀̊͝ó̶̠̲͈́̈́̚͜n̵̜̠̝̅̓,̴̖̘̹̪̿̅͊͑ ̷̟͉̭͓̀̓͐͠b̴͓̣̄̀̂̏ÿ̸̱̻̆ ̴̯̠͖̉̋̊͝t̶̪͍͍͂̇h̶͔̭́̽͊ͅe̶̩͕̞̎͘ ̵̠̻͉̥̊̆͗͠l̶̝̟̫̍̀́y̴̪̒̕r̴̗̿͒à̸̹͊,̶̧͍̉̽ ̵͚̭͙̾͋á̶̘̗̘̕n̵̘̪͙͊̂́d̵̡͉̲̣̈́̾̌͛ ̴̧̫̗̤̍̑́t̵̹̜̐̓̊̌h̷̼̗͍̑̊̏͑e̷̼͇̭̋͗ͅ ̴̤̼̈́̓͘̕b̷̮̹͋͒̓̔ȯ̸̩͎͕͚r̴̮̣̯̀̒r̷̻̳͈͚͌̿̈́̂ǒ̷̞̏͊̔w̷̱̹͗ ̴̻̻̳̈̾͘c̸̳̜̽͝h̸̡̙͈̤̿̀̒̇ẽ̸̥̤́̂̕c̶̠̻͆̽k̷͎̱̋̏̀ȅ̸͉ȓ̴͓̥̙̔̆͘,̶͈̓ ̴́̆͜͠ģ̶̖̝̥̋̆̕r̶̮̪̬͎̈́͐͒ā̷̧̉͘n̴̝̺̋͂̓̾t̸̥̐ ̷̭̤̮̽̇̿̆ṃ̵̐̾̈́͜͝e̸̡͉͖̤͒̔͛͘ ̶̡͐̍̈́t̴̝̝̰̏̀̈́̇ẖ̷̙̍̈͆̕i̴̢͉͋͛͝͠s̶̞̆ ̵̞͙̃p̷̩̦̏ỗ̵̩͠w̶̞̪̃͋̓ę̸͓͆r̷̠͐
does it work if we had a system.before(process) that looks for any loop_targets and appends its targets to the queue?
q.iter().flatten().chain(loopq.iter())
use resize for pushing the targets to the loop queue loopq.resize(n, targets)
we could just, yknow, implement it by looping. so providing something like 3 or 4 layers. who on earth, ever, in the history of everness, needs more than 4 layers of nested loops?
a system that loops the flattened queue (just like process now)
any entity that has loop_targets (just call it loop?) is stored for later
after we're done, we look at those entities
if any of them has a changed num, or changed targets then we need to update
we update by:
(they're in the order they were in the queue so that controls the order of those loops)
loop the entities, get each's targets, if target is op loop, get its targets (the inner ones will be of order 0 so they won't be in the initial list) and so on a few layers deep
just admire it f63f656
i'm happy with 90eba5e, as the target vec can hold duplicated id's it's easy to construct something like [a,b,c,a,b,c] and then loop that for an arbitrary number, resulting in [a,b,c,a,b,c,a,b,c,a,b,c,...] and it's also good to keep this system small.
maybe a different approach (in addition to this) can be an op (in process) that takes the target vec of an input and repeats that (this way it's not even limited to a certain depth of nested loops)
(target copying would be nice too, just like the array copying)
6bd9b9f yeah, repeat eliminates the need for a loop count, it'll be "process" instead
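The repeat idea and the flattened-queue ordering discussed above are easy to picture in a small sketch (Python here purely for illustration; the function names are made up and this is not the actual implementation):

```python
from itertools import chain

# Hypothetical sketch of the ideas above, not the real implementation.
# A "repeat" op takes a target list and repeats it n times, so [a, b, c]
# repeated gives [a, b, c, a, b, c, ...] and no loop count op is needed.
def repeat_targets(targets, n):
    return targets * n

# Processing order: the main queue (flattened) first, then the loop
# queue appended at the end -- q.iter().flatten().chain(loopq.iter()).
def flattened_order(queue, loop_queue):
    return list(chain(chain.from_iterable(queue), loop_queue))
```

Because the target vec can hold duplicated ids, constructing the repeated sequence is just list repetition, which keeps the system small.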
don't forget inexistent entity checking
good job mara <3
|
GITHUB_ARCHIVE
|
May 4, 2010
Two teams of students earned best project honors earlier this semester for capstone projects completed last fall in George Westinghouse Professor of ECE David Casasent's Digital Communications and Signal Processing course. In the class, groups of three students worked throughout the semester to identify a project of their own choosing and implemented it on real-time digital signal processing (DSP) hardware. The Best Project awards, judged and sponsored by DRS Signal Solutions Inc., recognize the best overall project developed during the semester. Last fall, students created projects so diverse and outstanding that two awards were presented.
In the first project, "Find My Face! Photo Album Auto-Tagging," ECE seniors Saurabh Sanghvi, Ping-Hsiu Hsieh and Maxwell Jordan developed a method to auto-tag their friends in photos on the social networking site Facebook. ("Tagging" is Facebookese for labeling friends in photos.) The project had two stages - face detection and face recognition. To detect faces, the team separated a color image into individual red, green and blue channels on the computer and sent this data to the hardware DSP. The DSP then detected the locations of the faces and eye pairs and sent the coordinates back to the PC. (The presence of eyes eliminates false face detections.) The students then used an algorithm that identified cropped face regions as either Facebook friends or non-friends.
In their project, "A Shot in the Dark: Acoustic Point-Source Localization of a Gunshot," seniors Pranay Jain, Arda Orhan and Oren Wright designed a system that can detect the position and presence of a gunshot using an array of microphones. While the concept wasn't new, Jain, Orhan and Wright were the first ECE students to successfully pull it off in Casasent's class. Critical steps in their implementation solution included real-time source detection and accurate estimation of time delays between microphones. The former necessitated real-time sampling of four microphones, which the group accomplished with DSP external hardware used in conjunction with the course's Texas Instruments C67 DSP Starter Kit. The system first sampled and filtered incoming sound in real-time to detect the gunshot. Once the shot was detected, signal data was sent to a two-stage localization algorithm to determine the location of the shot. Time delay estimates between each microphone were then computed using a generalized cross-correlation algorithm, and these estimates were used to compute a near-field solution using the Gillette-Silverman algorithm. They then determined a far-field solution and compared it to the near-field results.
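The time-delay estimation step at the heart of such a system can be sketched with plain cross-correlation (the team used a generalized cross-correlation algorithm; this simplified version and its sample data are illustrative only):

```python
# Simplified time-delay estimation between two microphone signals via
# plain cross-correlation (not the generalized cross-correlation used
# in the project). Signals and the pulse positions are made up.
def estimate_delay(ref, delayed, max_lag):
    """Return the lag (in samples) that maximizes the cross-correlation."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = sum(ref[i] * delayed[i + lag]
                    for i in range(len(ref))
                    if 0 <= i + lag < len(delayed))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# A pulse at sample 10 in the reference arrives at sample 13 at the
# second microphone, i.e. a delay of 3 samples.
ref = [0.0] * 40
ref[10] = 1.0
delayed = [0.0] * 40
delayed[13] = 1.0
```

Delay estimates like this for each microphone pair are what feed a localization algorithm such as Gillette-Silverman.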
"The students challenged themselves with projects that contained many complex problems that engineers face every day," said Michael Kessler, a software engineer with DRS who helped judge the projects. "Each team used similar techniques we use at DRS to solve our own problems until they had a viable solution. They also leaned heavily on the labs taught by Dr. Casasent during the semester."
DRS Signal Solutions' Michael Kessler (center) presents Best Project awards to ECE seniors (l-r) Oren Wright, Arda Orhan, Pranay Jain, Ping-Hsiu Hsieh, Saurabh Sanghvi and Maxwell Jordan.
|
OPCFW_CODE
|
Worksheet protection is not a security feature. Once sheet protection is enabled, you can protect other elements such as cells, ranges, formulas, and ActiveX or Form controls. For a user to be able to use the spreadsheet, they must have access to the spreadsheet file and any password required to decrypt it. As mentioned, though, worksheet- and workbook-level protection are not regarded as real security by most experts. Even Microsoft acknowledges that worksheet and workbook protection is a 'display' feature and not a 'security' feature.
Next, password protect the entire workbook. There's no demonstration file—you won't need one. If you password protect an Excel file using a strong password, cracking tools could take years to break it. It's just a common-ish phrase. Figure A: Enter the password and note it in a secure place.
Please help and thank you! Other Resources You Should Check Out There is a lot of good information out there about password encryption and it's actually pretty interesting. You can also choose to uncheck both the Select locked cells option and the Select unlocked cells option and in this way, the user will not be allowed to even select the locked cells or unlocked cells in your worksheet. In order to remember your password, use a password manager program like LastPass, which is super secure. This will enable you to password protect the entire workbook and prevent anyone else from opening it and viewing its contents. Many thanks Andrew How do I grant only specific people to be able to open, view, edit an excel file? It also allows me to specify the file name from a cell reference. I have set up a workbook that is sent out to lots of different users.
Your best protection against this type of tampering or outright theft is to assign a ridiculously long password of random characters. If your file is password protected, you can check our tutorial. By won't let me I mean: using Save doesn't appear to do anything using Save As doesn't either do anything, the dialog is not displayed and if I am doing via the File menu then the File menu is exited and the previous ribbon tab is displayed i. I even thought perhaps I didn't save it as often as I thought, I know that I did but I remembered that I saved it at least once and I can't even find an Excel file that has been modified since Thurs!!! I bet you already know that Excel has two levels of hidden sheets, hidden and very hidden sheets. It's a first step effort, but certainly not the only step you should take to protect confidential and proprietary data. I have even signed into this person's computer as myself it's a big company network thing and tried to run the macro and it works fine, so there is nothing wrong with the hardware. Notice below that both sets of code have different Salt values which ended up giving the two spreadsheets different hash values.
I would assume I need to go to the properties, security settings but I'm not sure? However, the program reported that the worksheet was unprotected! Also on this, length of the password comes in handy as well. Microsoft took its Salt value one step further and made it variable so that every time a password is entered in by a user, the stored Salt value is different. Instead of password, you can try to use. However, that is something one has to live with when facing a situation where user input is needed. A better way to save Excel files from unwanted editing is by encrypting it so that a person who does not know the password for the workbook cannot even open it, or can only open it as a read-only file. You can later share the appropriate passwords with the team depending on the access they should be given. Be sure to use a complex password that is difficult to guess, but easy to remember so you don't have to write it down.
If value is longer - that is not a problem paste it as is. Microsoft Excel file encryption allows you to quickly and easily secure PowerPoint presentations. Office 2007 does encrypt macros don't ask me how or what algorithm. This question is related to. Users then Save As to have an archive copy of their spreadsheet. Do you think this is a problem with my computer, the excel program? Store the file itself in a secure location on your computer like an encrypted hard drive. Does anyone have any idea what could have happened to this file?? However, that is based on current technology.
Encrypting an Excel spreadsheet on Mac is different enough from encrypting a Word document that I chose to cover the two separately. Which level of protection should I use? If you just want to store and retrieve data from Excel which you need to protect, you can protect the Excel using password. . Looking at your old passwords, an attacker might see patterns, things they can use to fine-tune their guessing code. You need to store the encrypted data.
An encrypted disk image acts like a password-protected folder. Go to one of your Excel files on your computer and change its extension from. Is there some shortcut to turn off this highlight feature other than restarting my computer. This tip will not allow viewing hidden columns, adding, deleting or moving worksheets and showing source data for pivot tables thereby improving excel security. Let's assume that you have received such a worksheet and you need to copy a range of cells without unprotecting the sheet. However, in order to prevent users from deleting worksheets in your workbook, viewing hidden sheets, adding, moving or renaming sheets — you have to protect your workbook or more accurately the structure of your workbook.
At present there is no software that can break this encryption. I'm using Office 365 Excel desktop , but you can user earlier versions. Excel 2013 Increased Its Security With the release of Excel 2013, Microsoft made a more considerable effort to increase the protection of its workbooks and worksheets. So in this case, double layer of protection will be there. Although the terms security and protection are bantered about interchangeably, feature-wise in Excel, they aren't the same thing. Password protecting an Excel workbook at the file level controls access in two ways: It lets a user in, and it lets a user save changes. If you want to see whether an Excel file has password protection or not, check out the Info tab for the document and look at the Protect Workbook section.
Just more for you to keep in mind in regards to this. I have checked Macro Security level and that is the same as mine, Tools - Add-Ins is the same, In Visual Basic, Tools - References is the same as mine. One of the first and easiest methods is to password protect the entire sheet or workbook. Thanks very much for your time. Editor's note: In the video, Brandon Vigliarolo walks you through the steps of securing an workbook with a password in. You can agree on an account that both ends know and keep it empty till you need to share something.
|
OPCFW_CODE
|
This assignment develops familiarity with subprograms in assembly language. You will develop a set of ARM
assembly language functions which implement common mathematical operations.
It is worth 40 points (4% of course grade) and must be completed no later than 11:59 PM on Thursday, 11/7.
The deliverables for this assignment are the following files:
proj09.makefile – the makefile which produces proj09
proj09.support.s – the source code for your support module
proj09.driver.c – the source code for your driver module
Be sure to use the specified file names and to submit them for grading via the CSE handin system.
1. You will develop the ARM assembly language functions listed below:
int negate( int N );
int absolute( int N );
int add( int A, int B );
int sub( int A, int B );
int mul( int A, int B );
int divide( int A, int B );
int power( int N, int I );
int factorial( int N );
Those eight functions (and any “helper” functions which you develop) will constitute a module named
“proj09.support.s”. The functions in that module will not call any C library functions.
Function “negate” will return the negation of N.
Function “absolute” will return the absolute value of N.
Function “add” will return the sum of A and B.
Function “sub” will return the value of B subtracted from A.
Function “mul” will return the product of A and B.
Function “divide” will return the quotient of A divided by B.
Function “power” will return N raised to the Ith power.
Function “factorial” will return N!.
All functions will return the value 0x80000000 for error conditions.
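Before writing the ARM versions, it can help to pin down the error convention in a quick reference model. The sketch below is Python, not the required assembly, and the overflow checks are one reasonable interpretation of the 0x80000000 error rule:

```python
# Reference model (Python, for illustration only -- the deliverable is
# hand-written ARM assembly). Every function returns 0x80000000 on an
# error such as a negative argument or a result outside signed 32 bits.
ERROR = 0x80000000

def factorial(n):
    if n < 0:
        return ERROR
    result = 1
    for i in range(2, n + 1):
        result *= i
        if result > 0x7FFFFFFF:   # would overflow a signed 32-bit int
            return ERROR
    return result

def power(n, i):
    if i < 0:
        return ERROR
    result = 1
    for _ in range(i):
        result *= n
        if not -0x80000000 <= result <= 0x7FFFFFFF:
            return ERROR
    return result
```

A model like this doubles as a source of expected values for the literal test cases the driver module must contain.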
2. You will develop a driver module to test your implementation of the support module. The driver module will
consist of function “main” and any additional helper functions which you choose to implement. All output will be
Your driver module may not be written as an interactive program, where the user supplies input in response to
prompts. Instead, your test cases will be included in the source code as literal constants.
1. The functions in your support module must be hand-written ARM assembly language functions (you may not
submit compiler-generated assembly language functions).
2. Your program will be translated and linked using “gcc”. For example, the following commands could be used to
translate and link your program, then load and execute it:
The option “-march=native” allows the assembler to use “sdiv” instructions.
3. In order to interface ARM assembly language functions with C functions, you must follow certain conventions
about register usage.
The calling function will place up to four parameters in registers R0 through R3 (with the first argument in register
The called function must save and restore registers R4 through R11 if it uses any of those registers (the calling
function assumes that registers R4 through R11 are not altered by calling another function).
The called function must place its return value in register R0 before returning to the calling function.
Registers R12, R13, R14 and R15 are used by the system and their contents must not be modified by your functions.
|
OPCFW_CODE
|
import * as t from "..";
describe("or", () => {
test("converts both sides to typescript", () => {
expect(t.toTypescript(t.num.or(t.str))).toEqual("number\n | string");
});
test("accepts either of the given checks", () => {
const check = t.num.or(t.str);
check.assert(5);
check.assert("hi");
});
test("rejects non-matching values", () => {
expect(() => {
const check = t.num.or(t.str);
check.assert(true);
}).toThrow();
});
test("accepts any of the matching checks in a sequence", () => {
const check = t.num.or(t.str).or(t.bool);
check.assert(5);
check.assert("hi");
check.assert(true);
});
test("preserves exactness on the side that is exact", () => {
const check = t.exact({ hi: t.str }).or(t.subtype({ hello: t.str }));
expect(() => {
check.assert({
hi: "world",
extra: "uh oh",
});
}).toThrow();
check.assert({
hello: "world",
extra: "ok",
});
});
test("preserves slicing behavior on the side that is key-tracking", () => {
const check = t.subtype({ hi: t.str }).or(t.num);
expect(check.slice({
hi: "world",
extra: "sliced",
})).toEqual({
hi: "world",
});
expect(check.slice(10)).toEqual(10);
});
});
describe("and", () => {
test("accepts values that pass both of the checks", () => {
const check = t.subtype({ hi: t.str }).and(t.subtype({ foo: t.str }));
check.assert({
hi: "world",
foo: "bar",
});
});
test("accepts values that pass both of the checks with subtype behavior", () => {
const check = t.subtype({ hi: t.str }).and(t.subtype({ foo: t.str }));
check.assert({
hi: "world",
foo: "bar",
extra: "ok",
});
});
test("accepts values that pass both of the checks with exact types", () => {
const check = t.exact({ hi: t.str }).and(t.exact({ foo: t.str }));
check.assert({
hi: "world",
foo: "bar",
});
});
test("preserves exactness if both sides are exact", () => {
const check = t.exact({ hi: t.str }).and(t.exact({ foo: t.str }));
expect(() => {
check.assert({
hi: "world",
foo: "bar",
extra: "uh oh",
});
}).toThrow();
});
test("does not preserve exactness if the left side is inexact", () => {
const check = t.subtype({ hi: t.str }).and(t.exact({ foo: t.str }));
check.assert({
hi: "world",
foo: "bar",
extra: "uh oh",
});
});
test("does not preserve exactness if the right side is inexact", () => {
const check = t.exact({ hi: t.str }).and(t.subtype({ foo: t.str }));
check.assert({
hi: "world",
foo: "bar",
extra: "uh oh",
});
});
test("preserves exactness through .or calls", () => {
const check = t.exact({ hi: t.str }).and(
t.exact({ foo: t.str }).or(t.subtype({ test: t.str }))
);
// should pass: exact match
check.assert({
hi: "world",
foo: "bar",
});
// should fail: two exacts and-ed together
expect(() => {
check.assert({
hi: "world",
foo: "bar",
extra: "uh oh",
});
}).toThrow();
// should pass: exact and subtype and-ed together
check.assert({
hi: "world",
test: "bar",
extra: "ok",
});
});
test("preserves exactness through other .and calls", () => {
const check = t.exact({ hi: t.str }).and(
t.exact({ foo: t.str }).and(t.exact({ test: t.str }))
);
// should pass: exact match
check.assert({
hi: "world",
foo: "bar",
test: "test",
});
// should fail: exacts and-ed together
expect(() => {
check.assert({
hi: "world",
foo: "bar",
test: "test",
extra: "uh oh",
});
}).toThrow();
});
test("preserves inexactness through other .and calls", () => {
let check = t.exact({ hi: t.str }).and(
t.exact({ foo: t.str }).and(t.subtype({ test: t.str }))
);
// should pass: exacts and subtypes and-ed together
check.assert({
hi: "world",
foo: "bar",
test: "test",
extra: "uh oh",
});
});
test("works with non-keyed types", () => {
const check = t.num.and(t.value(5));
check.assert(5);
expect(() => {
check.assert(6);
}).toThrow();
});
test("preserves key slicing behavior of structs", () => {
const check = t.subtype({ hi: t.str }).and(t.subtype({ foo: t.str }));
expect(check.slice({
hi: "world",
foo: "bar",
extra: "sliced",
})).toEqual({
hi: "world",
foo: "bar",
});
});
test("fails on slices that fail exactness checking", () => {
const check = t.exact({ hi: t.str }).and(t.exact({ foo: t.str }));
expect(check.slice({
hi: "world",
foo: "bar",
})).toEqual({
hi: "world",
foo: "bar",
});
expect(() => {
check.slice({
hi: "world",
foo: "bar",
extra: "explode",
});
}).toThrow();
});
test("preserves slicing behavior through or calls", () => {
const check = t.subtype({ hi: t.str }).and(
t.subtype({ foo: t.str }).or(t.subtype({ test: t.str }))
);
expect(check.slice({
hi: "world",
test: "test",
extra: "sliced",
})).toEqual({
hi: "world",
test: "test",
});
});
test("rejects values that don't pass the first check", () => {
const check = t.subtype({ hi: t.str }).and(t.subtype({ foo: t.str }));
expect(() => {
check.assert({
foo: "bar",
});
}).toThrow();
});
test("rejects values that don't pass the second check", () => {
const check = t.subtype({ hi: t.str }).and(t.subtype({ foo: t.str }));
expect(() => {
check.assert({
hi: "world",
});
}).toThrow();
});
test("rejects values that pass neither of the checks", () => {
const check = t.subtype({ hi: t.str }).and(t.subtype({ foo: t.str }));
expect(() => {
check.assert({});
}).toThrow();
});
});
|
STACK_EDU
|
*(5/5/2011) Stumbled across some code I was playing with awhile ago to make this error easier to get by. I haven’t had time to reproduce this error to see if the following code fixes it, but if you’d like, you can try this code experimentation. Essentially I wanted to make this problem easier to deal with by allowing for a script to automatically clear the directories. I was later going to expand on re-running the script without a reboot after that too but never got that far. I’m not even sure if the current code works. But I figure if anyone wants to try… Here it is:
Open up LiteTouch.wsf in the Scripts directory. Search for
ElseIf oEnvironment.Item("LTISuspend") <> "" Then
Add the following code (please use ONLY for testing as it is completely untested)
'MM
Dim resp
resp = oShell.Popup("Task suspended." & VbNewLine & oEnvironment.Item("LTISuspend") & VbNewLine & VbNewLine & _
    "MM: This problem occurs if LiteTouch detects an incomplete or corrupt state within MININT/_SMSTaskSequence folders. " & _
    "Click YES to delete any MININT/_SMSTaskSequence folders to resolve this on next reboot.", 0, "Suspended", 4)
If resp = 6 Then
    'a to i
    For count = 97 to 105
        RunAndLog "rd " & Chr(count) & ":\_SMSTaskSequence /s /q", false
        RunAndLog "rd " & Chr(count) & ":\MININT /s /q", false
    Next
End If
'MM
Recompile the boot CDs and try it out the next time you have a regular CD encountering this error.
*(1/19/2010) Please note, after further investigation the problem actually lies much deeper and would appear to take a large amount of analyzing to figure out a cure to this. For now, the best way is to hit F8 when the error occurs, and run the following:
rd C:\_SMSTaskSequence /s /q
rd C:\MININT /s /q
Other drive letters may have to be substituted for the above commands. If I find anything new out, I will post it here*
The task sequence has been suspended.
LiteTouch has encountered an Environment Error (Boot into WinPE!)
Look familiar? I’m surprised of the few search results on such an error. The problem occurs because MDT will often not clean up MININT and _SMSTaskSequence on C drive. Don’t believe me? Hit F8 after loading your windows 7 PE disk to bring up a command prompt and navigate to C. If that turns out to NOT be the case you should adjust the BIOS order to have harddrive loaded first. If it persists, another solution is to run diskpart and clean the drive from the console.
A solution to prevent this, rather than a workaround? Modify your task sequence to include a couple of commands to clear those directories. Right-click your task sequence, go to Properties, then the Task Sequence tab. Click Add and create two 'Run Command Line' tasks. Place them in an appropriate area.
Have one with:
If Exist C:\_SMSTaskSequence\nul rd C:\_SMSTaskSequence /s /q
The other with:
If Exist C:\MININT\nul rd C:\MININT /s /q
'\nul' is not required, however it won't hurt. If you end up using these in a batch file, though, then you will want it: when a batch file checks for the existence (or not) of a directory with 'if exist' or 'if not exist' on a Windows system, it needs to append '\NUL' to the directory name.
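The same guarded-delete logic can be sketched cross-platform. Here is a minimal Python version of the idea (the function name and the use of a scratch folder are mine, not from the post; never point it at a real system drive without testing):

```python
import os
import shutil
import tempfile

def clean_deployment_dirs(drive_root, names=("_SMSTaskSequence", "MININT")):
    """Delete leftover MDT state folders under drive_root, if present.
    Mirrors the batch logic `If Exist <dir>\\nul rd <dir> /s /q` and
    returns the list of directories that were actually removed."""
    removed = []
    for name in names:
        path = os.path.join(drive_root, name)
        if os.path.isdir(path):   # the `If Exist ...\nul` check
            shutil.rmtree(path)   # the `rd /s /q` equivalent
            removed.append(path)
    return removed

# Demo against a scratch folder rather than a real system drive:
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "MININT"))
removed = clean_deployment_dirs(root)
print(removed)  # only MININT existed, so only it was removed
```

The existence check before `rmtree` plays the same role as `\NUL` in the batch version: deleting is skipped silently when the folder is not there.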
using UnityEngine;
namespace Nastar
{
public class AStarHeuristics
{
public static float Manhattan(AStarNode start, AStarNode end, float straightCost, float diagonalCost)
{
return Mathf.Abs(start.X - end.X) * straightCost +
Mathf.Abs(start.Z - end.Z) * straightCost;
}
public static float Euclidian(AStarNode start, AStarNode end, float straightCost, float diagonalCost)
{
var dx = start.X - end.X;
var dy = start.Z - end.Z;
return Mathf.Sqrt(dx * dx + dy * dy) * straightCost;
}
public static float Diagonal(AStarNode start, AStarNode end, float straightCost, float diagonalCost)
{
var dx = Mathf.Abs(start.X - end.X);
var dy = Mathf.Abs(start.Z - end.Z);
var diagonal = Mathf.Min(dx, dy);
var straight = dx + dy;
return diagonalCost * diagonal + straightCost * (straight - 2 * diagonal);
}
}
}
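For experimenting with these heuristics outside Unity, they are easy to port. This is an illustrative Python translation (plain `(x, z)` tuples stand in for `AStarNode`; `diagonal_cost` stays in every signature, mirroring the C# design, even where it is unused):

```python
import math

def manhattan(start, end, straight_cost, diagonal_cost):
    # diagonal_cost is unused, kept so all heuristics share one signature
    return (abs(start[0] - end[0]) + abs(start[1] - end[1])) * straight_cost

def euclidean(start, end, straight_cost, diagonal_cost):
    dx, dy = start[0] - end[0], start[1] - end[1]
    return math.sqrt(dx * dx + dy * dy) * straight_cost

def diagonal(start, end, straight_cost, diagonal_cost):
    dx, dy = abs(start[0] - end[0]), abs(start[1] - end[1])
    diag = min(dx, dy)        # moves that can be made diagonally
    straight = dx + dy
    # each diagonal move replaces two straight moves
    return diagonal_cost * diag + straight_cost * (straight - 2 * diag)

print(manhattan((0, 0), (3, 1), 1.0, 1.5))  # 4.0
print(diagonal((0, 0), (3, 1), 1.0, 1.5))   # 1.5*1 + 1.0*(4 - 2) = 3.5
```

The diagonal (octile) heuristic is the tightest of the three on an 8-connected grid, since it accounts for the cheaper combined cost of diagonal steps.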
The Matlab window is shown in the figure below. The different windows can be moved and resized as desired; they can even be separate windows. The Current Folder window shows the content of the current directory (selectable in the dropdown menu above), similar to Windows Explorer. The Workspace lists all variables currently available, so when you assign the value 5 to a, it appears in that window. Commands are typed in the Command Window. The Editor is used for all kinds of programming. The code in there can be run by pressing
F5 or clicking on the
Save and run (F5) button.
Some important example commands are shown in the following Table:
|a=5;||Assign 5 to the variable a (semicolon suppresses Command Window output)|
|a=[1,2,3,4,5];||Define a row vector (equal: a=1:5;)|
|a=[1,2,3,4,5]';||Define a column vector (' transposes a matrix)|
|b=rand(20,30);||Define 20×30 matrix with random numbers between 0 and 1|
|a=b(3:5,4:10);||Take row 3,4,5 and columns 4 to 10 from matrix b and assign it to a (a will be a 3×7 matrix)|
|for k=1:size(a,2) c(k)=a(1,k)*4; end||For-loop from 1 to 7 where values of a (row 1) are multiplied by 4 and assigned to c|
|c=a(1,:)*4;||The same operation as above but more efficient|
|x=0:0.01:4*pi; y=sin(x);||Create vector x from 0 to $4\pi$ with increment 0.01 and a vector y containing the corresponding sine values|
|plot(x,y,'-.bx');||Plots x versus y with a blue (b), dash-dot (-.) line and crosses (x) as markers|
|save('../vars/myVars.mat', 'a', 'b', 'c')||Saves the variables a, b and c to a .mat file in ../vars/|
|load('../vars/myVars.mat')||Restores variables a, b and c to Workspace|
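For readers coming from Python, the operations in the table above map closely onto NumPy. This rough side-by-side sketch is my own translation, not part of the Matlab material (note the shift from Matlab's 1-based, end-inclusive ranges to Python's 0-based, end-exclusive slices):

```python
import numpy as np

a = np.array([1, 2, 3, 4, 5])           # a=[1,2,3,4,5];
col = a.reshape(-1, 1)                  # a=[1,2,3,4,5]';  (column vector)
b = np.random.rand(20, 30)              # b=rand(20,30);
sub = b[2:5, 3:10]                      # a=b(3:5,4:10);   (0-based, end-exclusive)
c = a * 4                               # c=a(1,:)*4;      (vectorized, no loop needed)
x = np.arange(0, 4 * np.pi, 0.01)       # x=0:0.01:4*pi;
y = np.sin(x)                           # y=sin(x);
np.savez("myVars.npz", a=a, b=b, c=c)   # save('myVars.mat','a','b','c')

print(sub.shape)      # (3, 7), matching the 3×7 matrix in the table
print(c.tolist())     # [4, 8, 12, 16, 20]
```

As in Matlab, the vectorized `a * 4` is both shorter and faster than the equivalent element-by-element loop.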
The Matlab GUIDE is a drag-and-drop program for creating graphical user interfaces. It can be opened by typing
guide in the Command Window and hitting Return; a screenshot of an empty GUI is shown in the figure below.
This interface consists of all arranged objects (buttons, textboxes, popup menus, sliders, checkboxes, …) saved in a .fig file (e.g. myProgram.fig) and a corresponding .m file (same filename, e.g. myProgram.m) containing all callback functions.
Each object (e.g. a button) has a unique identifying name (Tag) which can be seen in the Property Inspector (double-click on the object) under 'Tag'. If a button is called 'button1', there exists an automatically created function in myProgram.m called button1_Callback. This function is called when the button is clicked.
There are also functions other than callback functions, depending on what has been done in the interface (e.g. button release, delete, resize functions, selection change in a button group, …).
The state of the interface is stored in one Matlab structure, called handles. The fields of this structure are object handles, one for each object (e.g. handles.button1). Such a field contains all properties (e.g. size, values, functions), which can be changed or retrieved with set(handles.button1, 'PropertyName', value) and get(handles.button1, 'PropertyName').
In this structure user data can also be stored, since this variable is available in all functions (handles is an input argument to all callback functions). Example: the user wants to load a number from a text file. This number can be saved in handles.userNumber. The handles structure has to be updated at the end of each function whenever its content has been changed, with guidata(hObject, handles);
Repeater allows users through MAC filter
I need to extend my wireless network using a Repeater. This works very well BUT - a device which is not included in my Pass-Through MAC list (Captive Portal) can get internet access through the repeater!
This is obviously a security issue.
The repeater itself is included in the Pass-Through MAC list - this is the only way I can get it to work. But this seems to give full internet access to all devices which connect to the network via this Repeater.
Is there any way around this?
Well, I would assume this repeater is actually NATing the traffic, so all clients connecting through the repeater appear to come from the repeater's IP and MAC.
And I have to ask - why would you be running 2.0-RC1 and not the current version?
Why you would repeat wireless traffic is another question.. This will at minimum halve your wireless bandwidth.. If you need to extend wireless coverage, the CORRECT way to do it is to add more access points covering the area you need, connected via a WIRE from your network to each AP.
What specific repeater are you using? Make and model?
I would look at adding APs vs using repeaters if it were my network.
I have to ask the question - will the current version of PFsense solve this problem? We have not upgraded because 2.0-rc1 works very well.
We repeat wireless traffic in this certain area of our site as running a cable is not possible. I should mention - we are a mission Hospital in rural Uganda, spread across a 30 acre area. Running cables to all areas is not an option.
The specific repeater is a TP-Link TL-WA 901ND
No, pfSense 2.1 is not going to fix what is not an issue with pfSense. But no matter where you're at, you're running an RC version for gosh sake ;)
What you're seeing is by design of a repeater..
You could try changing over to the bridged AP mode - this should bridge all traffic from any clients connected to it to your other wireless network, with the clients keeping their own MACs instead of your repeater's MAC being used for all traffic.
I am sure you are on a tight budget and all.. But what you're using is a home device without very much range.. There are much better antennas and much better APs designed for large coverage areas.
I would think running a cable in a "rural area" would be much easier - dig a bit of a hole.. run the Cat5 cable ;) The tiny ditch that Cat5 cable would need could be dug with a stick ;) If your AP is PoE, all you have to do is run the one wire.. You don't even need power in the area, and you can put it up in a tree ;)
This repeater of yours is probably not bridging the way you think. You need Linux with ebtables (or similar) to have a repeater truly bridge and pass thru the MAC addresses of the clients. If you use DHCP then you will probably see that all users behind the repeater have the same MAC address as the repeater. Very few "repeaters" act as L2 bridges. Most WDS setups, however, do. So if you can use WDS as a config option on your repeater, you should be able to get it working the way you want.
I am using a similar TP-Link router, the TL-WR740N, for the repeating function; it has a WDS bridging option but has similar problems: all devices on the LAN of this router get their own IP, but their MACs are the same as the router itself.
One major problem is that anyone with a single authenticated access can just use one such router/repeater, and many unauthenticated devices can then use the net.
I had posted a similar query in the following post:
Can't Find iPhone Simulator's SQLite Database : Using Magical Record
In my application's documents folder, I do not find my SQLite database
Library/Application Support/iPhone Simulator/6.1/"random app
id"/Documents/
^This folder is empty
MagicalRecord seems to be working, I just have no idea how to find the database.
I use the following to set up Magical Record
[MagicalRecord setupAutoMigratingCoreDataStack];
So where does MagicalRecord store its database?
You can follow the link http://stackoverflow.com/questions/24133022/ios-8-store-sqlite-file-location-core-data/27461267#27461267
MagicalRecord stores, by default, all data stores in the following location:
{App Folder}/Library/Application Support/{App Name from Info.plist}/{storeName.sqlite}
This is easily accessible from the simulator as well as documents.
I only have three files in this location instead of a .sqlite file: AppName, AppName-shm, AppName-wal
@GangstaGraham The wal file is the SQLite write ahead log, and the shm is the SQLite shared memory file. The "AppName" file is most likely your SQLite database.
I figured out that in the Store Named string I inputted "AppName" instead of "AppName.sqlite", I just thought it would add in the sqlite by itself, anyways thank you SO SO much, the database really helps in understanding what the program is doing, so it's great to be able to access it again. Thanks a lot! @rickerbh
Any reason why you put it in /Application Support/ instead of /Caches/?
You can log your SQLite file location using the following NSPersistentStore MagicalRecord addition:
[NSPersistentStore MR_urlForStoreName:[MagicalRecord defaultStoreName]]
If you know the name of the sqlite file then just do a search in OSX for that file to find the directory. Otherwise the file was never created.
Make sure you are setting up the CoreData stack correctly as per the documents.
+ (void) setupCoreDataStackWithAutoMigratingSqliteStoreNamed:(NSString *)storeName;
Do you know if Core Data needs a database file, or if that's optional? Because Core Data (through the Magical Record wrapper) is working.
CoreData can use sqlite as its persistent store but no, it does not require sqlite. It needs to be configured for whatever store type you choose.
Check here. It goes over setting up the CoreData stack (sqlite filename) https://github.com/magicalpanda/MagicalRecord
I did this and it still is not in the documents folder, searching for it, let's hope I find it
Nope, @MarkM I searched for AppName.sqlite in Finder, and got no results. :(
Check out this tutorial. Maybe it can help. http://maniacdev.com/2012/04/tutorial-the-basics-of-using-the-magical-record-core-data-library
Swift + Xcode 7 + simulator 9.0
Please go to your AppDelegate.swift
change
MagicalRecord.setupCoreDataStackWithStoreNamed("yourDBName.sqlite")
to
MagicalRecord.setupCoreDataStackWithAutoMigratingSqliteStoreNamed("yourDBName.sqlite")
Now you can locate the yourDBName.sqlite at
/Users/userName/Library/Developer/CoreSimulator/Devices/"random app id"/data/Containers/Data/Application/"random app id"/Library/Application Support/ProjectName
There is a useful tool: SimPholders (http://simpholders.com)
SimPholders makes it easy to access all the simulator app folders.
Do it yourself
- Tools to help analyze the game
Game root directory
📁 DATA | Game data.
📁 DRIVERS | Contains DOS drivers required to play music and sounds.
📁 GFX | Graphics.
📁 SAVE | Saved games and progress.
📁 VIDEO | Intro video.
📁 WORLDS | Custom maps.
📄 DOS4GW.EXE | DOS memory extender.
📄 INSTALL.SCR | Install script instructions text file.
📄 S2.EXE | Main game executable.
📄 S2EDIT.EXE | Map Editor executable.
📄 SETTLER2.EXE | English game launcher.
📄 SETTLER2.VMC | Virtual Memory Manager configuration text file.
📄 SETUP.EXE | Setup sound and music.
📄 SETUP.INI | Setup configuration file.
📄 SIEDLER2.EXE | German game launcher.
📁 ANIMDAT | Animation.
📁 BOBS | Carriers and workers.
📁 CBOB | Workers.
📁 IO | Fonts, graphics.
📁 MAPS | The Roman Campaign
📁 MAPS2 | World Campaign
📁 MAPS3 | Old Unlimited Play maps
📁 MAPS4 | New Unlimited Play maps
📁 MASKS | Masks
📁 MBOB | Buildings
📁 MISSIONS | Campaign mission scripts
📁 ONLINE | Game help strings
📁 SOUNDDAT | Sounds and music
📁 TEXTURES | Gouraud shading for each world type
📁 TXT | Game strings
📁 TXT2 | Credits and keyboard strings
📁 TXT3 | Map Editor strings
📄 BOOT_Y.LST | -
📄 BOOT_Z.LST | -
📄 BOOTBOBS.LST | -
📄 CREDITS.LST | -
📄 EDITBOB.LST | -
📄 EDITRES.DAT | -
📄 EDITRES.IDX | -
📄 IO.LST | -
📄 MAP_0_Y.LST | -
📄 MAP_0_Z.LST | -
📄 MAP_1_Y.LST | -
📄 MAP_1_Z.LST | -
📄 MAP_2_Y.LST | -
📄 MAP_2_Z.LST | -
📄 MAP00.LST | -
📄 MAP01.LST | -
📄 MAP02.LST | -
📄 MAPBOBS.LST | -
📄 MAPBOBS0.LST | -
📄 MAPBOBS1.LST | -
📄 MIS0BOBS.LST | Special graphics for a mission.
📄 MIS1BOBS.LST | Special graphics for a mission.
📄 MIS2BOBS.LST | Special graphics for a mission.
📄 MIS3BOBS.LST | Special graphics for a mission.
📄 MIS4BOBS.LST | Special graphics for a mission.
📄 MIS5BOBS.LST | Special graphics for a mission.
📄 REMAP.DAT | -
📄 RESOURCE.DAT | -
📄 RESOURCE.IDX | -
📁 PALETTE
╙─📄 PAL5.BBM | Greenland palette
╙─📄 PAL6.BBM | Wasteland palette
╙─📄 PAL7.BBM | Winter World palette
╙─📄 PALETTI0.BBM | Greenland palette (unused in Gold Edition)
╙─📄 PALETTI1.BBM | Wasteland palette (unused in Gold Edition)
╙─📄 PALETTI8.BBM | Winter World palette (unused in Gold Edition)
📁 PICS
╙─📁 MISSION | World Campaign mission selection countries
╙─📄 *.LBM | Various background graphics
╙─📄 WORLD.LBM | World Campaign mission selection screen
╙─📄 WORLDMSK.LBM | World Campaign mission mask for determining selection
📁 PICS2
╙─📄 CREDIT00.LBM | Credits background image
📁 TEXTURES
╙─📄 TEX5.LBM | Greenland texture
╙─📄 TEX6.LBM | Wasteland texture
╙─📄 TEX7.LBM | Winter World texture
╙─📄 TEXTUR_0.LBM | Greenland texture (unused in Gold Edition)
╙─📄 TEXTUR_3.LBM | Wasteland texture (unused in Gold Edition)
Where can I find the files?
You must own a copy of the game. You can get a copy easily from GoG.com. It is often discounted to 2.49 € from a full 9.99 € price.
Once you own a copy, the game may get installed to various locations. If you download a digital copy with the separate installer (without GoG Galaxy) on a Windows system, you will likely find the game installed at
C:\GOG Games\Settlers 2 GOLD\.
A bridge between the microscopic and the macroscopic
The fundamental question from which statistical mechanics has risen is the following: where does thermodynamics come from? In fact, we know that fluids and gases are made of particles (atoms or molecules) and in principle we could use the tools of classical mechanics in order to study their motion; therefore, we could theoretically describe the system at least from a microscopic perspective. We can however wonder how this microscopic point of view is related to the macroscopic description of systems given by thermodynamics. In other words: how do the thermodynamic laws we know come from the microscopic motion of particles? What we want to do now is exactly to establish this link, i.e. to derive the thermodynamics of a macroscopic system at equilibrium from its microscopic properties. This is the final purpose of equilibrium statistical mechanics. We now therefore outline the general theoretical framework that will be needed in order to develop this theory.
Let us consider an isolated system composed of $N$ particles, with volume $V$ and energy $E$. Since it is isolated, its energy, momentum and angular momentum are conserved; however, considering the system as fixed and still we can set $\vec{P} = 0$ and $\vec{L} = 0$, so that the energy is its only non-null conserved quantity. If we call $\vec{q}_i$ and $\vec{p}_i$, respectively, the position and momentum of the $i$-th particle, the dynamics of the system can be obtained from its Hamiltonian, which for particles of mass $m$ interacting through a potential $U$ has the form $\mathcal{H}(Q, P) = \sum_{i=1}^{N} \frac{\vec{p}_i^{\,2}}{2m} + U(\vec{q}_1, \dots, \vec{q}_N)$. In principle we could solve Hamilton's equations for all the particles, but this approach runs into two severe problems:
- The number of particles in the system is insanely huge, in general of the order of Avogadro's number, i.e. $N \sim 10^{23}$. We should therefore solve a system of approximately $6N \sim 10^{24}$ coupled differential equations, which is rather impossible (also from a computational point of view)
- Even if we could solve them the solutions of Hamilton's equations would give no significant information about the system; for example it is much more interesting to know the average number of particles that hit a wall of the system per unit time than knowing exactly which particle hits the wall at a given instant
Furthermore, a lot of interesting systems exhibit chaotic behaviour, namely their time evolution depends strongly on the initial conditions, making any exact solution of Hamilton's equations essentially useless.
Therefore, it is clear that a statistical treatment of many-particle systems is essential in order to obtain relevant information and ultimately derive their thermodynamics. The fundamental concept that allows one to develop such a statistical description is that of the ensemble, which we now introduce. Consider the generic many-particle system introduced earlier, and for the sake of simplicity call $Q = (\vec{q}_1, \dots, \vec{q}_N)$ and $P = (\vec{p}_1, \dots, \vec{p}_N)$ the set of all the coordinates and momenta of the particles. The $3N$-dimensional spaces where $Q$ and $P$ live are called, respectively, configuration and momentum space, while the $6N$-dimensional space where $(Q, P)$ lives is called the phase space of the system, often referred to as $\Gamma$. Once all the positions and momenta of the particles have been given (i.e. once we have all the possible information about its microscopic configuration), the whole system is identified with a unique point in phase space, sometimes called the microstate or representative point of the system, and as the system evolves (i.e. the particles move, thus changing their positions, and interact with each other, thus changing their momenta) this point moves in phase space, describing a trajectory. The exact solution of Hamilton's equations for this system would give us the expression of this trajectory, but as we have seen this is not really useful information. We therefore change point of view: if we look at our system from a "macroscopic perspective", in general it will be subject to some constraints, like the conservation of energy (in case it is isolated) or the conservation of volume, and therefore the macroscopic properties of the system have precise values. This suggests that we can define a macrostate of the system, i.e. describe it only through some of its bulk properties (which is exactly the approach of thermodynamics). We thus have two substantially different ways to describe the same system: a macroscopic and a microscopic one.
Now, for a given macrostate of the system there will be multiple microstates which are compatible with the first one, namely there are many microscopic configurations of the system that satisfy the same macroscopic constraints (have the same energy, volume etc.) and obviously they are all equivalent from a macroscopic point of view. The set of all the possible microstates which are compatible with a given macrostate of the system is called ensemble.
The approach of statistical mechanics consists, essentially, in studying the average behaviour of the elements of an ensemble rather than the exact behaviour of a single particular system.
Depending on the constraints set on a system its ensemble changes name, and in particular there are three kind of ensembles:
- microcanonical ensemble: when the system is completely isolated and has fixed values of energy $E$, volume $V$ and number of particles $N$
- canonical ensemble: when the system can exchange energy with its surroundings
- grand canonical ensemble: when the system can exchange energy and particles with its surroundings
We will now proceed to study the properties of such ensembles and see how we can link them with thermodynamics.
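The distinction between macrostates and microstates can be made concrete with a toy model that is not part of the text above: take $N$ two-level particles with single-particle energies $0$ or $\epsilon$, fix the macrostate by the total energy $E = n\epsilon$, and count the compatible microstates (a brute-force sketch, only feasible for small $N$):

```python
from itertools import product
from math import comb

def microstates(n_particles, n_excited):
    """Enumerate all configurations of two-level particles (0 = ground,
    1 = excited with energy epsilon) whose total energy is n_excited * epsilon.
    This set is the microcanonical ensemble of the toy model."""
    return [s for s in product((0, 1), repeat=n_particles)
            if sum(s) == n_excited]

# Macrostate: N = 4 particles, total energy E = 2*epsilon.
# All compatible microstates are macroscopically equivalent.
states = microstates(4, 2)
print(len(states), comb(4, 2))  # the count equals the binomial C(4, 2)
```

The enumeration makes explicit that many microscopic configurations (here $\binom{N}{n}$ of them) realize the same macrostate, which is precisely why averaging over the ensemble, rather than following one trajectory, is the productive strategy.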
If I had to pick one word for this book, I'd choose "thorough". Give me the luxury of a few more words and I'd add "sumptuous", "enthralling" and maybe even "riveting". Well, OK, maybe "riveting" wouldn't be most folks' choice, but I can really get into a well written book like this. From almost the very first page I learned things I never knew. What was even more interesting was learning the "why" behind things I already did know - that's where the "thorough" comes into play: you don't just learn the rules for Make's variable dereferencing, you learn why it works that way. Make doesn't get its own chapter, but it is explained in good depth, as are gdb and gcc itself.
After that, and a good overview of the GNU C library, there's a great chapter on memory debugging and the various tools available to help find your errant pointers. I don't program much in C anymore, so I was mostly unaware of these tools and certainly didn't understand the features and limitations that this chapter covered. After that, it's on to building and using shared libraries, with lucid explanation of the role of ldconfig and how sonames are linked to real libraries. Preloading libraries is covered using the example of zlibc, which was another thing I never knew existed (it can automagically uncompress files so that programs written with standard C "open" work transparently with zipped files).
Then we have the meat of the book, covering Linux system calls, file handling, signals, etc. A "ladsh" (Linux Application Development Shell) is presented and used as an example throughout. Networking, pseudo-ttys, date and time, random numbers, programming the virtual consoles - it's all here, and all well illustrated by excellent examples and sample code.
I was very pleased to see a full chapter on security. That's too often overlooked in programming books. That chapter is thorough also, even noting how preloading libraries or setting NLSPATH could be used to compromise setuid/setgid programs.
The final section of the book turns to development libraries (string handling, regexes, etc.). There I found yet another of the many things I had no knowledge of: the S-Lang library for terminal handling. In the chapter on dbm files, I learned about qdbm, an LGPL library for hashed databases. The final chapters cover dynamic loading and PAM.
I have neither the time nor the mental stamina to do C programming any more, but books like this make me wish that I did. Not that I'm any good at it, but there is tremendous satisfaction in crafting lower level programs that bend the system to your will. But beyond that, understanding HOW the lower level stuff all works can help you write better programs at a higher level and understand what they are really doing (which can be a great aid in debugging). So this book will be one I pick up and re-read regularly.
Tony Lawrence 2005/01/04 Rating:
Order (or just read more about) Linux Application Development from Amazon.com
Got something to add? Send me email.
More Articles by Tony Lawrence © 2011-04-29 Tony Lawrence
Combining diacritical marks like the comma above and acute accent with Latin base characters
I am developing a solution for MS Word (using VBA) and websites (using HTML/CSS/JS) enabling an efficient typing of character combinations that consist of multiple diacritical marks, such as œ̣̄̃́, for example.
A prototype solution has already been implemented, though I've stumbled across one single difficulty that I may not be able to solve without any support.
I need to display these characters which consist of the 'combining comma above' (U+0313) and 'combining acute accent' (U+0301). The current result I am getting is a stacked version c̓́, though I need the diacritics to be side by side. This is possible with Greek base characters like ἄ(03B1+0313+0301) for example, but not with Latin ones.
Even a standalone version exists: ῎ (U+1FCE), but sadly it is not combinable.
How can I solve this problem?
Just curious; which characters in which languages have that diacritic?
@MrLister some Latin- and Greek-based languages make use of these diacritics in order to specify the exact pronunciation. But I have no linguistic background tbh, so don't take this at face value; I'm just here to program this
For the specific case where the combo looks like U+1FCD or U+1FCE, in Word you can use an EQ field, e.g. { EQ \o(a,῎)} or using a Math Equation. e.g. copy the following MathML and paste it into a Word document, then make it Inline: a῎. The first of those won't help you on the Web, but the MathML representation might, depending on the browser (my test put the accent too high). Perhaps CSS can help. Experiments using MathML multiple accent characters not so successful.
BTW I suspect a lot of people here would not regard this as a "programming question" as far as SO is concerned - possibly why it has been downvoted.
okay I am going to try this stuff out and update this thread based on my findings; thanks everyone!
@MrLister the italian language uses it (or at least has used it the past)
@MrLister I found the letters C and G with comma and acute accent above in the orthography used by the
Centro di dialettologia e di etnografia (CDE) for dialects of Lombard spoken in Switzerland (example: https://www4.ti.ch/fileadmin/DECS/DCSU/CDE/pdf/pubblicazioni/LSI-Guida_grafica_consultazione.pdf ) as well as one orthography for Ladin (see http://diva-portal.org/smash/get/diva2:1336549/FULLTEXT01.pdf).
After reaching out to the Unicode Consortium this was the answer of one of their representatives:
I checked the Unicode code chart and consulted with other Unicode experts. The code point that should be used, in the view of the experts, is: U+0313 COMBINING COMMA ABOVE.
In the Phonetic Symbol Guide, the entry for “apostrophe” mentions its use for palatalization (by Slavicists), besides its use for ejectives or glottalized consonants, and states it can appear following the symbol or over the symbol.
However, I tried out U+0313 on various fonts, and see fonts don't work as you wish: the acute and COMBINING COMMA ABOVE stack (or collide), instead of appearing beside one another. To rectify the situation, you should contact the font providers and ask them to adjust the font - or do it yourself if you have the tools.
I hope this is helpful. (Using an already encoded Unicode code point will save 2+ years of waiting for a new character.)
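The code points under discussion are easy to inspect programmatically. A small Python sketch using the standard `unicodedata` module (the display of the combined glyph still depends entirely on the font, as the reply above notes):

```python
import unicodedata

# The sequence the question asks about: Latin base + comma above + acute
seq = "c" + "\u0313" + "\u0301"

for ch in seq:
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
# U+0063  LATIN SMALL LETTER C
# U+0313  COMBINING COMMA ABOVE
# U+0301  COMBINING ACUTE ACCENT

# Both marks carry canonical combining class 230 ("above"), which is why
# renderers stack them vertically rather than placing them side by side.
print(unicodedata.combining("\u0313"), unicodedata.combining("\u0301"))
```

The shared combining class 230 is the root of the stacking behaviour: without font-specific positioning rules, a renderer has no instruction to offset the second "above" mark horizontally.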
Our solution to this problem was to create a custom font that moves the U+1FCE character on top of the base character, effectively making it a combining one. This font will also be used for our web applications, so not just MS-Word.
I think this is a general problem that might not be practical to address as just a font issue, since other diacritic combinations do stack vertically. This particular combination of comma and acute accent above should maybe be added to Unicode. Would you happen to have textual sources showing this diacritic? I already found examples of usage in Lombard and Ladin (see comment under the original question above), but more would be useful for writing a proposal. Thanks!
In Word, Unicode 0315 is known as a 'combining comma above right'. The reverse form of this is Unicode 0314, which is known as a 'combining reversed comma above'. There are also Unicode 0312, known as a 'combining turned comma above' and Unicode 0313, known as a 'combining comma above'.
Why do you say "In Word"? These are the official code point names according to Unicode, and they have nothing to do with MS Word...
@macropod Well yes, that is true. But the 'combining reversed comma above' (0314) is still not the same as the required 'combining comma above left' (non-existent), as it is just the mirrored version of the 'combining comma above right' (0315), isn't it?
@macropod Also, when combining the mirrored comma with e.g. an acute tone mark, the diacritical marks interfere with each other
I said "In Word" because that's what you see in the Insert|Symbol dialogue box for these Unicode characters.
@macropod Yeah, I got that; it still does not solve the problem unfortunately. Or was this a reply to user lenz?
My previous comment was in reply to lenz - as I'd have thought was quite obvious. Regarding the conflicting diacritics, the behaviour is to some extent dependent on the order in which they're inserted. For example, you can insert Unicode 0315 followed by Unicode 0301 - the latter appears above the former - but not the other way around.
@macropod Sure thing! I still need to display the comma and the acute above the base character, but at the same height. For example like this (acute tone mark & comma above right), but with the diacritical marks swapped (so comma above left & acute tone mark).
For something like that you might use Unicode 0107 (small C with acute) preceded by Unicode 0315 (combining comma above right).
@macropod Sadly no, as the 0107 (small C with acute) character behaves the same as combining the separate characters (small c + combining acute accent). The important part here is the difference between the combining acute accent and the combining acute tone mark. The tone mark allows another combining character at the same height, which raises the interference problem described earlier when using the regular (not left) comma above. Using the acute accent (pre-combined as in 0107 or manually combined [0063 + 0301]) does not allow another combining diacritical mark at the same height, see ć̓
The sequence I suggested places the diacritic comma on the same level as the acute. Do make sure both employ the same font & point size.
@macropod Can you paste in the result as I did in my previous message? Because I tried it again, using the exact sequence you suggested, and keep getting this ć̓ as the result. Independently of the way I combine this character (with my macro or manually via Insert > Symbol) the result is the same.
Insert the Unicode 0107 first, then move the insertion point back one position before inserting the Unicode 0315. That way you'll get ̕ć (but with the result displayed better than it does here - both the ̕ and the ́ will be displayed at the same height).
@macropod Oh, I am so sorry, I was using 0314 the entire time; now I see what you're describing. This does raise another problem though: inserting the 0315 character before 0107 will bind it to the previous character (if it exists). So if a word starts with ̕ć that's no problem, but as soon as the 'c' character is within a word, for example 'ma̕ćro', you can see how it will be bound to the 'a' character in this case
Try Maʼćro, with the 'a' horizontal spacing condensed by 1/6th the point size and the 'ʼ' horizontal spacing condensed by 1/4th the point size. FWIW, I know of no latin-based or greek-based language that makes use of such a diacritic combination. Greek has its own extensive character set that includes all diacritic combinations known to me.
@macropod I've noticed that many Greek characters have this combination already, for example ὤ (U+1F64). Using Latin base characters and combining diacritical marks (as well as the combining acute and not the tone mark) causes the combination to not be displayed as desired: a̓́ (0061+0313+0301).
However, with Greek base characters the desired result is achievable: ἄ (03B1+0313+0301) or ἔ (03B5+0313+0301)
In addition to that a precomposed version of this exists as a regular character: ῎ (1FCE) that is sadly not combinable.
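For what it's worth, the Greek/Latin asymmetry described above can be checked programmatically. Here is a small Python sketch using the standard `unicodedata` module; the code points are the ones discussed in this thread:

```python
import unicodedata

# The combining marks discussed in this thread
for cp in (0x0313, 0x0301, 0x0315, 0x0314):
    print(f"U+{cp:04X}  {unicodedata.name(chr(cp))}")

# Greek alpha + comma above + acute composes into a single precomposed
# character under NFC normalization:
greek = "\u03B1\u0313\u0301"
print(unicodedata.normalize("NFC", greek) == "\u1F04")   # True: precomposed ἄ exists

# Latin 'c' has no precomposed form with comma above, so NFC leaves the
# marks stacked (and most fonts render them colliding, as noted above):
latin = "c\u0313\u0301"
print(unicodedata.normalize("NFC", latin) == latin)      # True: nothing composes
```

This confirms that Unicode already encodes the precomposed Greek combinations but offers no Latin equivalents, which is why the rendering has to be solved at the font level.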
So why are you trying to combine characters when all valid combinations are already available as single characters?
@macropod Well, it is not possible for Latin base characters. So a regular 'c' or a 'g' is neither combinable with ῎ (U+1FCE) (obviously, as it is a standalone character and not a combining one) nor with the 'combining comma above' (U+0313) + 'combining acute accent' (U+0301) in order to get the desired result (which is ῎ on top of them).
And who uses these obscure characters?
@macropod The comma above is quite usual in the older versions of phonological and phonetic transcriptions. The whole traditional Romance philology (dialectology) used it, naturally mainly in combination with the velars K and G that become palatal. In combination with the accent it means "palatalization and affrication". So yes, it is used in many texts (see above). It is quite frequent.
|
STACK_EXCHANGE
|
from mujoco_worldgen.util.envs.flexible_load import load_env
from mujoco_worldgen.util.envs.env_viewer import EnvViewer
def examine_env(env_name, env_kwargs, core_dir, envs_dir, xmls_dir='xmls',
                env_viewer=EnvViewer, seed=None):
    '''
    Loads an environment and allows the user to examine it.
    Args:
        env_name (str): Environment name. Does not need to be exact, since the load_env
            function searches the environments & xmls folders for any file names that
            match the env_name pattern.
        env_kwargs (dict): Dictionary of environment keyword arguments.
        core_dir (str): Absolute path to the core code directory for the project containing
            the environments we want to examine. This is usually the top-level git repository
            folder - in the case of the mujoco-worldgen repo, it would be the
            'mujoco-worldgen' folder.
        envs_dir (str): Relative path (from core_dir) to the folder containing all
            environment files.
        xmls_dir (str): Relative path (from core_dir) to the folder containing all xml files.
        env_viewer (class): Class used to render the environment. See the imported EnvViewer
            class for an example of how to structure this.
        seed (int): Environment seed.
    '''
    env, args_remaining = load_env(env_name,
                                   core_dir=core_dir, envs_dir=envs_dir, xmls_dir=xmls_dir,
                                   return_args_remaining=True, **env_kwargs)
    if seed is not None:
        env.seed(seed)
    assert len(args_remaining) == 0, (
        f"There are unused arguments left: {args_remaining}. There shouldn't be any.")
    if env is not None:
        env_viewer(env).run()
    else:
        print('"{}" doesn\'t seem to be a valid environment'.format(env_name))
        print("Error: couldn't match against any of the patterns. "
              "Please try a more specific name.")
        print("\n\nFailed to examine")
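The `args_remaining` assertion above guards against keyword arguments that the loader silently ignores. The underlying pattern can be sketched in isolation (hypothetical helper, not part of mujoco-worldgen):

```python
def consume_kwargs(known, **kwargs):
    """Split keyword arguments into those a loader understands and the
    leftovers, mirroring load_env's return_args_remaining contract."""
    used = {k: v for k, v in kwargs.items() if k in known}
    remaining = {k: v for k, v in kwargs.items() if k not in known}
    return used, remaining

used, remaining = consume_kwargs({"n_agents", "horizon"}, n_agents=2, hrizon=100)
assert used == {"n_agents": 2}
assert remaining == {"hrizon": 100}  # the typo'd kwarg surfaces instead of vanishing
```

Failing fast when `remaining` is non-empty catches misspelled environment arguments that would otherwise be dropped without any warning.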
|
STACK_EDU
|
In many scenarios, date and time information is loaded and stored in text format. Converting these values to the date/time type is a standard requirement in most business applications, whether for analysis or for better performance when querying the data by date values.
In SQL Server, we can do this in two general ways – using implicit or explicit conversion.
Implicit conversion is not visible to the end user: data is converted during load or retrieval without using any dedicated function or procedure.
Explicit conversions use integrated or user-defined functions or procedures, mostly by implementing the CAST or CONVERT built-in functions or their extensions.
This article will demonstrate implicit and explicit conversion methods. For explicit conversion, besides CAST and CONVERT, we'll show the newer additions to SQL Server: the TRY_CAST(), TRY_CONVERT(), and TRY_PARSE() functions.
Implicit conversions are not visible to end-users, which is demonstrated by the example below:
USE AdventureWorks2019
SELECT TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME
FROM information_schema.columns
WHERE '1' = 1
In SQL Server, when implicitly converting dates, we depend on date format and server language settings. The only exception is storing dates in ISO formats ( “yyyyMMdd” or “yyyy-MM-ddTHH:mm:ss(.mmm)” ). In this case, they are always converted, no matter what regional and language settings are defined on the server, as seen in the example below:
— This example will work since the date is in the ISO format
SELECT TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME FROM information_schema.columns WHERE GETDATE() > '20000101'
— This example will throw an exception since the date is in the DDMMYYYY format
SELECT TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME FROM information_schema.columns WHERE GETDATE() > '01012020'
A very detailed overview of implicit rules for date conversion is available in this blog post.
If you don’t want to depend on the input format and server settings, you can use explicit conversion functions, including CONVERT(), CAST(), and PARSE() with some extensions.
The CAST Function
When people encounter conversion problems for the first time, they first try the CAST() function – it is the most basic and easiest one to use, although lacking more advanced functionality.
CAST() takes an input value and converts it to the specified data type with an optional data type length. The example of usage, in the context of date conversions, is below:
SELECT CAST('06/08/2021' as date) as StringToDate , CAST(GETDATE() as VARCHAR(50)) as DateToString
The language-settings rule for implicit conversions applies here as well, so CAST will work correctly only on ISO formats or formats supported by the current regional/server language settings.
The CONVERT Function
If we need more functionality, especially if we want to specify the conversion format, we need to use the CONVERT function. It takes three input parameters – the destination data type, the input value, and the conversion format (optional parameter). If there is no format defined, it will act like the CAST function, but if there is any input, it will use that conversion style.
The example of using CONVERT with the custom format is below:
DECLARE @VarDate VARCHAR(10)
SET @VarDate = '06-08-2021'
SELECT CONVERT(DATETIME, CONVERT(VARCHAR, CONVERT(DATE, @VarDate, 103), 102))
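As an illustration of how much the style argument matters, the same ambiguous string yields two different dates depending on the style code (a hedged example; style 103 is the British/French dd/mm/yyyy format, style 101 is the US mm/dd/yyyy format):

```sql
-- The same string, interpreted with two different styles
SELECT CONVERT(DATE, '06/08/2021', 103) AS BritishStyle  -- 2021-08-06 (dd/mm/yyyy)
     , CONVERT(DATE, '06/08/2021', 101) AS USStyle       -- 2021-06-08 (mm/dd/yyyy)
```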
The PARSE Function
CAST and CONVERT are native SQL Server functions. With newer versions of SQL Server, we can also use the PARSE function, which is a CLR (.NET) function.
The basic syntax of the PARSE function is:
PARSE(<value> AS <data type> [USING <culture>]).
If the input parameter "<culture>" is not defined, PARSE behaves the same as the CAST function. But if we pass a proper culture value, PARSE will try to convert the string using it.
— Parsing string with Arabic culture setting
SELECT PARSE('06/08/2021' AS DATE USING 'AR-LB')
TRY_CAST, TRY_CONVERT and TRY_PARSE Functions
If we send invalid date strings to CAST, CONVERT, or PARSE functions, we will get an exception. Sometimes this is acceptable, but sometimes we want to process without errors. To support this scenario, SQL Server comes with three functions that check in advance if the value can be parsed. If so, they will return the converted value, if not – they will return NULL. The example is below:
— The first value will be converted, the second will be NULL, no exception thrown
SELECT TRY_CAST('06/08/2021' as date), TRY_CAST('01/01/0000' as date)
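TRY_CONVERT() also accepts the same optional style argument as CONVERT(), so a string that is valid in one style can come back as NULL in another (a small sketch):

```sql
SELECT TRY_CONVERT(DATE, '31/12/2021', 103) AS ValidBritish  -- 2021-12-31
     , TRY_CONVERT(DATE, '31/12/2021', 101) AS InvalidUS     -- NULL, no month 31 exists
```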
The List of Available Conversion Formats
You can get the list of all available conversion formats in the official online documentation. But sometimes it is more convenient to get it programmatically, since the formats change across SQL Server versions. The script below returns the list of all valid formats:
DECLARE @X INT = 0
DECLARE @DATE_TEST DATETIME = '2021-08-06 00:00:01.001'

CREATE TABLE #AvailableFormats (FormatOption INT, FormatOutput NVARCHAR(255))

WHILE (@X <= 300)
BEGIN
    BEGIN TRY
        INSERT INTO #AvailableFormats
        SELECT CONVERT(NVARCHAR, @X), CONVERT(NVARCHAR, @DATE_TEST, @X)
        SET @X = @X + 1
    END TRY
    BEGIN CATCH
        SET @X = @X + 1
        IF @X >= 300
        BEGIN
            BREAK
        END
    END CATCH
END

SELECT * FROM #AvailableFormats
DROP TABLE #AvailableFormats
When to Use CAST, CONVERT, or PARSE Functions
The table below shows an overview of the differences between these three functions:
| | CAST | CONVERT | PARSE |
|---|---|---|---|
| Description | Changes one data type to another | Changes one data type to another | Evaluates a string expression and returns the result in the SQL data type sent as an input parameter |
| Input values | Any list of characters | Any list of characters | String |
| Output values | Converted value in the requested data type | Converted value in the requested data type | Converted value in the requested data type |
| Possible transformations | Any two valid data types | Any two valid data types | Input has to be a string value; output can only be a number or date/time type |
| Requires .NET installed | No | No | Yes |
CAST has been available in SQL Server for a long time, and it is present in many other DBMS implementations. If you need portability and are satisfied with the limitations of this function, use CAST in your code.
CONVERT is very useful if you need to specify custom data formats since CAST does not support “style” arguments and you cannot use PARSE to convert from date to the Varchar (string) value.
Although there are drawbacks to using .NET functions, primarily related to performance, the necessity of installation on the server, and data type conversion limitations, you'll need PARSE if you have string inputs that cannot fit the provided custom formats.
One example is having days or months in custom string formats instead of numeric values. When you need to specify custom format logic, you cannot do it with CAST and CONVERT, but PARSE will suit.
DATEPART and DATENAME Functions
There is almost no business application that does not retrieve or save date/time data, at least for logging purposes. In most cases, we need to display day or month names in a string format, or we need to display some kind of calculation involving date and time, for example, week number.
SQL Server offers two functions that can help with similar activities – DATEPART and DATENAME. Both require two input parameters – the time part to be extracted and the input date value.
DATEPART returns a number, while DATENAME returns a string, such as the name of the weekday or the month. The example of all possible use scenarios of these functions is below:
DECLARE @InputValue DATETIME2(7)
SET @InputValue = '2021-08-08 13:05:00.0000112'

SELECT DATEPART(ISO_WEEK, @InputValue) AS ISO_WEEK
     , DATEPART(NANOSECOND, @InputValue) AS NANOSECOND
     , DATEPART(MICROSECOND, @InputValue) AS MICROSECOND
     , DATEPART(MS, @InputValue) AS MILLISECOND
     , DATEPART(SS, @InputValue) AS SECOND
     , DATEPART(MINUTE, @InputValue) AS MINUTE
     , DATEPART(HH, @InputValue) AS HOUR
     , DATEPART(DW, @InputValue) AS DAYINWEEK
     , DATEPART(WEEK, @InputValue) AS WEEK
     , DATEPART(DAY, @InputValue) AS DAY
     , DATEPART(DAYOFYEAR, @InputValue) AS DAYOFYEAR
     , DATEPART(MM, @InputValue) AS MONTH
     , DATEPART(QUARTER, @InputValue) AS QUARTER
     , DATEPART(YYYY, @InputValue) AS YEAR
     , DATENAME(ISO_WEEK, @InputValue) AS ISOWEEK
     , DATENAME(TZoffset, @InputValue) AS TZOFFSET
     , DATENAME(NANOSECOND, @InputValue) AS NANOSECOND
     , DATENAME(MICROSECOND, @InputValue) AS MICROSECOND
     , DATENAME(MILLISECOND, @InputValue) AS MILLISECOND
     , DATENAME(SS, @InputValue) AS SECOND
     , DATENAME(MINUTE, @InputValue) AS MINUTE
     , DATENAME(HOUR, @InputValue) AS HOUR
     , DATENAME(WEEKDAY, @InputValue) AS DAYINWEEK
     , DATENAME(WK, @InputValue) AS WEEK
     , DATENAME(D, @InputValue) AS DAY
     , DATENAME(DAYOFYEAR, @InputValue) AS DAYOFYEAR
     , DATENAME(M, @InputValue) AS MONTH
     , DATENAME(QUARTER, @InputValue) AS QUARTER
     , DATENAME(YYYY, @InputValue) AS YEAR
Building a Calendar Table
The standard case in data warehousing is the initial creation of a calendar table that would be later used to create time dimensions for reporting or data processing.
The code below is an example of how you can use these two functions to create such a table:
DECLARE @StartDate DATE = '01/01/2018', @EndDate DATE = '12/31/2021'

DECLARE @CalendarData TABLE (
    DateValue DATE PRIMARY KEY,
    MonthNO INT,
    DateNO INT,
    DateOfYear INT,
    WeekNO INT,
    DayOfWeekNO INT,
    NameOfMonth NVARCHAR(50),
    NameOfDay NVARCHAR(50)
)

WHILE DATEDIFF(DAY, @StartDate, @EndDate) >= 0
BEGIN
    INSERT INTO @CalendarData (DateValue, MonthNO, DateNO, DateOfYear, WeekNO,
                               DayOfWeekNO, NameOfMonth, NameOfDay)
    SELECT @StartDate
         , DATEPART(MONTH, @StartDate)
         , DATEPART(DAY, @StartDate)
         , DATEPART(DAYOFYEAR, @StartDate)
         , DATEPART(WEEK, @StartDate)
         , DATEPART(DW, @StartDate)
         , DATENAME(MONTH, @StartDate)
         , DATENAME(DW, @StartDate)

    SELECT @StartDate = DATEADD(DAY, 1, @StartDate)
END

SELECT * FROM @CalendarData
Data type conversion, especially date-time operations, needs to be done carefully and thoroughly tested. If the data format changes during application operations or data exchanges, it might cause crashes and exceptions.
It is crucial to be aware of possible changes and differences between development and production environments, especially in regional settings across instances and installations. If available, it is recommended to use the TRY functions and handle NULL logic programmatically, logging the issue rather than stopping application execution, unless the processed dates are key business information.
You can use the integrated functions, like DATEPART and DATENAME, rather than writing your own, since they are thoroughly tested and offer the best performance.

Last modified: March 18, 2022
|
OPCFW_CODE
|
<?php
/**
* This file is part of the GeoJSON package.
*
* (c) Lorenzo Marzullo <marzullo.lorenzo@gmail.com>
*/
namespace GeoJSON\Geometry\Tests;
use GeoJSON\Geometry\Point;
use GeoJSON\Geometry\Position;
use GeoJSON\Tests\AbstractTestCase;
use GeoJSON\Type;
/**
* Class PointTest.
*
* @package GeoJSON
* @author Lorenzo Marzullo <marzullo.lorenzo@gmail.com>
* @link https://github.com/lorenzomar/geojson
*/
class PointTest extends AbstractTestCase
{
public function testType()
{
$point = new Point(new Position(10, 20, 3));
$this->assertTrue($point->type()->is(Type::POINT()));
}
public function testCoordinates()
{
$position = new Position(10, 20, 3);
$point = new Point($position);
$this->assertTrue($point->coordinates()->equals($position));
}
public function testEquals()
{
$point1 = new Point(new Position(10, 20, 3));
$point2 = new Point(new Position(10, 20, 3));
$point3 = new Point(new Position(30, 10, 0));
$this->assertTrue($point1->equals($point2));
$this->assertTrue($point2->equals($point1));
$this->assertFalse($point1->equals($point3));
$this->assertFalse($point3->equals($point1));
$this->assertFalse($point2->equals($point3));
$this->assertFalse($point3->equals($point2));
}
}
|
STACK_EDU
|
from sklearn.model_selection import KFold, StratifiedKFold
from .layer import *
class gcForest1:
    def __init__(self, num_estimator, num_forests, num_classes, max_layer=100,
                 max_depth=31, n_fold=5, min_samples_leaf=1, sample_weight=None,
                 random_state=42, purity_function="gini", bootstrap=True,
                 parallel=True, num_threads=-1):
        self.num_estimator = num_estimator
        self.num_forests = num_forests
        self.num_classes = num_classes
        self.n_fold = n_fold
        self.max_depth = max_depth
        self.max_layer = max_layer
        self.min_samples_leaf = min_samples_leaf
        self.sample_weight = sample_weight
        self.random_state = random_state
        self.purity_function = purity_function
        self.bootstrap = bootstrap
        self.parallel = parallel
        self.num_threads = num_threads
        self.model = []

    def train(self, train_data, train_label, X_test, y_test):
        # basic information about the dataset
        num_classes = int(np.max(train_label) + 1)
        if num_classes != self.num_classes:
            raise Exception("init num_classes not equal to actual num_classes")
        num_samples, num_features = train_data.shape

        # working copies that grow with each layer's stacked predictions
        train_data_new = train_data.copy()
        test_data_new = X_test.copy()

        best_train_acc = 0.0
        best_test_acc = 0.0
        layer_index = 0
        best_layer_index = 0
        bad = 0

        kf = StratifiedKFold(self.n_fold, shuffle=True,
                             random_state=self.random_state)  # KFold / StratifiedKFold
        while layer_index < self.max_layer:
            print("\n--------------\nlayer {}, X_train shape:{}, X_test shape:{}...\n".format(
                str(layer_index), train_data_new.shape, test_data_new.shape))
            layer = KfoldWarpper(self.num_forests, self.num_estimator, self.num_classes,
                                 self.n_fold, kf, layer_index, self.max_depth,
                                 self.min_samples_leaf, self.sample_weight,
                                 self.random_state, self.purity_function,
                                 self.bootstrap, self.parallel, self.num_threads)
            val_prob, val_stack = layer.train(train_data_new, train_label)
            test_prob, test_stack = layer.predict(test_data_new)
            # keep the trained layer so predict()/predict_proba() can reuse it
            self.model.append(layer)

            # augment the original features with this layer's stacked predictions
            train_data_new = np.concatenate([train_data, val_stack], axis=1)
            test_data_new = np.concatenate([X_test, test_stack], axis=1)

            temp_val_acc = compute_accuracy(train_label, val_prob)
            temp_test_acc = compute_accuracy(y_test, test_prob)
            print("val acc:{} \nTest acc: {}".format(str(temp_val_acc), str(temp_test_acc)))

            # early stopping: give up after 3 layers without validation improvement
            if best_train_acc >= temp_val_acc:
                bad += 1
            else:
                bad = 0
                best_train_acc = temp_val_acc
                best_test_acc = temp_test_acc
                best_layer_index = layer_index
            if bad >= 3:
                break
            layer_index = layer_index + 1

        print("best layer index: {}, its test acc: {}".format(best_layer_index, best_test_acc))
        return best_test_acc

    def predict_proba(self, test_data):
        test_data_new = test_data.copy()
        test_prob = []
        for layer in self.model:
            test_prob, test_stack = layer.predict(test_data_new)
            test_data_new = np.concatenate([test_data, test_stack], axis=1)
        return test_prob

    def predict(self, test_data):
        test_data_new = test_data.copy()
        test_prob = []
        for layer in self.model:
            test_prob, test_stack = layer.predict(test_data_new)
            test_data_new = np.concatenate([test_data, test_stack], axis=1)
        return np.argmax(test_prob, axis=1)
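The early-stopping rule in `train` (stop after three consecutive layers without validation improvement, keeping the best layer seen so far) can be isolated into a small standalone sketch:

```python
def best_layer(val_accs, patience=3):
    """Return the index of the best validation accuracy, scanning layer by
    layer and stopping once `patience` consecutive layers fail to improve."""
    best_acc = float("-inf")
    best_index = 0
    bad = 0
    for index, acc in enumerate(val_accs):
        if acc > best_acc:          # strict improvement resets the counter
            best_acc, best_index, bad = acc, index, 0
        else:
            bad += 1
            if bad >= patience:
                break
    return best_index

assert best_layer([0.70, 0.80, 0.79, 0.78, 0.77]) == 1
```

Note that the improvement check is strict, matching the `best_train_acc >= temp_val_acc` branch above: a layer that merely ties the best accuracy still counts against the patience budget.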
|
STACK_EDU
|
I have a JavaScript function attached to an <asp:ImageButton> control; the script fires and works perfectly in IE but not in Firefox. Any assumptions?
(I am on .NET 1.1 and added the JavaScript in code-behind using Attributes)
I'm trying to implement animation in an app. The frame rate of the animation needs to be variable. To implement this, I have a thread which sleeps for 1/fps seconds and, when it wakes, invalidates the window. Then the OnPaint handler draws the next frame.
Whilst this approach works ok at low frame rates, when things speed up, the animation gets choppy (due to the way in which the message queue is processed).
Is there a better approach? I want to be able to draw at reasonably precise times. Would DirectDraw help? i've never looked at directx, but would be prepared to if it offered a solution to my problem.
I don't know how WMP does what it does. This comes to mind:
- use the right kind of timer
- use double-buffering
- make sure everything you need is in memory before you need it
- precalculate the next frame, so the only real-time thing to do is show it
- possibly work with thread priorities (not recommended if you are not familiar with it).
I have recently worked on a solution that is similar to your needs. I used threads that animated a user control containing other controls, in fact I had multiple user controls being animated on the same form, with different threads.
I used delegates to access a number of variables on the form from my threads.
Also in the form I declared a volatile integer like this:-
private volatile int componentOnPaint;
And a public method which will be used by a delegate to return the above value.
Then, in your override of OnPaint in the form, increment the componentOnPaint integer; this way you know how many times the form has been re-painted.
Before your thread has invalidated the form, get the componentOnPaint value, invalidate the form, and then sleep for a short time using Thread.Sleep(30) (this allows the form to be re-painted). At the next line of execution check the componentOnPaint value again; if it is greater than before, then continue. Otherwise loop and sleep again until the value is greater and the form is repainted. Then you would use Thread.Sleep again for a fixed time depending on the animation's frame length.
To make the animation smoother, use another delegate to get the delay time (milliseconds) required in between animation cycles. Use DateTime.Now to get the time before you invalidated your form, and use DateTime.Now to get the time after componentOnPaint was greater.
This worked for me, using volatile earlier means the variable can act as a semaphore accessible from any thread (hence volatile), so you know if the form has been drawn and you shouldn't miss any frames. The time manipulation should just make your animation smoother.
I think you should first factor in how complicated is the animation you are trying to execute.
Drawing in .NET is not the fastest solution by any means, and usually it's as slow as you can get. So your issue might be related more to .NET's drawing performance than to timing issues... when you demand high frame rates, maybe .NET's drawing just can't keep up, rather than the message queue failing to process the paint messages (as a matter of fact, Invalidate messages skip the application queue and are processed immediately, so queueing is not the issue).
For any kind of serious animation I would rely on DirectX or OpenGL. The .NET Drawing namespace does an adequate job when it comes to what it was meant to do, but animation was not on the list IMHO.
hope this helps
P.D.1: First off, before plunging into new technology, I'd try optimising the code. Even if .NET is slow, it might be enough in your case; maybe your drawing routines are just not coded well enough. Make sure you are disposing of all disposable objects you "own" in your drawing implementation, releasing correctly any unmanaged resources you might be using, etc. Also check if it is worthwhile to cache any of the objects you might be using and that way avoid creating and releasing them on every drawn frame, etc.
P.D.2: If you have to rely on some other solution then you dont have to plunge into 100% native code...check out Managed DirectX.
When I try to read a double value with the Console.Read method, I get a completely different value. In case of 0 I get 48, in case of 1 I get 49. I think these may be ASCII values. I also tried it with the Convert.ToDouble method, but it didn't work. How can I get the correct values?
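Those numbers are indeed character codes: Console.Read returns the Unicode code point of the single character read, so '0' comes back as 48. A minimal sketch of the usual fix is to read the whole line and parse it (class and variable names here are illustrative):

```csharp
using System;
using System.Globalization;

class ReadDoubleDemo
{
    static void Main()
    {
        // Console.Read() would return 48 for the character '0'.
        // Console.ReadLine() returns the typed text, which we then parse.
        string line = Console.ReadLine();
        double value = double.Parse(line, CultureInfo.InvariantCulture);
        Console.WriteLine(value);
    }
}
```

If the input may be malformed, double.TryParse is the non-throwing alternative.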
|
OPCFW_CODE
|
Hi Peter, apologies for the long hiatus from a response. I've been meaning to reply to you for a while.
I read Robbert's Dijkgraaf's piece, and found it good on the whole. There wasn't anything that I disagreed with, though most of the arguments offered were adjacent to the simulation/computational modelling topic rather than addressing it head on. He spoke of the connection between physics and the special sciences and engineering. I wrote an article exploring some of these possibilities myself, including the prospects of synthetic biology in engineering whole ecosystems.
As far as his optimism about physics, which I share, I tend to err on the side of appraising physics at what it aims to be rather than what it amounts to at the end of the day. To my mind, and this might be a very philosophical kind of framing/attitude, physics aims to both explain and describe nature (and explanations perhaps reduce to hidden descriptions). Insofar as that is the aim, this is where mathematical models fall short. Describing nature does not merely mean being able to predict it, even though I don't hold that the connection between approximate descriptions and prediction is accidental. Just to take one example, geometric spacetime is a mathematical model, but we're unable to properly square it with a description of what's fundamentally happening. It's a postulate that works with an increasing number of caveats, e.g. cosmological constant etc. Then there's the tension between time-symmetrical laws with the observed inexorably increasing entropy -- a tension that could be illusory or merely statistical. And finally, we tend to hold physics to this standard: it's not enough to discover the physical constants, we must also discover why they occupy the magnitudes that they do. There's got to be a fact of the matter that accounts for their being so. And yes there's a danger of explanatory infinite regress here.
I've also listened to the David Haig and Sean Carroll discussion by the way. It's a subtle topic, because it skirts the reduction/non-reduction dilemma. You might be familiar with the concept of teleonomy, which is basically teleology rendered in a purely physicalistic and evolutionary context: how do we explain that living organisms exhibit goal-directedness? Is it something to be explained away, something illusory, or an epiphenomenon? To give an example: sunsets are illusory in the sense what's really happening are orbital rotations -- so we explain them away. Is goal-directedness a similar kind of situation? Does it reduce to the causal-sequences of the sub-automata that comprise the organism? It's a tough question: by defending emergence, we're defending that composites of composing parts yield new causal powers. At least that's my view: goal-directedness can be explained in terms of cognitive competencies, but these, consciousness being a case in point, cannot be explained in my view without recourse to some global emergent property instantiated by the organism.
I'm an epistemological reductionist, in the sense that I look at the lower levels to explain why the higher levels behave or appear to behave as they do. But I'm not an ontological reductionist. I don't think the asymmetry that living organisms exhibit, namely the norm of preferring survival, is illusory but a real phenomenon. No other physical system exhibits this property: recursive maintenance of an internal milieu against a variable and potentially hostile, external one. "Hostile" itself is a value-laden word. So the naturalistic question is to ask how does this asymmetry arise in nature? By virtue of what facts is it explainable?
As far as I understand it, the high levels are real, they are not matters of heuristic description. Describing the organism as searching for food, can be cushioned in purely naturalistic terms, but doesn't negate the fact that the phenomenon is not illusory.
When David Haig says that causation occurs at the purely physico-chemical level, I think that's either equivocating on the notion of cause or displaying some conceptual confusion. It's like saying all causation happens at the quantum level. And this is precisely the question: what is the nature of causation? The higher levels are no less real than lower ones, but whether they hang together synchronously — the facts at the fundament determine the facts at higher levels — or diachronically — there's some independence between the levels accounted for by some phenomenon x — is still an open question.
|
OPCFW_CODE
|
About me

I'm a software developer with a passion for programming, design, and development. I've been working professionally as a developer since 2014, and I've passed through a series of roles and used various technologies to build solutions across desktop, web, and mobile platforms.
For most of my career I have utilized Microsoft technologies and techniques to develop elegant, creative technical solutions across all project phases. I'm comfortable in collaborative and independently-driven roles, I am a forward-thinking leader with refined analytical and critical thinking skills, and I can adapt and revise my strategies to meet evolving priorities, shifting needs, and emergent issues.
• Adept at combining ideas, software concepts, paradigms, and technologies to design new products and services and to resolve problems/issues.
• Skilled in analyzing and meeting customer requirements by creating and incorporating software solutions.
• Able to deliver technical support, including bug fixes and functional extensions for deployed solutions.
I look forward to discussing my background and your needs in detail, as I am confident that my unique experience will be of great use in meeting your immediate and future objectives.
My work experience
Software Developer at Eresoft - Employee 11-01-2013 - 02-12-2014
Software development intern assisting in the implementation of modules of a School management system using Delphi programming language and Embarcadero tools
Software Developer at Tedata - Employee 09-24-2014 - 06-02-2015
Technical Scope: Developed web applications using Asp.Net MVC, Entity Framework, and SQL Server.
Critically assessed existing code bases while maintaining and modifying them. Kept job knowledge current by studying state-of-the-art development tools, programming techniques, and computing equipment. Performed a variety of tasks, including aiding in educational opportunities and professional organisations, maintaining personal networks, reading professional publications, and protecting operations by keeping information confidential.
Evaluated information needs and adhered to the software development life cycle to develop software solutions.
Controlled operational feasibility by analysing problem definition, requirements, and proposed solutions.
Engaged team members in the design and development of database scheme for new applications and features.
Software Developer at Parkway Projects - Employee 06-15-2015 - 04-27-2017
Technical Scope: Developed web applications using Asp.Net web forms, NHibernate, and SQL Server. Collaborated with testing and operations to ensure delivery of quality software.
Interacted closely with highly skilled development team for modelling, designing, and executing software solutions along with completing several projects within the constraints.
Streamlined and led the deployment and integration of developed/packaged solutions in production environments and life service systems.
Delivered technical support and guidance, including bug fixes and functional extensions for deployed solutions.
Facilitated in the achievement of goals and objectives in accordance with company's vision.
Software Developer at Signal Alliance - Employee 04-10-2017 - 05-17-2019
Technical Scope: Developed cloud-based applications on Azure using Azure App Services, Asp.Net MVC, Asp.Net Core, Entity framework, and SQL server.
Held responsibility for devising well designed, reusable objects, and logical databases for clients along with coordinating complex information effectively to team and clients, while adhering to best practices and coding standards.
Credited with analysing and meeting customer requirements by creating and incorporating software solutions.
Recognised for attaining changing needs by designing, implementing, and revising project work plans.
Software Engineer at Aurea Software - Contract 07-15-2019 - 07-07-2020
Refactored existing code, such as rewriting code to conform to specific quality standards. Engaged the team to explore quality anti-patterns in code bases and created tickets using the Jira project management tool. Interacted with code quality reviewers and product chief architects of individual products to ensure changes met required specifications.
Automated parts of the work process, resolved issues using the Jira project tracking tool, fixed raised issues, crafted pull requests for issues, responded to quality reviews, and transitioned tickets through a predetermined workflow.
Streamlined and automated the work transition process by leveraging a combination of Jira and GitHub.
Enhanced productivity, including meeting my weekly target a day or two before the end of the week. Eradicated some errors raised due to the previous semi-manual process while also increasing quality score.
Software Engineer at N/A - Freelancer 07-07-2020 - 05-20-2021
Design and development of web applications.
Built an ecommerce system using Asp.Net Core for the API and Angular for the frontend
Built a social network site using Asp.Net Core for the API and Angular for the frontend
Developed an ecommerce system using Asp.Net Core for the API and Blazor for the frontend
Developed a fitness app using Angular and Firebase
Developed a Blog CMS application using Angular and Firebase
Built a Book Store Management application using Asp.Net Core for API layer and Blazor for frontend
|
OPCFW_CODE
|
Matrix Data, Specimens, & Typography
Monotype display matrices and Giant Caster matrices, generally 14 point to 72 point. The display casters can also cast from Monotype cellular matrices, but these (and general Monotype Specimen books not specific to the display machines) are in the ../../ Composing Typecasters -> Matrix Data, Specimens, & Typography. The Thompson can cast from all of these matrices up to 48 points.
The Monotype machines typically are versatile enough to break any categorizing scheme I can come up with. In this section I mean to collect Notebooks on Monotype machines used for the production of individual types for handsetting. But there are problems...
Problem 1 (distinguishing the Composition Caster from the Type-&-Rule Caster): The Type-&-Rule Caster was a variation on the Composition Caster. So a "pure" Composition Caster could cast either composed matter or sorts and fonts in text sizes (though later features pushed its upper size limit). A Composition Caster with the Display Type Attachment (9CU) could still do this plus cast sorts and fonts up to 36 point (using flat Lanston display matrices in the 14 to 36 point range). A "pure" Type-&-Rule caster had its composition abilities removed, and normally cast sorts and fonts in the "display" range, but could have the "Composition Matrix Sorts Casting (Type-&-Rule Caster) Attachment" (19CU) applied so that it could cast sorts and fonts in text sizes. Either style of machine could also be equipped not only with Lanston (= American) mold and matrix equipment, as would be normal in the US, but optionally with English Composition (Attachment 21CU) or Display (Attachment 22CU) equipment to use Monotype Corporation (= English) matrices. These are just inherently hard machines to classify!
Problem 2 (strip material): Most of these machines could also operate in "fusion" mode to produce continuous strip material. I think (but am not sure - I'm not a Monotype expert) that only the Material Maker could produce only material (not types). I'm therefore covering it in the Strip-Casters section. The machines which could operate both in "fusion" (strip material) and "non-fusion" modes are covered here.
Problem 3 (Thompson): Monotype bought Thompson. To complicate things, the Thompson had a (rare) material making attachment. I'll cover the Thompson elsewhere, in its own section within the Sorts Casters set of Notebooks.
All portions of this document not noted otherwise are Copyright © 2009 by David M. MacMillan and Rollande Krandall.
Circuitous Root is a Registered Trademark of David M. MacMillan and Rollande Krandall.
This work is licensed under the Creative Commons "Attribution - ShareAlike" license. See http://creativecommons.org/licenses/by-sa/3.0/ for its terms.
Presented originally by Circuitous Root®
|
OPCFW_CODE
|
If someone can prove Goldbach conjecture assuming the continuum hypothesis, do we consider the conjecture proved?
If someone can prove Goldbach conjecture assuming the continuum hypothesis, do we consider the Goldbach conjecture proved?
If ZFC+CH implies Goldbach, and Goldbach turns out to be false, then it would mean that ZFC+CH is not consistent; but we know that ZFC+CH is consistent assuming that ZFC is consistent...
What do you think?
Because the Goldbach conjecture is an arithmetic statement, it is absolute between any two models which agree on the natural numbers.
Now, given any model of $\sf ZFC$, $M$, there is a forcing extension $M[G]$ with the same ordinals (and in particular, the same natural numbers, which are just the finite ordinals), in which $\sf CH$ holds. Or, better yet, simply consider $L^M$, which is an inner model with the same ordinals (and, again, the same natural numbers), in which $\sf CH$ holds.
Therefore, if you can prove Goldbach, Riemann, or the ABC Conjecture, assuming $\sf CH$, you may as well have proved it. Using $L$ will also tell you that using the Axiom of Choice was redundant, so in fact the proof is in $\sf ZF$ and not $\sf ZFC$.
So, to sum this up, if you prove that $\sf ZFC+CH$ implies Goldbach's conjecture, and then you prove that Goldbach's conjecture is false, you've proved that $\sf ZF$ is inconsistent. Which, to my taste, is a far bigger result than Goldbach's conjecture (although others may disagree).
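The chain of implications in this answer can be written out explicitly (a summary sketch, writing $G$ for Goldbach's conjecture):

```latex
\begin{align*}
\mathsf{ZFC}+\mathsf{CH} \vdash G
  \;&\Longrightarrow\; L^M \models G \text{ for every } M \models \mathsf{ZFC}
     && (L^M \models \mathsf{ZFC}+\mathsf{CH})\\
  \;&\Longrightarrow\; M \models G \text{ for every } M \models \mathsf{ZFC}
     && (G \text{ is } \Pi^0_1 \text{ and } \omega^{L^M} = \omega^M)\\
  \;&\Longrightarrow\; \mathsf{ZF} \vdash G
     && \text{(completeness theorem; the argument relativizes to } L\text{)}
\end{align*}
```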
Thank you for the answer! I have another question. Can we say that if "ZFC+CH implies Goldbach", then "ZFC implies Goldbach"?
As I write in the third paragraph, if ZFC+CH proves Goldbach, then ZF proves it.
Thank you. Is it the same thing to say that "if ZFC is consistent, then some first order formula is true" and "there is a formal derivation of this first order formula in ZFC"?
What do you mean by "true", though, if you're not specifying a universe (or at least a meta-theory, in which we can interpret "true" as "provable", i.e. independent of the universe)? I feel like your question in this comment is an XY problem. You're asking about X, but you are really interested in an answer about Y.
I have questions that I don't know if they are the same question. So I was not sure how to formulate my original question. Let's say that I know that "if CH+ZFC implies Goldbach and ZFC is consistent, then Goldbach is true". I don't know if this is the same thing as "if CH+ZFC implies Goldbach and ZFC is consistent then there is a formal derivation of Goldbach in ZFC."
Yes, that is the same thing. Those are first-order theories. Apply Gödel's completeness theorem, and you're good to go.
I see. Thank you!
But is it possible that Goldbach is true in the real world but not in some non-standard model? That would stop us from using completeness theorem.
What is "the real world"? In principle, yes, it is possible that Goldbach is true only in some models of arithmetic. This would be an interesting situation, e.g., if it turns to be provable from ZF but not from PA.
Sorry I was wrong. Now I realized that if CH+ZFC implies Goldbach then Goldbach has to be true in every model of ZFC, so that we can always use completeness theorem.
By "the real world" I wanted to say the standard model of PA.
The standard model of PA depends on the universe you're working in. If you're working in a non-standard model of ZFC, then its standard model of PA is not necessarily the same as the one of its meta-theory.
@Jiu If your question is whether it is possible that Goldbach is true in the standard model of PA but false in some non-standard model of PA, then yes, as far as we know, this is possible; it is equivalent to saying that Goldbach is independent of PA, which is unknown. The reason Asaf Karagila asked for clarification is that sometimes people use "real world" to refer to the metatheory. Also you didn't say whether, by "model," you meant model of PA or model of ZFC.
|
STACK_EXCHANGE
|
NAudio - MediaFoundationReader: constructor doesn't take a delivered m4a-URL (from a youtube-Link)
I am trying to play an M4A (MP4 Audio) file directly from the internet using a URL.
I'm using NAudio with the MediaFoundation to achieve this:
using (var reader = new MediaFoundationReader(audioUrl)) //exception
using (var wave = new WaveOutEvent())
{
wave.Init(reader);
wave.Play();
}
This works well on two test systems with Windows 8.1 and Windows 10. But on my Windows 7 machine it is not working; I get an exception from inside the MediaFoundationReader constructor.
Initially, I was getting an ArgumentOutOfRangeException. I tested playing this m4a file in WMP and it was also unable to play it. I downloaded a codec pack and installed it. That helped with WMP but my code was still throwing an exception, albeit another one:
An unhandled exception of type
'System.Runtime.InteropServices.COMException' occurred in NAudio.dll
Additional information: There is more data available. (Exception from
HRESULT: 0x800700EA)
Any ideas what could be causing this and how I can fix it?
I would go to Debug\Windows\Exception Settings and configure it to Break In All Exceptions. Inspecting inner exceptions should give a better clue about what the problem might be.
Also, by using the source code from Codeplex, I'm able to step through the code for MediaFoundationReader() constructor using the debugger, with no problem. I don't have access to a Windows7 machine I could use to reproduce the exception. "Unfortunately" everything works fine under my Windows 10 machine.
Not able to reproduce on win7 sp1. Link to the file?
I just searched google for HRESULT 0x800700EA and found a bunch of sites claiming that this error is linked to corrupted Windows files. I don't know how credible these sites are but since cviejo can't reproduce the error, I consider it a possibility.
Are you sure the Win7 machine has a codec able to reproduce the file?
Could you share the prototype via github for instance?
jstreet:
Configuring the exception settings to "Break in all exceptions" didn't help me. It's the same exception and the inner exception is null.
cviejo:
I tried it with multiple links, it's not just one link causing the error.
Wilsu:
Maybe that's it ... I tried to run the app as an admin like Emanuele Spatola said, but that didn't help me, too. So I have no idea, what else could cause the error.
With some research I identified this:
0X800700ea can occur when your Windows operating system becomes
corrupted. There can be numerous reason that this error occur
including excessive startup entries, registry errors, hardware/RAM
decline, fragmented files, unnecessary or redundant program
installations and so on.
Can you try your program on another system and verify?
I tried it on my girlfriend's Win7 system, but other strange errors occurred there and it didn't work either. Maybe that system is corrupted too (both PCs are several years old...).
So I'll reinstall my system and try it again, because these errors are so weird... I think a corrupted system is the reason. In about 3 months I'll get a new, very good PC anyway.
Thanks for your time ;)
Sometimes the user doesn't have enough privileges to run COM Methods.
Try to run the application as Administrator.
|
STACK_EXCHANGE
|
Three Knights vs one Queen, which is better?
Suppose the end game contains three knights and king only for white, and one queen and king only for black. Which side is considered to be in a stronger position?
Value-wise, the knights are worth 9 points, while the queen is worth 8, so I would have considered white with three knights winning. However, I played this against the computer, and found my knights very ineffective, so I’m not sure.
If the answer depends on who is to play and the position of the pieces, that can be stated, but let’s not assume the pieces start in a highly specific configuration.
You might be interested: https://chess.stackexchange.com/questions/30135/when-is-a-queen-better-than-3-minor-pieces-or-vice-versa
It seems that it is trivial for the side with the queen to achieve a draw: just exchange the queen for one of the knights, and it should be a draw in most positions. Therefore, I think the side with the queen should not be worse.
Anyway, I hope that someone, who promotes a pawn to a knight, loses
@d4zed There are situations where a knight promotion is optimal. The trap line in the Albin Countergambit for example.
"while the queen is worth 8" citation needed
First point - the queen is typically also 9 points, not 8; and for 3N vs Q, the question of who is better is not very interesting, as it's obvious - the queen. It's obvious that in most cases (unless there's a special setup) the queen can easily just take one knight and achieve an immediate draw, as 2N cannot checkmate - the question is: can the queen win? Use tablebases for that.
Have you tried the other way round? Playing you with the queen and the computer with 3 knights?
The queen is not usually assigned a value of 8 pawns, rather often between 9 and 10 pawns. Of course the question still stands, assuming this roughly equal material balance who has the edge?
Let's consider two scenarios for how the position could simplify. In the first, the queen sacrifices itself for a knight, or is won by a knight fork in exchange for a knight. In this case one side is left with two knights against a bare king. With no other pieces this is an easy draw for the lone king.
The other scenario is the queen wins a knight. In this case there exists a fortress for the knights like so https://lichess.org/analysis/4k3/8/8/3NN3/4K3/8/8/q7_w_-_-_0_1 (e.g. Ke4 Ne5 Nd5 against a king on e8). The idea is the two knights block the opponent king from approaching. (as a funny aside, since both positions are a draw, in some sense blundering the queen for free in this position is not a mistake, it's a draw before and afterwards)
It is worth noting that this position might not always be reachable for the knight side, so if the pieces start out in unfortunate positions it may not be possible to reach this setup. Yet, with an extra knight the vast majority of positions will be a draw.
To conclude, this position is very drawish, neither side has almost any winning chances. In blitz both sides may have slight winning chances; the queen side might win a knight while keeping the opponent away from the fortress position, conversely it might blunder the queen to a knight fork without getting a knight back. However, overall I would still see the chances as roughly equal.
It might be interesting to look at adding more material for both sides. I suspect if you give both sides a pawn or two, the side with the three knights might have the advantage as knights are quite efficient at blocking checks, and their sheer number might overload the other side from defending their pawns sufficiently often, all the while pushing the own pawn. But without pawns, the knights are simply not enough to win.
Addendum: Since it was requested, here are the stats from the tablebase, note that those are very misleading since many positions are unlikely to be reached in practice:
With the knights to move, (1) draws, (2) wins for the knights, (3) wins for the Queen, (4) would be wins for the Queen but prevented by the 50 moves rule.
(1):<PHONE_NUMBER>
(2):<PHONE_NUMBER>
(3): 796423773
(4): 746382
With the queen to move:
(1):<PHONE_NUMBER>
(2): 42978066
(3):<PHONE_NUMBER>
(4): 1078938
As can be seen by the huge difference between the knights to move and the queen to move, the vast majority of these are decided by tactics straight away, which is why being to move is a huge advantage in a random position. In practice however, you wouldn't really consider them as static endgames but rather trading down into a smaller endgame straight away.
If you want to argue that way you can see that the queen indeed wins more often, likely because 3 knights have a lot of options to be very misplaced on the board.
It would be interesting to simply count the number of all drawn, lost and won positions by a tablebase (the data surely exists). I guess the number of forced forks is far lower than the number of the queen picking up two knights.
@HaukeReddmann added the stats, but these stats usually don't paint an accurate picture. As can be seen by the fact that here overall less than 50% are drawn, which clearly is not what you'd see in practice reaching those positions naturally. (well, if you naturally get 3 knights against a queen...)
To give an even clearer example of that fact, more than a quarter of the KR vs KR positions are not a draw. And clearly this is about the most drawish endgame that you could have.
@koedem In some of those, the turn player captures the rook immediately or wins it using a skewer check. If such positions are excluded (as you'd only classify the endgame once it reaches a quiet position) I wonder what percentage are draws.
@RosieF well yes, exactly. That is the point I'm making, these numbers are not very representative. I suppose one could look at positions where the next capture is at least 3 moves away or so?
Cf this question posted over a year ago, which I answered yesterday with stats.
I'd say that 100% of "naturally occurring" cases of KNNNvsKQ are wins for the 3 knights, since you would never promote to a knight if doing so didn't win you the game :)
Thanks for the great answer.
|
STACK_EXCHANGE
|
Chock full of awesome
Posted by Trixter on August 27, 2008
NVScene was everything I had hoped for and much more. Thanks to the money put behind the event by NVidia, the sound system and bigscreen was something to be in awe of: 1920×1080 on a screen about 30 feet tall. North American sceners got the chance to meet some of the modern greats, with representatives from Farbrausch, ASD, Plastik, and more.
The talks were all outstanding, even the “history of the scene” talk we could all give in our sleep. The demoscene.tv crew were busy running around doing interviews and live-cutting footage for your enjoyment, so they were understaffed for the actual talks and conferences. I missed Mentor’s talk :-( :-( due to a misunderstanding on my part about the schedule, and then I may have irritated him by asking him for his slides during the Spore talk when he was busy, so that was a flub on my part… I hope he releases his slides because they looked really awesome and I’d really like to learn what he had to say.
I am in the airport waiting for my delayed flight to be undelayed, so I thought I’d put up a quick summary of what I learned at NVScene. First, the obvious-to-Euros-but-not-Americans surprises:
- Americans and Euros can get along wonderfully in the demoscene. (By association, there can indeed be two NA demoparties in a year without the space/time continuum imploding.)
- Everybody has a chance to learn from each other, regardless of experience or skill.
- Computer graphics techniques are so universal that you can hold a conversation with any demoscener, even if you can barely understand each other due to English not being your native language.
Here’s what I learned that surprised me, mainly because I don’t write demos for modern platforms, only follow them:
- Realtime raytracing with fantastic quality is not only possible, but can be done entirely by the graphics card using pixel shaders (!).
- Most demos (and some 4K intros!) use a scripting/build system, and each major group has their own tools. One very interesting exception is ASD, whose coder writes all sections of the demo with the ability to render along any point in time (ie. f(x) where x is a float from 0 to 1 with 0 the start of the scene and 1 being the end). He said he likes to “scrub” through his demo using the mouse, and doesn’t mind that his scenes are hard-coded because it only takes him 3 seconds to recompile and run.
- Future of the scene for the next two years in two words: Ambient Occlusion.
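The ASD approach mentioned above — every scene as a pure function of normalized time — can be sketched in a few lines. This is a toy illustration of the idea (the function names and values are invented), not their actual tooling:

```python
# Toy sketch of time-normalized scene functions: each scene is a pure
# function f(x) with x in [0, 1], so any instant can be rendered (or
# "scrubbed" to) directly, with no dependence on previously rendered frames.

def lerp(a, b, x):
    """Linear interpolation between a and b for x in [0, 1]."""
    return a + (b - a) * x

def camera_pan(x):
    """Camera position at normalized time x: pans from (0, 0, 10) to (5, 2, 3)."""
    return tuple(lerp(a, b, x) for a, b in zip((0, 0, 10), (5, 2, 3)))

def render_at(scene, x):
    """Clamp x to [0, 1] and evaluate the scene -- this is all 'scrubbing' needs."""
    return scene(max(0.0, min(1.0, x)))
```

Dragging the mouse just maps cursor position to `x` and calls `render_at(camera_pan, x)`; hard-coding scenes this way stays workable precisely because evaluation is stateless.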
Polaris/ND and I tossed around a demo idea. Not sure if NVScene will be around next year, but if not, Block Party will be. So who knows.
I would like to publicly thank the organizers of Block Party for enabling me to attend NVScene. And, of course, I would like to thank Gloom, Gargaj, Steeler, and Temis for making NVScene possible.
|
OPCFW_CODE
|
I attended several great sessions at the Society of American Archivists conference last month. There is a wiki for the conference, but very few of the presentations have been posted so far…
One session I particularly enjoyed addressed the archiving of email – ‘Capturing the E-Tiger: New Tools for Email Preservation’. Archiving email is challenging for many reasons, which were very well put by the session speakers.
Both the EMCAP and CERP projects were introduced in the session.
EMCAP is a collaboration between state archives in North Carolina, Kentucky, and Pennsylvania to develop means to archive email. In the past, the archives have typically received email on CDs from a variety of systems, including MS Exchange, Novell Groupwise and Lotus Notes. One of the interesting outcomes of this work is software (an extension of the hmail software – see sourceforge) that enables ongoing capture of email, selected for archiving by users, from user systems. Email identified for archiving is normalised in an XML format and can be transformed to html for access. The software supports open email standards (POP3, SMTP, and IMAP4) as well as MySQL and MS SQL Server. The effort has been underway for five years and the software continues to be tested and refined.
CERP is a collaboration between the Smithsonian Institution Archives and Rockefeller Center Archives. This context has more in common with archiving email in the Bodleian context, where an email account is more likely to be accessioned from its owner in bulk than cumulatively. Ricc Ferrante gave an overview of the issues encountered, which were similar to our experiences on the Paradigm project and in working with creators more generally.
CERP has worked with EMCAP to publish an XML schema for preserving email accounts. Email is first normalised to mbox format and then converted to this XML standard using a prototype parser built in Squeak Smalltalk, which also has a web interface (Seaside/Comanche). The result of the transformation is a single XML file that represents an entire email account as per its original arrangement. Attachments can be embedded in the XML file, or externally referenced if on the larger side (over 25kb). If I remember rightly, the largest email account that has been processed so far is c. 1.5GB; we have one at the Library that's significantly larger and I'd like to see how the parser handles this. It will be interesting to compare the schema/parser with The National Archives of Australia's Xena. The developers are keen to receive comments on the schema, which is available here.
5 thoughts on “XML Schema for archiving email accounts”
That’s really useful – thanks! Look forward to seeing the parser.
Most of the emails we had difficulties with had to do with date formats, or other similar things where the original message actually didn't comply with the Internet Message Format, RFC 2822, or where the standard is so vague as to allow for many different interpretations and implementations of structured content. As we encountered these we were able to enhance the parser to recognize and handle most situations. However, we know that the parser, as with any other software, matures with continued attention. A first step would be testing it with more and more diverse content. We're working through some open source licensing steps so that we can make the parser available.
Thanks for flagging this up. I don’t know of any other tools working on the mbox files (other than XENA)- as you know, the work we did on Testbed was on the scale of individual email messages rather than an entire inbox. The MS Outlook email to XML converter is still available at http://www.digitaleduurzaamheid.nl/index.cfm?paginakeuze=299 .
Do you know if there is a related CERP/EMCAP project that's looking at preservation of attachments (rather than just encoding them in the XML file or referencing them externally)?
I’d also be interested to know why some (admittedly a very small number) of the messages in their pilots couldn’t be converted – any ideas?
I think the idea is that the CERP tool will be available, and XENA is available already (via sourceforge). I would be interested to know of others that might be out there.
As librarian who has systematically applied the processes of librarianship against email for about fifteen years, I found this posting very interesting.
I would certainly advocate an XML schema used to encode email, but I got a chuckle when the posting alluded to mbox. It is the MARC of the SMTP world.
Creating XML versions of mbox data would go a long way in the collection, re-distribution, and additional functionality of emailed content. And email content of today is the archival letter content of tomorrow.
Is there a freely available, and relatively easy-to-use, mbox-to-email-XML parser?
Eric Lease Morgan
University Libraries of Notre Dame
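In answer to Eric's question: a bare-bones mbox-to-XML converter can be put together from the Python standard library alone. To be clear, this is a hedged sketch, not the CERP/EMCAP parser, and the element names (`account`, `message`, `body`) are invented for illustration rather than taken from the published schema:

```python
# Minimal mbox -> XML sketch using only the Python standard library.
# Element names are illustrative only; a real tool would target the
# CERP/EMCAP schema and handle attachments (embedded or referenced).
import mailbox
import xml.etree.ElementTree as ET

def mbox_to_xml(mbox_path):
    """Return an ElementTree with one <message> element per mbox message."""
    account = ET.Element("account")
    for msg in mailbox.mbox(mbox_path):
        m = ET.SubElement(account, "message")
        # Capture a few common headers; a real converter would keep them all.
        for header in ("From", "To", "Date", "Subject"):
            if msg[header] is not None:
                ET.SubElement(m, header.lower()).text = msg[header]
        # Keep only single-part bodies here; multipart messages (and hence
        # attachments) are exactly where the hard preservation work begins.
        if not msg.is_multipart():
            body = msg.get_payload(decode=True)
            if body is not None:
                ET.SubElement(m, "body").text = body.decode("latin-1", "replace")
    return ET.ElementTree(account)
```

As the CERP experience above suggests, the difficult part isn't this happy path but malformed dates and other RFC 2822 deviations, which is where a production parser earns its keep.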
|
OPCFW_CODE
|
import unittest
from dsptestbed.writers import AiffWriter, WaveWriter, MatWriter
from dsptestbed.readers import AiffReader, WaveReader
from tempfile import gettempdir
from os import path, unlink
from dsptestbed.signal_source import SineSource
from math import pi
from struct import unpack
class AiffWriterTest(unittest.TestCase):
r = AiffReader
w = AiffWriter
def test_write(self):
source = SineSource(channels=2, freq=441, amp=0.5, phase=pi / 2, length=44100)
data = list(source.read())
for f in range(1, 5):  # sample depths of 1 to 4 bytes
fname = path.join(gettempdir(), "%s" % f)
w = self.w(fname, channels=source.channels,
rate=source.rate,
depth=f)
w.write(data)
w.close()
r = self.r(fname)
self.assertEqual(r.rate, 44100)
self.assertEqual(r.channels, 2)
self.assertEqual(r.depth, f)
chunk = list(r.read())
self.assertEqual(chunk[0], [0.5, 0.5])
self.assertEqual(chunk[25], [0.0, 0.0])
self.assertEqual(chunk[50], [-0.5, -0.5])
unlink(fname)
class WaveWriterTest(AiffWriterTest):
r = WaveReader
w = WaveWriter
class MatWriterTest(unittest.TestCase):
def test_write(self):
source = SineSource(channels=2, freq=441,
amp=0.5, phase=pi / 2, length=44100)
data = list(source.read())
fname = path.join(gettempdir(), "test.mat")
w = MatWriter(fname)
w.write(data)
w.close()
# Comprehensive testing using scipy loadmat routine
try:
from scipy.io import loadmat
f = loadmat(fname)
self.assertTrue("signal" in f)
self.assertEqual(f["signal"].shape, (len(data), 2))
self.assertEqual(tuple(f["signal"][0]), (0.5, 0.5))
except ImportError:
print("No scipy found :(")
pass
# Less comprehensive testing for scipy-poor systems (like pypy)
# File size
self.assertEqual(path.getsize(fname),
128 + # Header
8 + 16 + 16 + 16 + # Array tag, array flags, dimensions, name
8 + # Data tag
len(data) * 2 * 8 # Data size
)
with open(fname, "rb") as f:
f.seek(128 + 8 + 16 + 16 + 16 + 8)
s = f.read(8)
self.assertEqual(unpack("<d", s)[0], 0.5)
# Re-read the final stored double and sanity-check that it is a
# valid sample value (the exact value depends on the sine phase).
f.seek(-8, 2)
s = f.read(8)
self.assertTrue(-0.5 <= unpack("<d", s)[0] <= 0.5)
unlink(fname)
|
STACK_EDU
|
Delete multiple extended attributes (but not all of them) in one step
For example, I have a file with three extended attributes:
com.apple.FinderInfo
com.apple.metadata:_kMDItemUserTags
com.apple.metadata:kMDItemFinderComment
I can delete the first two using
xattr -d com.apple.FinderInfo file.txt
xattr -d com.apple.metadata:_kMDItemUserTags file.txt
But I would prefer to not invoke xattr multiple times, and to use something like this instead:
xattr -d \( com.apple.FinderInfo, com.apple.metadata:_kMDItemUserTags \) file.txt
xattr -d com.apple.FinderInfo -d com.apple.metadata:_kMDItemUserTags file.txt
Is it possible somehow?
@MarcusMüller I'm on Mac, but is seems xattr is not macOS-specific utility: https://man7.org/linux/man-pages/man7/xattr.7.html
The man page you're referring to doesn't describe a command, but the concept of extended attributes. (The Linux tools are setfattr and getfattr)
And the synopsis for the Mac tool goes xattr -d [-rsv] attr_name file ..., where attr_name is not an argument to the -d option, so xattr -d this -d that file.txt also doesn't work. You might be out of luck here.
Hm, maybe you could run xattr -d attribute1 -d attribute2 filename, if xattr supports that? That would mean the documentation is a bit wrong, but getopt-based programs often have that problem.
Let's test that:
I don't have MacOS to test, but I got the original source code of the xattr tool, and removed all functionality from it so that my version compiles on linux and instead prints what it deletes.
Sadly,
./xatrr -d foo -d bar foo
xatrr: [Errno 2] No such file or directory: 'bar'
Thus, that's not an option.
Well, then:
for attr in attribute1 attribute2; do xattr -d "${attr}" filename; done
is the best I could offer (without relying on more tools).
Many thanks. xattr -d attribute1 -d attribute2 filename doesn't work (see the last example in my question), but for ... in is a good workaround.
Just out of curiosity, is there a practical reason you have to wrap attr in curly braces? Is it just a personal preference in this particular case?
@jsx97 it's personal preference. I prefer it that way, because then I don't have to think when I have two variable names where one is the beginning of the other (and I might not even realize it, since the longer variable might come from elsewhere).
|
STACK_EXCHANGE
|
- All Implemented Interfaces:
- public class RMResponseKeysMsg
- extends RMMessage
- implements java.io.Serializable
- $Id: RMResponseKeysMsg.java,v 1.13 2003/07/23 23:52:50 animesh Exp $
- Animesh Nandi
- See Also:
- Serialized Form
Constructor : Builds a new RM Message
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public RMResponseKeysMsg(rice.pastry.NodeHandle source,
- Constructor : Builds a new RM Message
source - the source of the message
address - the RM application address
authorCred - the credentials of the source
seqno - for debugging purposes only
_rangeSet - the rangeSet of this message
_eventId - the eventId of this message
public void handleDeliverMessage(rice.rm.RMImpl rm)
- The handling of the message does the following -
1. Removes the event characterized by 'eventId' from the m_PendingEvents
hashtable signifying that a response to the message was received
to ensure that on the occurrence of the timeout, the message is NOT
2. We then iterate over the 'rangeSet' for each entry of type
RMMessage.KEEntry we do the following:
a) If the entire key set for the range was requested, it notifies
the RMClient to fetch() the keys in that set. Additionally, it
removes this range from the list of pending ranges in the
b) If only the hash of the keys in the range was requested and
the hash matched then we
remove this range from the list of pending ranges in the
c) If only the hash of the keys in the range was requested and
the hash did not match, then we update the entry corresponding
to this range in the pending ranges list
with the number of keys in this range as notified by the source
of the message.
3. We now iterate over the pending Ranges list and split the ranges
whose expected number of keys('numKeys') is greater than
MAXKEYSINRANGE. The splitting method is recursive binary splitting
until we get a total of SPLITFACTOR subranges from the original range.
4. At this point all ranges in the pendingRanges list have either a
value of 'numKeys' less than MAXKEYSINRANGE or a value of '-1'
5. Now we iterate over this list of pending ranges and build a new
RMRequestKeysMsg with a new rangeSet called 'toAskFor' in the code
below. It adds all ranges with uninitialized value of 'numKeys'
to the new list 'toAskFor' setting their 'hashEnabled' field in
their corresponding RMMessage.KEEntry to 'true', signifying that
it is interested only in the hash value of the keys in this range.
Additionally, it also adds ranges with already initialized 'numKeys'
values to this 'toAskFor' list with the 'hashEnabled' field set to
'false' as long as the total size of the key sets corresponding to
the entries in 'toAskFor' is less than MAXKEYSINRANGE.
6. Sends the new RMRequestKeysMsg with this 'toAskFor' list.
Additionally, in order to implement the TIMEOUT mechanism to
handle loss of RMRequestKeysMsg, we wrap the RMRequestKeysMsg
in a RMTimeoutMsg which we schedule on the local node after a
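The recursive binary splitting described in step 3 can be sketched in Python. This is a simplification: FreePastry splits key ranges in the identifier ring, while here plain numeric intervals stand in for key ranges, and SPLITFACTOR is assumed to be a power of two.

```python
SPLITFACTOR = 4  # assumed power of two, matching the recursive binary split above

def split_range(lo, hi, parts=SPLITFACTOR):
    """Recursively halve (lo, hi) until `parts` subranges remain."""
    if parts == 1:
        return [(lo, hi)]
    mid = (lo + hi) // 2
    half = parts // 2
    # Split each half into half as many parts, then concatenate.
    return split_range(lo, mid, half) + split_range(mid, hi, half)

# Splitting 0..16 into 4 subranges:
print(split_range(0, 16))  # → [(0, 4), (4, 8), (8, 12), (12, 16)]
```

Ranges whose expected key count exceeds MAXKEYSINRANGE would be passed through such a split before being re-requested.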
- Specified by:
handleDeliverMessage in class
public int getEventId()
Copyright © 2001 - Rice Pastry.
|
OPCFW_CODE
|
package com.example.mooderation.viewmodel;
import android.net.Uri;
import androidx.lifecycle.LiveData;
import androidx.lifecycle.MutableLiveData;
import androidx.lifecycle.ViewModel;
import com.example.mooderation.MoodEvent;
import com.example.mooderation.backend.ImageRepository;
import com.example.mooderation.backend.MoodEventRepository;
import com.google.android.gms.tasks.Task;
/**
* ViewModel for MoodEventFragment
*/
public class MoodEventViewModel extends ViewModel {
private MoodEventRepository moodEventRepository;
private ImageRepository imageRepository;
private MoodEvent moodEvent;
private MutableLiveData<MoodEvent> moodEventLiveData;
private MutableLiveData<Boolean> isEditingLiveData;
private MutableLiveData<Boolean> locationToggleStateLiveData;
private Uri localUri;
/**
* Default constructor. Creates dependencies internally.
*/
public MoodEventViewModel() {
moodEventRepository = new MoodEventRepository();
imageRepository = new ImageRepository();
}
/**
* Constructor with dependency injection.
* @param moodEventRepository
An instance of MoodEventRepository
* @param imageRepository
* An instance of ImageRepository
*/
public MoodEventViewModel(MoodEventRepository moodEventRepository, ImageRepository imageRepository) {
this.moodEventRepository = moodEventRepository;
this.imageRepository = imageRepository;
}
/**
* Set the mood event displayed by this fragment.
* @param moodEvent
The mood event to display and edit.
*/
public void setMoodEvent(MoodEvent moodEvent) {
// initialize live data
if (moodEventLiveData == null) {
moodEventLiveData = new MutableLiveData<>();
}
this.moodEvent = moodEvent;
moodEventLiveData.setValue(moodEvent);
}
/**
* Updates the mood event being viewed.
* @param callback
* A function that should update the mood event.
*/
public void updateMoodEvent(UpdateMoodEventCallback callback) {
MoodEvent moodEvent = callback.update(this.moodEvent);
this.moodEvent = moodEvent;
this.moodEventLiveData.setValue(moodEvent);
}
/**
* Get live data of the mood event.
* @return
* Live data tracking the current mood event.
*/
public LiveData<MoodEvent> getMoodEvent() {
if (moodEventLiveData == null) {
throw new IllegalStateException("Mood event cannot be null!");
}
return moodEventLiveData;
}
/**
* Sets isEditing which indicates whether the moodEvent is being created or edited
* @param isEditing
*/
public void setIsEditing(Boolean isEditing) {
// Initialize live data
if (isEditingLiveData == null) {
isEditingLiveData = new MutableLiveData<>();
}
isEditingLiveData.setValue(isEditing);
}
/**
* Returns a boolean which indicates if the moodEvent is being created or edited
* @return isEditingLiveData
*/
public LiveData<Boolean> getIsEditing() {
if (isEditingLiveData == null) {
setIsEditing(false);
}
return isEditingLiveData;
}
/**
* Sets a boolean which indicates the correct state of the location toggle
* @param locationToggleState
*/
public void setLocationToggleState(Boolean locationToggleState) {
// Initialize live data
if (locationToggleStateLiveData == null) {
locationToggleStateLiveData = new MutableLiveData<>();
}
locationToggleStateLiveData.setValue(locationToggleState);
}
/**
* Returns a boolean which indicates the correct state of the location toggle
* @return locationToggleStateLiveData
*/
public LiveData<Boolean> getLocationToggleState() {
if (locationToggleStateLiveData == null) {
setLocationToggleState(false);
}
return locationToggleStateLiveData;
}
// TODO
/**
* Upload an image to Firebase Storage
* @param imageUri
* The image's URI
*/
public void uploadImage(Uri imageUri) {
imageRepository.uploadImage(imageUri).addOnSuccessListener(taskSnapshot -> {
if (moodEvent == null) {
throw new IllegalStateException("Mood event cannot be null!");
}
moodEvent.setImagePath(imageRepository.getImagePath(imageUri));
moodEventLiveData.setValue(moodEvent);
});
}
/**
* Download the image from Firebase storage.
* @return
* An async task.
*/
public Task<byte[]> downloadImage() {
return imageRepository.downloadImage(moodEvent.getImagePath());
}
/**
* Delete an image from Firebase Storage.
*/
public void deleteImage() {
imageRepository.deleteImage(moodEvent.getImagePath());
moodEvent.setImagePath(null);
moodEventLiveData.setValue(moodEvent);
}
/**
* Save the changes made to the current mood event.
*/
public void saveChanges() {
if (moodEvent != null) {
moodEventRepository.add(moodEvent);
}
}
/**
* Interface for a mood event update callback.
* Used to make changes to the mood event.
*/
public interface UpdateMoodEventCallback {
public MoodEvent update(MoodEvent moodEvent);
}
}
|
STACK_EDU
|
There are over 30 billion active Internet of Things (IoT) devices and that number is growing at an incredible rate. According to McKinsey, 127 new IoT devices are connected to the web every second. Every device comes with new vulnerabilities and increases the attack surface. Networks of industrial IoT machines are distributed over different organizations and geographies resulting in a complex mesh which poses a serious security problem. The root of the problem lies with the machine identities. Each device needs a strong, unique identity in order to authenticate itself and securely communicate with other devices. Only then can these many devices transfer and manage data securely.
Today, there is no standard interface to acquire IoT machine identities with MQTT (Message Queuing Telemetry Transport) and other protocols. Most often, manufacturer or IoT platform defaults are used for machine identities. The use of these defaults makes it easy for hackers to compromise the identities and trigger network disruptions with far-reaching effects. Similar vulnerabilities occur when machine identities expire or are not properly managed. On top of that, even if strong machine identities are created and updated, device authentication—the ability to verify its identity—remains elusive.
For the IoT to be successful, a security solution is needed that provides all IoT devices with secure machine identities that are verifiable and enables authentication of these identities throughout the entire life cycle of the devices. Furthermore, this solution needs to be IoT-aware, providing the visibility, intelligence and automation required to deal with all the risks that IoT devices present.
Device identities can be managed by device certificates in a strong Public Key Infrastructure (PKI). Using PKI, each device identity is built from a strong public-private cryptographic key pair that is unique to the chip. While the public part can be shared to establish the identity, the private part—used to authenticate the identity—must be kept secret at all times and should be bound to the device. A very convenient way to do this is by using Physical Unclonable Functions or PUFs. For example, the SRAM PUF solutions from Intrinsic ID employ physical properties unique to the chip and can be used as a root of trust to create the cryptographic key pair. Using a PUF is a very secure way to keep a private key secret since the key is never stored but regenerated from the PUF when needed.
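As a toy illustration of the "regenerate rather than store" idea, the sketch below derives a key from a device-unique physical response. This is not Intrinsic ID's actual algorithm, and it ignores the error correction (fuzzy extraction) a real SRAM PUF pipeline performs on the noisy startup pattern; the SRAM responses here are simulated as fixed byte strings.

```python
import hashlib

def key_from_puf(sram_startup_bits: bytes) -> bytes:
    # Toy model: a real SRAM PUF first runs a fuzzy extractor to remove
    # noise from the startup pattern; here we assume a noiseless response
    # and simply hash it into a fixed-length key that is never stored.
    return hashlib.sha256(b"key-derivation" + sram_startup_bits).digest()

# The same (simulated) chip always regenerates the same key...
chip_a = bytes([0x5A] * 64)
assert key_from_puf(chip_a) == key_from_puf(chip_a)
# ...while a different chip yields a different, device-unique key.
chip_b = bytes([0xA5] * 64)
assert key_from_puf(chip_a) != key_from_puf(chip_b)
```

The private key exists only transiently in memory when regenerated, which is what makes PUF-based storage attractive compared with keeping the key in non-volatile memory.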
In order to accelerate the deployment of unclonable device identities based on SRAM PUF technology, Intrinsic ID has created an interface between the Venafi platform and the Intrinsic ID key provisioning tool. Semiconductor vendors and OEMs can use this combined solution to provision devices that deploy the Intrinsic ID SRAM PUF products for secure key storage and management as indicated in the figure below. Intrinsic ID BK embedded software IP is used to generate that strong and unique cryptographic key pair from the unique physical properties of the SRAM in the chip in the device. The public key is then exported as the Certificate Signing Request (CSR) to the Venafi backend to turn it into a digital certificate. The certificate is retrieved from the Venafi backend and installed on the device. The device now has all the credentials needed to establish a secure communication with the cloud.
Upon connection with the cloud (e.g. with the Amazon cloud as indicated on the figure), the device will use the certificate to show its identity. Based on the certificate, the cloud can verify the identity of the IoT device. The device will need its private (secret) key in an authentication protocol with the cloud to prove its identity. The authenticity of the device can now be guaranteed since no other party knows—or has access to—the private key. The private key is reconstructed on the fly from the chip’s SRAM PUF.
Venafi’s Control Plane for Machine Identities is used for the logging, monitoring and lifecycle management of the device. The Venafi Platform provides the visibility and intelligence for machine identity management and inventory and can be easily scaled up for large numbers. Any Certificate Authority (CA) can be used in this process and the system is agnostic to the CA used. By setting up the system this way, we are able to track and monitor certificate status in an automated way. Once the certificate is generated the setup can be connected to the cloud network. Venafi provides a complete certificate lifecycle support—including revocation and renewal of device certificates.
Intrinsic ID SRAM PUF technology provides every IoT device with a strong and device-unique cryptographic key pair, which forms the basis of the device's digital identity. This digital identity is solidly established by having a trusted party generate a digital certificate based on the unique public key of the device. Certificate generation and management are efficiently controlled and scaled to large numbers of devices by leveraging the power of the Venafi Platform backend. The result is a highly secure, dependable solution for securing and managing machine identities.
Intrinsic ID is part of the Machine Identity Management Development Fund. This vast ecosystem of partners and out-of-the-box integrations helps Venafi customers manage all machine identities and orchestrate them throughout their security infrastructure. Want to learn more? Visit Intrinsic ID on the Venafi Marketplace.
|
OPCFW_CODE
|
I'm trying to figure out how to run a Python script that includes Selenium. I have the script already but I don't know how to make it run on my PC, as my PC may not be set up correctly with the correct drivers etc. Need a full walkthrough to get this script to work.
I currently have a script with me on how to extract data through Python Selenium. But I need help from an expert who can explain and decipher the code for me to understand it. You won't need to do any programming and coding. It's more of helping me to understand the source code. Criteria is must speak good English.
...current process is fiddly and I would like to streamline it as much as possible, so I'm looking for a scraping expert who can set this up in a way which allows it to happen automatically on a pre-determined schedule. Whether you recommend Scrapy or Selenium or something else, I'm open to your recommendations, but won't have time to respond to queries from
...informations about a prepaid voucher ; voucher code, voucher value and state. Once the program detect a new record on the mysql table, it opens the browser, goes to a website, log in using accounts from another table. Then redeem the voucher on the website and update records in the first table. This is the large picture, there is some conditions and some other
Hey guys, I'm using Selenium on my server to do some website scraping, and wish to switch to do it serverless with using Cheerio instead. Create a web page scraper in lambda based on this framework: [login to view URL] Your script will need to be able to take
...experience with Selenium with Java - Understanding of advanced Java concepts - Experience with TestNG, Ant/Maven, Jenkins - Automation experience for Web based and/or mobile Apps - Understanding of Agile Concepts - Hands on experience of automation planning and execution - Hands on experience of creating automation framework with Selenium - Hands on experience
...multi-threaded selenium script. I would be running this against 500,000 internal address, hence the threaded part. The script would need to bypass certificate errors, an example would be for it to run against: [login to view URL] and screen shot the page. if you and I visit the page now it will say insecure connection, the script would
[SELENIUM] Hi folks! I'm looking for someone who has already used SELENIUM to test/automate website browsing. There's one specific service that I'm looking for that requires online booking. Generally all dates are fully booked. And sometimes, during a day, the bookings are dropped. Therefore, I want a tool that will continuously look for available
Hi I am looking for a freelancer, that can help on a project. This Project is .net core 2 + MVC mode and use selenium web driver for firefox browser. The task is to update some functions and new updates on the system. I am looking for a long term freelancer.
I need to rewrite 3427 lines of PHP code that tests [login to view URL] website in Selenium to NodeJS. Some of the tests require rewrite for them to pass. You are required to deliver Selenium tests in NodeJS that achieve the same goals as the ones in PHP code and run on [login to view URL]
...1 or more years of experience as an assistant or similar - has experience working with cloud documents and spreadsheets (especially Google docs and sheets) - has experience using task management software (especially Trello) - is highly motivated and task oriented - can communicate with our scoial media following on a personal level - is able to keep
Hi Jorge Javier P., I noticed your profile and would like to offer you my project. We can discuss any details over chat. https://www.freel... I noticed your profile and would like to offer you my project. We can discuss any details over chat. https://www.freelancer.com/projects/c-sharp-programming/captcha-API-selenium-webdriver-Recaptcha/#/proposals
|
OPCFW_CODE
|
Properties file gets unloaded
I have a Struts 2 web application running on Tomcat 7 on Windows Server 2008 (only Tomcat, no Apache or IIS). The texts in the application are stored in .properties files and are managed by Struts 2 I18N Interceptor. In the JSPs I use <s:text name="menu.help" /> tags.
Sometimes (twice in the last month), the application loses the references to the properties files, as if they were unloaded, and it starts to show the keys instead. For example for English it always shows "Help", but when I get this issue it starts showing "menu.help". I have to restart the application for it to work normally again.
I looked for related errors in the logs, but could not find anything related to I18n or properties. I also looked for OutOfMemoryError, but could not find any either.
Do you know what could be the problem? Can you think on any way I can troubleshoot it?
Thanks
Edit:
This is the relevant part of my struts.xml:
<struts>
<constant name="struts.custom.i18n.resources" value="resources" />
...
And the properties files (resources_en.properties, resources_es.properties, ...) are located in the WEB-INF/classes directory.
You might want to post on the Struts User mailing list for this one. I suspect something is being garbage collected, but not sure what it'd be.
What did you try to troubleshoot it? Did you try switching the locale manually?
@RomanC I have the possibility within the application to switch languages, and it didn't work for any language until I restarted the application.
@RomanC When I switched languages I still got the keys instead of the text, for any language. When I restarted the application it started working again, showing the correct texts for the selected language.
Where do you have keys? What is in your struts.xml?
Really weird... are you in a Clustered environment ?
@AndreaLigios No, a stand-alone Tomcat 7 installation with only one application.
Can you check if you have more than one resources_en file in your classpath? Maybe you have multiple ones and the application finds the incomplete one from time to time.
I think you need a default properties file: resources.properties
Surely the times you get keys instead of translations are because no default locale bundle is set to fall back on.
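In other words (file names taken from the struts.xml above), the bundle layout would look like this, with resources.properties acting as the locale-independent fallback when no locale-specific file matches:

```
WEB-INF/classes/
    resources.properties       # default bundle, used when no locale matches
    resources_en.properties
    resources_es.properties
```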
If it is the memory problem, I suggest you do below:
Minimize the size of property file, see if this solves the issue. So first minimize the size, make a load test and see the result.
Split it to different property files to see if this happens to all of them or just some of them
This is my personal experience:
Sometimes the JVM (by mistake) garbage collects an object when it has not been used for a while. I had the same issue with JDK 4 and Oracle Application Server 9i: the JVM garbage collected the database connection when the site load dropped. So, develop a small JSP page (test.jsp), add <s:text name="menu.help" /> to it, then write a small application which requests this page every 1 min.
|
STACK_EXCHANGE
|
const test = require('tape')
const { dateAsString } = require('./formatters')
const {
isValidDate,
isDateInPast,
isDateOneOrMoreMonthsInThePast,
isDateMoreThanEightMonthsInTheFuture,
isDateFourOrMoreYearsInThePast,
isDateInTheFuture
} = require('./validators')
test('isValidDate', (t) => {
t.equal(isValidDate('1999-12-12'), true, '"1999-12-12" should be a valid date')
t.equal(isValidDate('1900-01-01'), true, '"1900-01-01" should be a valid date')
t.equal(isValidDate('2999-12-31'), true, '"2999-12-31" should be a valid date')
t.equal(isValidDate(null), false, 'null should not be a valid date')
t.equal(isValidDate('2009-02-29'), false, '"2009-02-29" should not be a valid date')
t.equal(isValidDate('abcd-ab-ab'), false, '"abcd-ab-ab" should not be a valid date')
t.equal(isValidDate('2007-04-05T14:30'), false, '"2007-04-05T14:30" should not be a valid date')
t.end()
})
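For comparison, the contract these assertions pin down (strict YYYY-MM-DD format, calendar-aware) can be expressed in a few lines of Python. This is a sketch of the expected behaviour, not the validator under test:

```python
from datetime import datetime

def is_valid_date(s):
    # Strict "YYYY-MM-DD": rejects non-strings, impossible calendar dates
    # such as "2009-02-29", and strings with extra content like
    # "2007-04-05T14:30" (the length check catches trailing time parts).
    if not isinstance(s, str) or len(s) != 10:
        return False
    try:
        datetime.strptime(s, "%Y-%m-%d")
        return True
    except ValueError:
        return False

print(is_valid_date("1999-12-12"))        # → True
print(is_valid_date("2009-02-29"))        # → False (2009 is not a leap year)
print(is_valid_date("2007-04-05T14:30"))  # → False
```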
test('isDateInPast', (t) => {
const today = new Date()
const yyyy = today.getFullYear() + 1
const nextYear = yyyy + '-01-01'
t.equal(isDateInPast('1999-12-12'), true, '"1999-12-12" is a date in the past')
t.equal(isDateInPast(nextYear), false, `"${nextYear}" is a date in the future`)
t.equal(isDateInPast(null), true, 'null should not be validated')
t.end()
})
test('isDateMoreThanOneMonthAgo', (t) => {
const oneMonthAgo = dateAsString({ monthAdjustment: -1 })
t.equal(isDateOneOrMoreMonthsInThePast('1999-12-12'), true, '"1999-12-12" is more than one month ago')
t.equal(isDateOneOrMoreMonthsInThePast('9999-12-12'), false, '"9999-12-12" is not more than one month ago')
t.equal(isDateOneOrMoreMonthsInThePast(oneMonthAgo), true, `"${oneMonthAgo}" is exactly one month ago`)
t.equal(isDateOneOrMoreMonthsInThePast(''), true, 'blank string should not be validated')
t.equal(isDateOneOrMoreMonthsInThePast(null), true, 'null string should not be validated')
t.equal(isDateOneOrMoreMonthsInThePast('12-12-1999'), true, 'invalid format string "12-12-1999" should not be validated')
t.end()
})
test('isDateMoreThanEightMonthsInTheFuture', (t) => {
const eightMonthsInFuture = dateAsString({ monthAdjustment: 8 })
t.equal(isDateMoreThanEightMonthsInTheFuture('1999-12-12'), false, '"1999-12-12" is not more than eight months in the future')
t.equal(isDateMoreThanEightMonthsInTheFuture('9999-12-12'), true, '"9999-12-12" is more than eight months in the future')
t.equal(isDateMoreThanEightMonthsInTheFuture(eightMonthsInFuture), false, `"${eightMonthsInFuture}" is exactly eight months in the future`)
t.equal(isDateMoreThanEightMonthsInTheFuture(''), true, 'blank string should not be validated')
t.equal(isDateMoreThanEightMonthsInTheFuture(null), true, 'null string should not be validated')
t.equal(isDateMoreThanEightMonthsInTheFuture('12-12-1999'), true, 'invalid format string "12-12-1999" should not be validated')
t.end()
})
test('isDateMoreThanFourYearsAgo', (t) => {
const fourYearsAgo = dateAsString({ yearAdjustment: -4 })
t.equal(isDateFourOrMoreYearsInThePast('9999-12-12'), false, '"9999-12-12" is not more than four years ago')
t.equal(isDateFourOrMoreYearsInThePast('1900-12-12'), true, '"1900-12-12" is more than four years ago')
t.equal(isDateFourOrMoreYearsInThePast(fourYearsAgo), true, `"${fourYearsAgo}" is exactly four years ago`)
t.equal(isDateFourOrMoreYearsInThePast(''), true, 'blank string should not be validated')
t.equal(isDateFourOrMoreYearsInThePast(null), true, 'null string should not be validated')
t.equal(isDateFourOrMoreYearsInThePast('12-12-1999'), true, 'invalid format string "12-12-1999" should not be validated')
t.end()
})
test('isDateInTheFuture', (t) => {
const futureDate = dateAsString({ monthAdjustment: 1 })
const pastDate = dateAsString({ monthAdjustment: -1 })
t.equal(isDateInTheFuture('3000-12-12'), true, '"3000-12-12" is a date in the future')
t.equal(isDateInTheFuture('2000-12-12'), false, '"2000-12-12" is a date in the past')
t.equal(isDateInTheFuture(futureDate), true, `"${futureDate}" is a date in the future`)
t.equal(isDateInTheFuture(pastDate), false, `"${pastDate}" is a date in the past`)
t.equal(isDateInTheFuture(null), true, 'null should not be validated')
t.end()
})
|
STACK_EDU
|
Mainframe SDET Architect
- Must have recent Mainframe experience
- Must have 8+ years of automation testing
- Must have experience training team members on automation practices, policies and procedures
Performs and participates in application development and testing to apply continuous quality and testability of code throughout the software development lifecycle. Builds quality within the software development process with automated testing suites providing a comprehensive view from code quality to functionality. Uses quality paradigms to provide real time quality with use of automation and frequent regression testing. Designs / develops and maintains automation frameworks and automation test suites and scripts with continuous integration, testing, deployment and delivery. Conducts performance, load, security and service virtualization testing.
Desired Skills and Capabilities:
- Extensive Mainframe automation experience with Unified Functional Test (UFT) required.
- Ability to design the overall test strategy, coverage and prioritized risk-based testing model.
- Ability to train team members on automation practices, policies and procedures.
- Skills / Knowledge - Having broad expertise or unique knowledge, uses skills to contribute to development of company objectives and principles and to achieve goals in creative and effective ways. Barriers to entry such as technical committee review may exist at this level.
- Job Complexity - Works on significant and unique issues where analysis of situations or data requires an evaluation of intangibles. Exercises independent judgment in methods, techniques and evaluation criteria for obtaining results. Creates formal networks involving coordination among groups.
- Supervision - Acts independently to determine methods and procedures on new or special assignments. May supervise the activities of others.
- Software Development Life Cycle / Testing Methodologies - Agile - Scrum, Kanban, Test Driven Development, Behavior Driven Development, etc.
- Programming Languages - Java, C#, Perl, Python, Groovy, Oracle, SQL, etc.
- Testing tools - Application Lifecycle Management, Unit Testing, Security, Application Programming Interface, Mobile, Continuous Integration, Service Virtualization, etc.
- Participates in the complete development process as a subject matter expert (SME) to ensure an approach of built-in quality at all stages to produce quality code.
- Applies SME knowledge in a collaborative software development approach for quality assurance at the source using a high degree of automation.
- Guides and implements processes to assess development impacts to regression suites, testability of code, application performance to take measures to eliminate impacts to continuous testing.
- Provides SME level guidance to ensure timely quality checks along with updating of automation scripts.
- Provides SME advice on best practices to facilitate test driven development (TDD) and behavior driven development (BDD) to provide early and frequent testing as the software is developed.
- Acts as the SME to recommend and guide implementation of process improvements and continuous quality measures across the development lifecycle.
- Acts as the SME in the use of unit testing tools and tools for implementing TDD and BDD.
- Writes and executes application tests at the source code level to prevent hidden errors (i.e., white box testing) within unit and component testing.
- Provides expert guidance for improvements of code quality for issues ranging from functionality to structure of code for performance and maintainability.
- Provides expert guidance for building complex unit and component test suites and conducts automated white box tests.
- Designs ongoing maintainability of test suites along with ease of automated execution of white box tests at frequent cycles.
- Integrates white box test suites with continuous integration (CI) tools.
- Programs and creates complex test cases using unit and component testing tools for code level testing.
- Reviews and guides creation of test suites for maintainability.
- Provides expert guidance for developing software tools, frameworks and utilities that will be used for validation and verification activities, and end-to-end functional testing of software.
- Designs and creates automated tools for generic use and maintainability. Guides and designs processes that manage testing organization and test data for ease of use and long-term maintainability by frameworks.
- Provides SME expertise on design and development of frameworks for testing of non-user interface (UI) components like application program interface (API), representational state transfer API (RESTful API), and web services similar to the end user.
- Provides SME level guidance on integration of automated suites with continuous integration (CI) tools for frequent execution.
- Installs and uses complex CI frameworks for use in continuous testing and continuous deployment / delivery.
- Provides expert guidance for software deployment and CI frameworks for use in continuous testing and continuous deployment / delivery.
- Provides expert guidance in building, customizing, and deployment of test environments and test automation frameworks promoting and incorporating industry CI best practices.
- Applies SME knowledge to design and maintain complex automation infrastructure throughout development.
- Provides SME expertise in recommending and integrating CI tools with other testing infrastructure and configuring complex reporting / metrics on CI.
- Makes decisions and implements new advances in CI space.
- Applies expert knowledge in use of performance, load, security and service virtualization testing tools to conduct testing, analysis and interpret results.
- Promotes and provides SME advice on best practices to build, script, generate data and run performance, load, security and service virtualization testing.
- Provides guidance on design and implementation guidance to create performance, load, security and service virtualization testing frameworks for standardized use / reuse and maintainability of security testing frameworks.
- Provides SME level advice on maintaining and administering procedures, methodology and/or application standards to include payment card industry and security related compliance.
- Provides SME guidance for incorporating performance, load, and security testing into CI environments and deploying frequent testing of service virtualization close to the CI paradigm.
- As the SME, collaborates and partners with stakeholders, business, developers and test analysts to develop complex test plans, conditions and cases (set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement) to be used in testing.
- Uses experience to make insightful and logical decisions on plans, conditions and cases.
- Develops, administers and recommends best practices to perform specialized and in-depth test data conditioning and execution of test sets.
- Communicates with manager on a regular basis (project requirements, issues resolutions, etc.), and disseminates information to project sponsors, business analysts, and programmers.
- Evaluates, interprets and provides guidance on the various components of systems, applications and multiple environments and advises leadership on complex issues.
- As the SME, acts as a single point of contact for large and / or highly complex client projects with regard to test activities and provides expertise to other Test Analysts.
- Coordinates the test activities assigned to the test team to include, reviewing of client test plans cases and scripts, prioritizing test execution when necessary and providing feedback to internal and external clients.
- Bachelor's Degree - Software Engineering, Information Systems or other Technical degree; additional experience in lieu of degree will be considered.
- Minimum 8 Years Relevant Exp - Professional experience with Software testing, coding, designing, and developing.
- Master's Degree - Software Engineering, Information Systems or other Technical degree
- Minimum 10+ Years Relevant Exp - Experience developing automated testing strategies in a variety of environments and frameworks
- What are the 3-4 non-negotiable requirements on this position?
|
OPCFW_CODE
|
package com.mattjtodd.functional.stream;
import java.util.Collections;
import java.util.List;
import java.util.Optional;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.function.Supplier;
import static com.google.common.base.Preconditions.checkNotNull;
import static com.mattjtodd.functional.stream.Immutables.appendToTail;
import static com.mattjtodd.functional.stream.Result.latest;
import static com.mattjtodd.functional.stream.Streams.none;
import static com.mattjtodd.functional.stream.Streams.some;
import static com.mattjtodd.functional.stream.Suppliers.memoize;
/**
* An immutable non-strict stream.
*/
class Stream<T> {
/**
* A static empty stream instance.
*/
private static final Stream<?> EMPTY = new Stream<>(none());
/**
* The current value for this stream.
*/
private final Optional<Value<T>> value;
/**
* Constructs a new instance with the supplied value.
*
* @param value an optional Monad containing the supplier for the head supplier and the next
* stream value supplier.
*/
private Stream(Optional<Value<T>> value) {
this.value = checkNotNull(value);
}
/**
* Constructs a new instance with a non-strict head and tail.
*
* @param head the current head expression
* @param tail the next head expression
*/
private Stream(Supplier<? extends T> head, Supplier<Stream<T>> tail) {
this(some(new Value<>(head, tail)));
}
/**
* Creates a new Stream instance with the supplied head and tail both of which are memoized.
*
* @param head the current head expression
* @param tail the next head expression
* @return the new stream
*/
public static <T> Stream<T> stream(Supplier<? extends T> head, Supplier<Stream<T>> tail) {
return new Stream<>(memoize(head), memoize(tail));
}
/**
* Provides an empty stream.
*
* @return the empty stream
*/
public static <T> Stream<T> empty() {
@SuppressWarnings("unchecked")
Stream<T> empty = (Stream<T>) EMPTY;
return empty;
}
/**
* A fold-right reduce function.
*
* @param result the non-strict result function
* @param func the reduction function
* @return the reduced value
*/
public <E> E foldRight(Supplier<E> result, BiFunction<? super T, Supplier<E>, E> func) {
return value
.map(value -> func.apply(value.evalHead(), () -> value.evalTail().foldRight(result, func)))
.orElseGet(result);
}
/**
* A fold-left reduce function using a trampoline to optimise its tail-call recursion. It's also
* possible to short-circuit the traversal of the stream early by returning a terminal {@link
* Result} from the supplied fold function.
*
* @param result the non-strict result function
* @param func the reduction function
* @return the reduced value
*/
public <E> E foldLeft(Supplier<E> result, BiFunction<? super T, Supplier<E>, Result<E>> func) {
return doFoldLeft(result, func, this).invoke();
}
/**
* Implements the foldLeft function using tail recursion and a trampoline to handle the stack. It
* also evaluates the lazy function arguments which would otherwise blow the stack.
*
* @param seed the current reduction seed
* @param func the function to apply when reducing
* @param stream the current stream value
* @return the reduced value
*/
private static <E, T> Trampoline<E> doFoldLeft(Supplier<E> seed,
BiFunction<? super T, Supplier<E>, Result<E>> func,
Stream<T> stream) {
if (stream.isEmpty()) {
return Trampoline.done(seed.get());
}
// remove the non-strictness from the trampoline calls by invoking the suppliers
Value<T> value = stream.value.get();
Result<E> result = func.apply(value.evalHead(), seed);
// try to short circuit the left-fold
if (result.isTerminal()) {
return Trampoline.done(result.getValue());
}
// bouncy bouncy!
return () -> doFoldLeft(result::getValue, func, value.evalTail());
}
/**
* Repeatedly applies the evaluated head to the consumer and then repeats with the evaluated tail.
* Virtual tail-call optimised forEach, capable of handling an infinite stream. Uses a while loop
* rather than a trampoline as this will be more efficient.
*
* @param consumer the consumer to be applied to every item
*/
public void forEach(Consumer<? super T> consumer) {
Stream<T> current = this;
while (current.value.isPresent()) {
Value<T> value = current.value.get();
consumer.accept(value.evalHead());
current = value.evalTail();
}
}
/**
* Terminal operation which consumes this stream.
*
* @return the list of the stream values
*/
public List<T> toList() {
return foldLeft(Collections::emptyList, (head, tail) -> latest(appendToTail(tail.get(), head)));
}
/**
* Returns a stream which consists of the supplied number of steps, or the number of
* remaining steps if fewer than requested are available.
*
* @param number the number of values to handle from the stream
* @return a stream with the requested number of steps
*/
public Stream<T> take(int number) {
return value
.filter(value -> number > 0)
.map(value -> stream(value.getHead(), () -> value.evalTail().take(number - 1)))
.orElse(empty());
}
/**
* The current value represented in this stream will be <b>evaluated</b> and passed to the
* consumer.
*
* @param consumer the consumer of the peeked value
* @return the current stream
*/
public Stream<T> peek(Consumer<? super T> consumer) {
value.ifPresent(value -> consumer.accept(value.evalHead()));
return value
.map(tuple -> stream(tuple.getHead(), () -> tuple.evalTail().peek(consumer)))
.orElse(empty());
}
/**
* Return a stream which takes values whilst some condition provided by a function is true.
*
* @param condition the conditional function
* @return the stream bound by the function
*/
public Stream<T> takeWhile(Function<? super T, Boolean> condition) {
return foldRightToStream((head, tail) -> condition.apply(head) ? stream(() -> head, tail) : empty());
}
/**
* Terminal reduce function which checks for a condition being met, then terminates the traversal
* early.
*
* @param condition the condition to satisfy once
* @return true if the condition is met, false otherwise
*/
public boolean exists(Function<? super T, Boolean> condition) {
return foldLeftToBoolean(false, (head, tail) -> Result.of(condition.apply(head)));
}
/**
* Terminal reduction function which checks that every item in the stream satisfies a given
* function, or terminates the traversal early.
*
* @param condition the condition to satisfy for all elements
* @return true if the condition was met, false otherwise
*/
public boolean forAll(Function<? super T, Boolean> condition) {
return foldLeftToBoolean(true, (head, tail) -> Result.of(condition.apply(head) && tail.get()));
}
/**
* Applies the supplied transform function to each element in the stream, returning a new stream
* to the transformed type. This will not invoke recursion of the tail as the tail is never eagerly evaluated.
*
* @param func the transform function
* @return the transformed stream
*/
public <E> Stream<E> map(Function<? super T, ? extends E> func) {
return foldRightToStream((head, tail) -> stream(() -> func.apply(head), tail));
}
/**
* Filters elements from the stream which don't match the supplied predicate.
*
* @param predicate the filter predicate
* @return the filtered stream
*/
public Stream<T> filter(Predicate<? super T> predicate) {
return foldRightToStream((head, tail) -> predicate.test(head) ? stream(() -> head, tail) : tail.get());
}
/**
* Lazily appends the supplied stream to the end of this one.
*
* @param stream the stream to append
* @return a stream with the supplier stream appended onto the end
*/
public Stream<T> append(Stream<T> stream) {
return append(() -> stream);
}
/**
* Lazily appends the supplied stream to the end of this one.
*
* @param stream the stream to append
* @return a stream with the supplier stream appended onto the end
*/
public Stream<T> append(Supplier<Stream<T>> stream) {
return foldRight(stream, (head, tail) -> stream(() -> head, tail));
}
/**
* A flat-map function over the stream.
*
* @param func the function to use for flat mapping
* @return a stream which has been flat-mapped
*/
public <E> Stream<E> flatMap(Function<? super T, Stream<E>> func) {
return foldRightToStream((head, tail) -> func.apply(head).append(tail));
}
/**
* Partially applied fold-left which always reduces to a boolean.
*
* @param start the initial boolean value
* @param func the reduction function
* @return the reduced boolean
*/
public Boolean foldLeftToBoolean(boolean start, BiFunction<? super T, Supplier<Boolean>, Result<Boolean>> func) {
return foldLeft(() -> start, func);
}
/**
* Partially applied fold right which always reduces to a stream starting at an empty stream.
*
* @param func the reduction function
* @return the reduction stream
*/
public <E> Stream<E> foldRightToStream(BiFunction<? super T, Supplier<Stream<E>>, Stream<E>> func) {
return foldRight(Stream::empty, func);
}
/**
* Checks if this instance is the empty one.
*
* @return true if this is the empty stream, false otherwise
*/
public boolean isEmpty() {
return !value.isPresent();
}
/**
* Gets the head of this stream.
*
* @return the stream's head value which may or may not be present
*/
public Optional<T> head() {
return value.map(Value::evalHead);
}
/**
* Gets the value optional.
*
* @return the value
*/
Optional<Value<T>> getValue() {
return value;
}
@Override
public String toString() {
return toList().toString();
}
}
|
STACK_EDU
|
MIDOP is by nature extensively customizable. Whenever you find an “Edit” button, clicking it lets you directly modify the source code using a built-in source code editor.
Of course, such modifications require at least basic PHP and HTML coding knowledge, but novice users may also understand the code by reading it. A big effort while developing MIDOP has been put into the coding style adopted: extensive use of comments, PHP variables with self-explanatory names, and simple text files for storing the settings of each managed website.
Available symbols used for plotting macroseismic intensities can be customized and new symbols can also be created by clicking the button “Edit” within the page “MDP map”: a popup window will open presenting a source code editor.
Have a look at the PHP coding standards and *DO* make a backup of your data!
Intensity symbols are organized in sets, each with as many symbols as the possible range of macroseismic intensities. Each intensity symbol within a set is again defined using four lines:
The built-in symbol sets used for representing MDP on a map are contained in the php file “settings \ symbols \ symbol_mdp.php”
Below is the source code used for defining the 9th symbol of the “NERIES_NA4” set, representing the intensity “4-5”:
$symbol_mdp['NERIES_NA4'] = new stdClass;
$symbol_mdp['NERIES_NA4']->is = '4-5';
$symbol_mdp['NERIES_NA4']->symbol = '<circle cx="0" cy="0" r="1000" fill="#FDF323" stroke="#FF963F" stroke-width="600" />';
$symbol_mdp['NERIES_NA4']->onoff = 1;
A problem occurs if the original macroseismic intensity data uses intensity notation that doesn't match the defined intensity symbol set. If MIDOP finds the intensity “IV-V” and no symbol linked to this intensity exists, the intensity point will simply not be rendered on the map.
In order to solve this problem you can automatically convert your custom intensity values instead of changing the whole intensity symbol set or altering the data contained in the intensity tables.
This conversion is based on the file “settings \ symbols \ symbol_conversion.php”. If, for example, you would like to convert the Roman value “IV-V” into “4-5”, two lines of code must be inserted:
$symbol_convert_cases = 'IV-V';
$symbol_convert_value = '4-5';
You can specify multiple conversions at once, so, if you would like to convert both the original values “IV-V” and “4.5” into “4-5”, write something like:
$symbol_convert_cases = '4.5|IV-V';
$symbol_convert_value = '4-5';
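The conversion itself is just a many-to-one lookup over the “|”-separated case lists. As an illustration only (MIDOP implements this in PHP; the function and variable names below are invented), the same idea in Python:

```python
# Each entry maps one or more source spellings ("|"-separated, mirroring
# symbol_conversion.php) to the intensity label used by the symbol set.
conversion = [
    ("4.5|IV-V", "4-5"),
    ("V-VI|5.5", "5-6"),
]

def convert_intensity(raw: str) -> str:
    """Return the symbol-set label for a raw intensity value, or the value unchanged."""
    for cases, value in conversion:
        if raw in cases.split("|"):
            return value
    return raw

print(convert_intensity("IV-V"))   # 4-5
```

A value with no matching case falls through unchanged, which mirrors MIDOP's behaviour of rendering only intensities that have a symbol.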
Epicentre symbols are defined by four PHP lines:
Below is the source code used for defining the rectangle:
$symbol_epicentre['SquareBlack'] = new stdClass;
$symbol_epicentre['SquareBlack']->id = 'idSquareBlack';
$symbol_epicentre['SquareBlack']->symbol = '<rect x="-1500" y="-1500" width="3000" height="3000" stroke="#000000" stroke-width="500" fill="#FFFFFF" />';
$symbol_epicentre['SquareBlack']->onoff = 1;
Geographical layers in MIDOP are plain text files containing SVG elements. Before trying to create such files you can find further information in the SVG specification at the W3C website (http://www.w3.org/TR/SVG/).
Layers are stored in the folder “data” separately for the general earthquake map and for single earthquake intensity maps, and for each UTM zone and covered area following this structure:
An identical file structure is used for storing geographical layers for earthquake intensity maps in folder “data \ layers_mdp \”.
Layer files must follow some important rules:
Styling the layer is possible within the dedicated control panel window (below), available both for layers in the “EQ map” and “MDP map” pages. Through the visual interface you can specify both the fill and stroke style and the layer opacity (transparency).
New geographical layers can be created for example from ESRI shapefiles (“.shp” extension).
These files must already be projected using the UTM zone corresponding to the geographical area where they are going to be used. The conversion can be done using the freely available “shp2svg” [Neumann, 2007] utility at the CartoNet website (http://carto.net/papers/svg/utils/shp2svg/), composed of two MS Windows executables, “ogis2svg.exe” and “shp2pgsql.exe”, that work in the Windows Command Prompt. The conversion is done by entering the following command:
ogis2svg.exe --input your_shapefile --output svg_output_file.svg --roundval 1
When asked, answer “n” to every question. Below is an example output of the conversion of the shapefile called “administrative_alps.shp”:
At the end of the conversion process, the generated SVG output file can be found in the same folder. In order to use such a file in MIDOP as a geographical layer, you must open the SVG file in a text editor, delete all the lines that don't contain SVG elements, and save the file with the “.layer” extension.
Below is an example screenshot with the converted “administrative_alps.svg” file loaded into a text editor (highlighted in blue are the lines that must be deleted):
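If you prefer to script this clean-up instead of deleting lines by hand, a rough Python sketch is shown below; the whitelist of SVG element names is an assumption and should be adjusted to whatever shp2svg actually emits for your data:

```python
import re

# Keep only lines containing drawable SVG elements; the XML prolog,
# the <svg> wrapper, metadata and closing tags are dropped.
KEEP = re.compile(r"<\s*(path|polyline|polygon|rect|circle|line|g)\b")

def extract_layer(svg_text: str) -> str:
    return "\n".join(l for l in svg_text.splitlines() if KEEP.search(l))

sample = """<?xml version="1.0"?>
<svg xmlns="http://www.w3.org/2000/svg">
<path d="M0 0 L10 10"/>
</svg>"""
print(extract_layer(sample))
```

The result can then be saved with the “.layer” extension and uploaded as described below.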
Once you have your file with the “.layer” extension you can upload it into MIDOP by clicking the button “upload a new layer” in the control panel.
Please, note that MIDOP will load the layer file as is and no geographical projection or other transformation will be performed.
For simple changes to the layers source code a text editor can fulfill the task.
If complicated SVG manipulation is required, you can use the freely available graphic tool Inkscape (http://www.inkscape.org/), which uses SVG as its native format for manipulating graphical objects. Once you have made your changes within Inkscape, remember to save the file as “plain SVG” and, again, strip out all the unnecessary SVG lines of code.
MIDOP can load custom SVG objects that will be rendered for selected earthquakes. To use this feature, you have to create a dedicated field within the earthquake catalogue table in which your custom SVG code will be stored. You then need to tell MIDOP to load this SVG code by selecting the created field in the “MDP page” at “SVG code from catalogue field” (see below) and specify at which layer level MIDOP will render it.
|
OPCFW_CODE
|
A code name or cryptonym is a word or name used, sometimes clandestinely, to refer to another name, word, project or person; names are often used for military. Information on the uniform task-based management system, including the litigation, counseling, project, and bankruptcy code sets. Note: the codes with the this code is tradelocked icon are tradelocked (the pokemon possessed. From 2006-2016, google code project hosting offered a free collaborative development environment for open source projects projects hosted on.
© codeorg, 2017 codeorg®, the code logo and hour of code® are trademarks of codeorg powered by amazon web services. Visual studio code is a code editor redefined and optimized for building and debugging modern web and cloud applications visual studio code is free and available on. Roblox project pokemon : 2 new epic codes (shiny code + free ability) not expired - duration: 1:30 paralyze 49,846 views.
Codegov is a platform designed to improve access to the federal government’s custom-developed software. Create work breakdown structure (wbs) codes for tasks to mark their unique places in your project outline wbs codes can be used for reporting schedules. Freecode maintains the web's largest index of linux, unix and cross-platform software, as well as mobile applications. Cogeneration – case studies handbook the code project produced a handbook with best practice cases these examples show the diversity of cogeneration: sizes. Microsoft codenames are the codenames given by microsoft moving the windows desktop to a 32-bit code base and management system project based on.
Draw something awesome your code is saved as a project that you can return to at any time. Project: pokemon helpful pages project: more project pokemon wiki 1 mystery gift codes 2 hoopa 3 dialga explore wikis blade runner wiki we bare bears wiki. Android is an open source software stack created for a wide array of devices with different form factors the primary purposes of android are to create an. Code archive skip to content search google about google privacy terms. Google summer of code is a global program focused on introducing students to open source software development since its inception in 2005, the program has.
See Microsoft Project's solutions for project management. With Project, you manage project teams and analyze and plan resources and budgets. Project code: shift, as seen on microsoft's e3 2017 briefing - full version. This topic contains tables of error codes for the project server interface (psi) in project server 2013 the tables are arranged by functional area and by.
Codex learn to code in just one year codex is a full time, one year coding program in cape town that trains and places bright young talent as software developers. Free open source project hosting from microsoft it provides a source code repository with access over subversion, codeplex client, teamprise explorer, visual studio. The code2 project aims to realise europe’s identified potential for cogeneration by developing 27 national cogeneration roadmaps read more. Project Nimbus: Code Mirai - the game has been announced for PS4; watch the trailer; it arrives in the West in early 2018.
|
OPCFW_CODE
|
Okay I gave it another try. I'll post about my Desktop and Netbook experiences separately.

Desktop:
I connected my phone to give me internet access long enough to download the wifi drivers. While I was doing that, I realized why I was so sure the beta worked with my wifi adapter without a problem: Because I didn't have the internet then, so the only way I could
get online was to connect my phone. Yeah, I feel kind of silly now.
Also when I went to install the wifi drivers, it listed two NVidia drivers. I installed the recommended one but I still wasn't able to do all the desktop effects and stuff. So then I installed the not-recommended one and things worked.
I tried sticking a DVD in and seeing if it would play. It doesn't. It opens in Movie Player by default, says I'm missing a codec or something like that, and asks if I'd like to find it. I clicked yes but it couldn't find anything.
So I opened up the Synaptic Package Manager and did a search for DVD, found a bunch of packages that were mostly to do with ripping or converting formats. One of them was called Ubuntu-Restricted-Extras, which installs support for MP3 files, Microsoft fonts, Java RE, Flash, LAME, and DVD playback. But it says that it doesn't install a certain library and won't allow you to play encrypted DVDs. And links to the Playing DVDs
web page. Apparently in some countries it's illegal to watch your own DVDs due to the requirement of needing to decrypt them to watch them first. Sounds messed up to me but whatever. The instructions on that page were simple to follow, and after taking the 3 steps listed, DVD playback pretty much "Just Worked," but it's not exactly Linux n00b friendly.
Playing DVDs requires you to:
- Install a movie program that doesn't come with Ubuntu. (I chose VLC Player)
- Install the libdvdread4 package (for Ubuntu 9.04 and later)
- Enter a command in the Terminal
That's not exactly hard to do, but consider that I only found the URL for that in the first place because I had enough experience to use the Synaptic Package Manager to search for DVD and found something I'd previously seen before, which linked to the site, etc. It's just a relatively, needlessly complex process for something as simple (in the mind of the end user) as sticking a DVD in and watching it.
I was thinking of what a hassle it was and how something as simple as this would keep me from recommending Ubuntu to my Grandma (for example), but then I realized I would never expect my Grandma to install any
OS and set things up for herself. I would either have her order one already set up by the manufacturer or I'd get things working myself and then let her at it.
So in that regard, I think Ubuntu just might be ready after all for the "average Joe/Jane" who only uses a PC for internet, e-mail, and word processing.
I still don't like the minimize, maximize, and close buttons on the left side. Not only are they on the left side, but they're out of order. It's close, minimize, maximize. So it's not just a mirror image. The other thing is that my mouse is never on the left side of the window. The UI practically demands the mouse to be on the right side of the window with scroll bars and things, so it makes no sense to make the user drag the mouse over to the other side of the screen to close an application. I know it seems like such a small thing, and if only the rest of the world were so lucky to only be upset over moving a mouse a few inches to the left, but it sure is annoying! Thankfully Chrome uses its own custom skin and keeps the Windows style Min, Max, and Close buttons on the right side.
Another thing that continually frustrates me (especially on a fresh Ubuntu install) is how the package installer works. There are three main ways to install/update software: There's the Ubuntu Software Center, the Synaptic Package Manager, and the Update Manager. I don't know exactly how it works, but they all use the same thing and you can only use one at a time. Anything else you try to do to download/install software/components/updates also uses the same thing. So if you went to the Software Center first and queued up 15 applications to install, you have to wait until they're finished before you can install your proprietary drivers, or another language pack, or even check for updates to your software.
I can understand the need to only install one thing at a time, but it should at least be smart enough to add it to a queue so it will be taken care of, and not lock out the entire system from looking for more updates while something is downloading. In fact, it would be great if it could download things more than one at a time.
And once again, as it always happens with me when I try out Ubuntu, I don't know what else to do with it. The three main reasons I use a PC are for:
- The Internet
- Video Games
- Programming
The internet works on Ubuntu, once you have your Wifi drivers installed. The video games I play pretty much require Windows. I haven't tried Wine or Play On Linux but I've heard they're not always very good. Besides that, most of my modern PC Games have been purchased via Steam, and I don't think that works well in Wine. And as for programming I typically program using an engine that doesn't support Linux.
I think that's enough writing for now. I'm going to take a break and write about my Netbook Remix Edition experiences later. But here's a hint:
|
OPCFW_CODE
|
Proof of Liabilities
Deribit holds a 1:1 reserve of all customer assets and the company is happy to provide full transparency into our holdings. A cryptographic proof of liabilities, verifiable by any party without relying on a trusted auditor, was first proposed by Greg Maxwell in 2013, and is known as the Maxwell protocol. This initial proposal disclosed information about the number and size of customer accounts, which is why Deribit is using a slightly modified version that protects client privacy and offers full transparency, preventing balances from being hidden.
Deribit constructed a binary Merkle hash tree with the leaves being the steganographed (cut-in-pieces) balances of our individual users, broken up by asset. Clients can see exactly which leaves in the tree refer to their funds by using the unique hash built from their account information. With the individual liabilities established, it can easily be verified whether the aggregate of these liabilities is held by Deribit on-chain.
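As an illustration of the underlying construction only (not Deribit's actual code; the leaf encoding and the odd-node duplication rule here are generic assumptions), a minimal binary Merkle root computation looks like this:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Compute the root of a binary Merkle tree over the given leaf records."""
    if not leaves:
        raise ValueError("empty tree")
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:              # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Each leaf would be one user's (obfuscated) balance record.
root = merkle_root([b"user1:0.5", b"user2:1.2", b"user3:0.05"])
print(root.hex())
```

Publishing the root commits the operator to the full set of leaves: changing or hiding any single balance changes the root, which is what lets a client verify inclusion of their own leaf without seeing anyone else's.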
The daily snapshot file can be found here.
Proof of Assets
Below please find an overview of the key wallet addresses used by Deribit. Please note, the list below does not include addresses of assets held in third-party custodians, like Copper Clearloop and Cobo Loop.
Verification of your Assets
1. Every day Deribit takes a snapshot of the on-chain assets for all eligible accounts. Assets held by third-party custodians cannot be included in the reporting as they are not in Deribit’s direct control. The data file that is fully accessible to all interested parties therefore only contains the accounts that hold assets on Deribit.
2. In contrast to Maxwell’s approach, Deribit’s modified proof of assets includes steganography (rearranging balance data) of our user balances to protect client privacy and prevent disclosure of actual balances and their links to blockchain addresses or identities. This is important as we disclose a full list of accounts. Only the client can recognise his or her assets in the list.
3. In the Deribit frontend (here) clients will find a hash allowing them to verify that their assets are included in the asset file at both the main and sub-account level.
4. All code used to create Deribit’s modified Merkle Tree is available below so clients can verify the accuracy of the frontend data. To prevent the same nonces being assigned to different users with comparable asset levels, thus reducing the size of liabilities, we give each account a unique Proof ID. Using the instructions below, clients can convert the Proof ID into the identifiers found in the daily Snapshot file and shown in the verification section of the Proof of Reserves page.
5. The aggregate of the assets included in the Asset File should always be less than the aggregate of assets available on-chain. The difference is the Deribit reserve ratio, which includes the insurance funds and Deribit revenues. If the total on-chain balance is higher than the Asset File balance (visible in the frontend and in the file itself), then Deribit has Proof of Reserves.
1. Users can find their Proof ID in the frontend. A user can verify that their Proof ID is unique by performing the following steps:
A. Verify Proof Signature
– Get the User ID and Proof ID Signature from the Proof Of Reserves page
– Download the latest Proof Of Reserves snapshot and copy the Public Key (public_key field in json file)
– Use the Ed25519 signature algorithm to verify that the Proof ID Signature was used to sign the User ID
tool: https://ed25519.altr.dev/ (Base64)
– Message = User ID
– Signature = Proof ID Signature
B. Verify Proof ID
– Base64url decode the Proof ID and Proof ID Signature
tool: https://cryptii.com/pipes/base64-to-hex (Variant = Base64url, Format: Hexadecimal Group By = None)
– Verify if sha1(Proof ID Signature Base64url Decoded) = Proof ID Base64url Decoded
tool: https://emn178.github.io/online-tools/sha1.html (Input Type = Hex)
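Step B can also be reproduced locally instead of using the web tools. A Python sketch with the standard library (the Ed25519 signature check in step A needs a third-party crypto library and is omitted here):

```python
import base64
import hashlib

def b64url_decode(s: str) -> bytes:
    # Re-add the '=' padding that Base64url strings commonly strip.
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def proof_id_matches(proof_id: str, proof_id_signature: str) -> bool:
    """Step B: check that sha1(decoded Proof ID Signature) equals the decoded Proof ID."""
    sig = b64url_decode(proof_id_signature)
    return hashlib.sha1(sig).digest() == b64url_decode(proof_id)
```

If the function returns True for the values taken from the Proof of Reserves page, the Proof ID is consistent with its signature.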
2. By calculating hashes, a user can fetch all of their entries from the “liability” field of the JSON file (incrementing PartNumber until no more entries are found). The sum of the entries is the amount included in the liability.
A. Join Table Seed and Proof ID:
format: TableSeed ++ “|” ++ Proof ID
output: 2022-12-02 12:37:32|accountProofId
B. SHA256 hash:
tool: https://emn178.github.io/online-tools/sha256.html (Input Type = Text)
input: 2022-12-02 12:37:32|accountProofId
output (dummy): cc9810645a0119723eb25f3afaab84ae6c219ec492bd04409b91da710c61d264
C. Join hash with Part Number (in Hex):
format: HashFromStep2 ++ “|” ++ PartNumber
– HashFromStep2: cc9810645a0119723eb25f3afaab84ae6c219ec492bd04409b91da710c61d264
– | in hex: 7c
– 1 in hex: 31
D. SHA256 hash:
tool: https://emn178.github.io/online-tools/sha256.html (Input Type = Hex)
output (dummy): 30d5635e4cc4fd315d38a4415801d5b3078f421263c9eb1f5e36b6d8c8e49bca
E. Base64 encode
tool: https://emn178.github.io/online-tools/base64_encode.html (Input Type = Hex)
F. Replace some characters:
‘=’ => ”
‘+’ => ‘-‘
‘/’ => ‘_’
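Steps A-F can be chained into one small function. The sketch below follows the steps as written; the exact byte handling in steps C-D (treating the step-B hash as raw digest bytes before appending “|PartNumber”) is my interpretation and has not been validated against real snapshot data:

```python
import base64
import hashlib

def liability_id(table_seed: str, proof_id: str, part_number: int) -> str:
    """Derive the snapshot identifier for one liability part (steps A-F)."""
    # A + B: SHA256 over "TableSeed|ProofID" (text input)
    h1 = hashlib.sha256(f"{table_seed}|{proof_id}".encode()).digest()
    # C + D: append "|<PartNumber>" as bytes to the digest, hash again (hex input)
    h2 = hashlib.sha256(h1 + f"|{part_number}".encode()).digest()
    # E: Base64-encode the second digest
    b64 = base64.b64encode(h2).decode()
    # F: strip padding and make the result URL-safe
    return b64.replace("=", "").replace("+", "-").replace("/", "_")
```

Calling this for part numbers 1, 2, 3, … until no entry is found in the snapshot file should enumerate all of a user's liability parts.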
3. Everyone can check that our total liability (sum of all liability entries) is less than disclosed on-chain reserves (addresses see below).
4. When total liability is less than or equal to the wallet reserves, it confirms that Deribit has provided Proof-of-Reserves as it is holding sufficient reserves. As the snapshots are taken daily, market volatility and the corresponding impact on customer portfolios may cause a temporary difference between snapshots.
Additional reserves (beyond liabilities)
The additional reserves (assets under Deribit’s control beyond client liabilities), or the Deribit reserve ratio, include the Deribit insurance fund, Deribit’s daily revenues, and accounts used for payments and general administration.
As Cobo Loop clients have the choice to withdraw assets via Cobo Loop or directly from Deribit, Deribit will need to keep funds to facilitate direct withdrawals by Cobo Loop users. This is in contrast to Copper Clearloop, where 100% of client assets are always held by Copper.
Proof of Margins (locked)
Finally, Deribit publishes an endpoint that shows the cumulative margin locked (MM and IM per currency) for the entire user base. This endpoint shows in real time how many assets are held on Deribit as margins for outstanding positions. The sections above provide specifics on the exact assets Deribit holds on behalf of clients and how users can verify the assets exist, hence this endpoint is an additional layer of transparency.
Please see the following real-time endpoint here.
|
OPCFW_CODE
|
Ignore null fields when DEserializing JSON with Gson or Jackson
I know there's lots of questions about skipping fields with a null value when serializing objects to JSON.
I want to skip / ignore fields with null values when deserializing JSON to an object.
Consider the class
public class User {
Long id = 42L;
String name = "John";
}
and the JSON string
{"id":1,"name":null}
When doing
User user = gson.fromJson(json, User.class)
I want user.id to be '1' and user.name to be 'John'.
Is this possible with either Gson or Jackson in a general fashion (without special TypeAdapters or similar)?
How will user.name be 'John' if the example JSON has "name":null? Are you asking if it can skip null values in the JSON and not override the default in the class?
@jeffporter Yes that's exactly the question.
Did you find a pretty solution for this?
I have the same problem as well
Now it is 2019 )) Did you find a solution for it?
A lot of time has gone by, but if you, like me, ran into this question and you are using at least Jackson 2.9, then one way you could solve it is using JsonSetter and Nulls.SKIP:
public class User {
private Long id = 42L;
@JsonSetter(nulls = Nulls.SKIP)
private String firstName = "John";
@JsonSetter(contentNulls = Nulls.SKIP)
private String[] lastNames = { "Wick", "Woo", "Wayne" };
... corresponding getters and setters
}
This way, when null is encountered, setter will not be called.
Note: more details can be found here.
This code actually does not compile. The JsonSetter line must be @JsonSetter(nulls = Nulls.SKIP)
In order to skip null values in collections, you could use @JsonSetter(contentNulls = Nulls.SKIP)
This is excellent! Thanks @Vytautas
What I did in my case is to set a default value on the getter:
public class User {
private Long id = 42L;
private String name = "John";
public String getName() {
//You can check other conditions
return name == null? "John" : name;
}
}
I guess this will be a pain for many fields but it works in the simple case of less number of fields.
To skip using TypeAdapters, I'd make the POJO do a null check when the setter method is called.
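The null-check-setter approach needs no libraries at all; a minimal sketch (class name and defaults mirror the question's POJO). Note the caveat: this only helps with mappers that populate objects through setters, such as Jackson by default, while Gson binds fields reflectively and would bypass these checks:

```java
// Library-free version of "skip nulls": any data binder that goes through
// setters will leave the field's default intact when the JSON value is null.
class User {
    private Long id = 42L;
    private String name = "John";

    public Long getId() { return id; }
    public String getName() { return name; }

    public void setId(Long id) {
        if (id != null) this.id = id;   // ignore JSON null, keep default
    }

    public void setName(String name) {
        if (name != null) this.name = name;
    }
}
```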
Or look at
@JsonInclude(value = Include.NON_NULL)
The annotation needs to be at Class level, not method level.
@JsonInclude(Include.NON_NULL) //or Include.NON_EMPTY, if that fits your use case
public static class RequestPojo {
...
}
For Deserialise you can use following at class level.
@JsonIgnoreProperties(ignoreUnknown = true)
@JsonInclude(value = Include.NON_NULL) seems to be only working when serializing, not when deserializing.
Albeit not the most concise solution, with Jackson you can handle setting the properties yourself with a custom @JsonCreator:
public class User {
Long id = 42L;
String name = "John";
@JsonCreator
static User ofNullablesAsOptionals(
@JsonProperty("id") Long id,
@JsonProperty("name") String name) {
User user = new User();
if (id != null) user.id = id;
if (name != null) user.name = name;
return user;
}
}
|
STACK_EXCHANGE
|
How to build a data science project from scratch
A demonstration using an analysis of Berlin rental prices, covering how to extract and clean data from the web, gain deeper insights, engineer features using external APIs, and more.
By Jekaterina Kokatjuhha, Research Engineer at Zalando.
There are many online courses about data science and machine learning that will guide you through a theory and provide you with some code examples and an analysis of very clean data.
However, in order to start practising data science, it is better to challenge a real-life problem: digging into the data to find deeper insights, carrying out feature engineering using additional sources of data, and building stand-alone machine learning pipelines.
This blogpost will guide you through the main steps of building a data science project from scratch. It is based on a real-life problem — what are the main drivers of rental prices in Berlin? It will provide an analysis of this situation. It will also highlight the common mistake beginners tend to make when it comes to machine learning.
These are the steps that will be discussed in detail:
- finding a topic
- extracting data from the web and cleaning it
- gaining deeper insights
- engineering of features using external APIs
Finding a topic
There are many problems that can be solved by analyzing data, but it is always better to find a problem that you are interested in and that will motivate you. While searching for a topic, you should definitely concentrate on your preferences and interests.
For instance, if you are interested in healthcare systems, there are many angles from which you could challenge the data provided on that topic. “Exploring the ChestXray14 dataset: problems” is an example of how to question the quality of medical data. Another example — if you are interested in music, you could try to predict the genre of the song from its audio.
However, I suggest not only to concentrate on your interests but also to listen to what people around you are talking about. What bothers them? What are they complaining about? This can be another good source of ideas for a data science project. In those cases where people are still complaining about it, this may mean that the problem wasn’t solved properly the first time around. Thus, if you challenge it with data, you could provide an even better solution and have an impact in how this topic is perceived.
This may all sound a bit too abstract, so let's find out how I came up with the idea to analyze Berlin rental prices.
“If I had known that the rental prices were so high here, I would have negotiated for a higher salary.”
This is just one of the things I heard from people who had recently moved to Berlin for work. Most newcomers complained that they hadn't imagined Berlin to be so expensive, and that there were no statistics about possible price ranges of apartments. If they had known this beforehand, they could have asked for a higher salary during the job application process or could have considered other options.
I googled, checked several rental apartment websites, and asked several people, but could not find any plausible statistics or visualizations of the current market prices. And this was how I came up with the idea of this analysis.
I wanted to gather the data, build an interactive dashboard where you could select different options such as a 40m2 apartment situated in Berlin Mitte with a balcony and equipped kitchen, and it would show you the price ranges. This, alone, would help people understand apartment prices in Berlin. Also, by applying machine learning, I would be able to identify the drivers of the rental prices and practise with different machine learning algorithms.
Extracting data from the web and cleaning it
Getting the data
Now that you have an idea about your data science project, you can start looking for the data. There are tons of amazing data repositories, such as Kaggle, UCI ML Repository or dataset search engines, and websites containing academic papers with datasets. Alternatively, you could use web scraping.
But be cautious — old data is everywhere. When I was searching for the information about the rental prices in Berlin, I found many visualizations but they were old, or without any year specified.
For some statistics, they even had a note saying that this price would only be for a 2 room apartment of 50 m2 without furniture. But what if I am searching for a smaller apartment with a furnished kitchen?
Old data is everywhere.
As I could find only old data, I decided to web scrape the websites that offered rental apartments. Web scraping is a technique used to extract data from websites through an automated process.
My web scraping blogpost goes into the details of pitfalls and design patterns of web scraping.
Here are the main findings:
- Before scraping, check if there is a public API available
- Be kind! Don’t overload the website by sending hundreds of requests per second
- Save the date when the extraction took place. It will be explained why this is important.
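The findings above can be folded into one small helper; a hedged sketch, where the `fetch` callable stands in for whatever HTTP client you actually use (e.g. a wrapper around `requests.get`):

```python
import datetime
import time

def polite_fetch(fetch, urls, delay=1.0):
    """Fetch each URL via the supplied callable, pausing between requests
    (be kind to the site) and stamping each record with the extraction date,
    which later turns out to be essential for de-duplication."""
    records = []
    for url in urls:
        records.append({
            "url": url,
            "html": fetch(url),
            "extracted_at": datetime.date.today().isoformat(),
        })
        time.sleep(delay)  # throttle: don't hammer the server
    return records
```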
Once you start getting the data, it is very important to have a look at it as early as possible in order to find any possible issues.
While web scraping rental data, I included some small checks such as the number of missing values for all features. Web-masters could change the HTML of the website, which would result in my program not getting the data anymore.
Once I had ensured that all technical aspects of web scraping were covered, I thought the data would almost be ideal. However, I ended up cleaning the data for around a week because of not so obvious duplicates.
Once you start getting the data, it is very important to have a look at it as early as possible in order to find any possible issues. For instance, if you web scrape, you could have missed some important fields. If you use a comma separator while saving data into a file, and one of the fields also contains commas, you can end up with files which are not separated very well.
ILLUSION vs REALITY
There were several sources of duplicates:
- Duplicated apartments because they had been online for a while
- Agencies had input errors, for example the rental price or the storey of the apartment. They would correct them after a while, or would publish a completely new ad with corrected values and additional description modifications
- Some prices were changed (increased and decreased) after a month for the same apartment
While the duplicates from the first case were easy to identify by their ID, the duplicates from the second case were very complicated. The reason is that an agency could slightly change a description, modify the wrong price, and publish it as a new ad so that the ID would also be new.
I had to come up with many logic-based rules to filter out the old versions of the ads. Once I was able to identify that these apartments would be the actual duplicates but with slight modifications, I could sort them by the extraction date, taking the latest one as the most recent.
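The "sort by extraction date, keep the latest" step can be sketched like so (the field names and the duplicate key are illustrative, not the actual pipeline):

```python
def latest_versions(ads, key=lambda ad: (ad["address"], ad["rooms"])):
    """Collapse near-duplicate ads to their most recent version.
    'Same apartment' is decided by a heuristic key; later extraction
    dates overwrite earlier ones in the dict."""
    best = {}
    for ad in sorted(ads, key=lambda ad: ad["extracted_at"]):
        best[key(ad)] = ad
    return list(best.values())
```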
Additionally, some agencies would increase or decrease the price for the same apartment after a month. I was told that if nobody wanted an apartment, the price would be decreased; conversely, if there were many requests for it, the agencies would increase the price. These sound like good explanations.
Gaining deeper insights
Now that we have everything ready, we can start analyzing the data. I know data scientists love seaborn and ggplot2, as well as many static visualizations from which they can derive some insights.
It took me less than 30 minutes to create an interactive dashboard where one can select all the important components and see how the price would change.
Interactive dashboard of Berlin rental prices: one can select all the possible configurations and see the corresponding price distribution. (Data date: Winter 2017/18)
A fairly simple dashboard could already provide insights into the prices in Berlin for newcomers and could be a good user driver for a rental apartment website.
Already from this data visualization you can see that the price distribution of 2.5 rooms falls into the distribution of 2 room apartment. The reason for this is that most of the 2.5 room apartments aren’t situated in the center of the city which, of course, reduces the price.
Price distribution and number of apartments in Berlin.
This data was gathered in winter 2017/18 and it will also get outdated. However, my point is that the rental websites could frequently update their statistics and visualizations to provide more transparency to this question.
Engineering of features using external APIs
Visualization helps you to identify important attributes, or "features," that could be used by machine learning algorithms. If the features you use are very uninformative, any algorithm will produce bad predictions. With very strong features, even a very simple algorithm can produce pretty decent results.
In the rental price project, price is a continuous variable, so it is a typical regression problem. Taking all extracted information, I collected the following features in order to be able to predict a rental price.
These are the majority of the features used to predict the rental apartment price.
However, there was one feature that was problematic, namely the address. There were 6.6K apartments and around 4.4K unique addresses of different granularity. There were around 200 unique postcodes which could be converted into the dummy variables but then very precious information of a particular location would be lost.
Different granularity of the address: street with the house number, street with hidden house number and only a postcode.
What do you do when you are given a new address?
You either google where it is or how to get there.
By using an external API, the following four additional features could be calculated from the apartment's address:
- duration of a train trip to the S-Bahn Friedrichstrasse (central station)
- distance to U-Bahn Stadtmitte (city center) by car
- duration of a walking trip to the nearest metro station
- number of metro stations within one kilometer from the apartment
These four features boosted the performance significantly.
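Of the four features, the last one needs no routing API at all if station coordinates are available, only a great-circle distance; a sketch (coordinates and the station list are illustrative):

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def stations_within_km(apartment, stations, radius_km=1.0):
    """Count metro stations within radius_km of an apartment's coordinates."""
    return sum(1 for s in stations if haversine_km(*apartment, *s) <= radius_km)
```

Durations by train, car, or foot would still require a routing service such as a maps API.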
Bio: Jekaterina Kokatjuhha is a Research Engineer at Zalando, focusing on scalable machine learning for fraud prediction.
Original. Reposted with permission.
- On-line and web-based: Analytics, Data Mining, Data Science, Machine Learning education
- Software for Analytics, Data Science, Data Mining, and Machine Learning
- Data Science Projects Employers Want To See: How To Show A Business Impact
- Text Preprocessing in Python: Steps, Tools, and Examples
- Are you buying an apartment? How to hack competition in the real estate market
|
OPCFW_CODE
|
import WithEntityLists from '@datahub/lists/components/with-entity-lists';
// @ts-ignore: Ignore import of compiled template
import template from '../templates/components/list-count';
import { layout, classNames, className } from '@ember-decorators/component';
import { DataModelEntity } from '@datahub/data-models/constants/entity';
import { computed } from '@ember/object';
import { singularize } from 'ember-inflector';
import { alias } from '@ember/object/computed';
import { capitalize } from '@ember/string';
export const baseComponentClass = `entity-list-count`;
/**
 * ListCount is a badge for displaying the number of items in a list
 * The badge should be displayed when there are items in the list and hidden or dismissed otherwise
* @export
* @class ListCount
* @extends {WithEntityLists}
*/
@layout(template)
@classNames(baseComponentClass)
export default class ListCount extends WithEntityLists {
baseComponentClass = baseComponentClass;
/**
* The type of DataModel entity being rendered
*/
entityType?: DataModelEntity;
/**
* Reference to the entity class
* Required for entity list trampoline
*/
@alias('entityType')
entity?: DataModelEntity;
/**
* Display value for the current list item
* @readonly
*/
@computed('entityType')
get displayName(): string {
const { entityType } = this;
return entityType ? `${capitalize(singularize(entityType.displayName))} List` : '';
}
/**
* The number of items in the list
*/
@className(`${baseComponentClass}--display`, `${baseComponentClass}--dismiss`)
@alias('entityListTrampoline.list.length')
count?: number;
}
|
STACK_EDU
|
Pytorch is set to have a big year in 2021 as the open source AI framework continues to gain popularity among developers. In this blog post, we’ll take a look at some of the most promising Pytorch 2021 projects.
Pytorch in 2021: The Year of AI
Pytorch is a deep learning framework that has been gaining popularity in recent years. It is used by some of the world’s leading companies, including Facebook, Google, and Microsoft. Pytorch is also used by many academic institutions, including Stanford and MIT.
In 2021, Pytorch will continue to be a leading deep learning framework, due to its ease of use and flexibility. Additionally, Pytorch will continue to gain popularity due to its strong support for both research and production use cases.
Pytorch: The Future of AI
In 2021, Pytorch is set to become the future of AI. Pytorch is a powerful, yet easy to use framework for building and training neural networks. With its ability to scale easily to large datasets, and its ease of use, Pytorch is the perfect tool for deep learning research. Additionally, Pytorch has been used successfully in a variety of real-world applications such as computer vision, natural language processing, and recommender systems.
Pytorch: The New Wave of AI
Pytorch is a free and open-source machine learning library for Python, based on Torch, used for applications such as computer vision and natural language processing. It is primarily developed by Facebook’s artificial intelligence research group.
Pytorch has seen significant adoption in the field of deep learning and is widely used by researchers working on cutting-edge techniques such as reinforcement learning and natural language processing.
The popularity of Pytorch is due to its ease of use and flexibility; it allows developers to create custom architectures with minimal code. Pytorch also has a strong community support, with many open-source libraries and tools available.
In 2021, Pytorch is set to become the new standard for AI development, due to its growing popularity and adoption by major tech companies.
Pytorch: The Next Generation of AI
Pytorch is a powerful AI framework that enables developers to create sophisticated AI models with ease. The framework is designed to be easy to use and understand, making it ideal for both experienced AI developers and newcomers to the field.
Pytorch has seen significant growth in recent years, and 2021 is shaping up to be a big year for the framework. Here are some of the key things to keep an eye on in the coming year:
– The release of Pytorch 3.0: Pytorch 3.0 is set to be released in early 2021, and it promises to be a major upgrade from previous versions of the framework. 3.0 will include significant improvements to performance, usability, and flexibility, making it even easier to develop complex AI models with Pytorch.
– The rise of Pytorch Lightning: Pytorch Lightning is a new library that makes it easier to develop and train Pytorch models. Lightning is designed to be easy to use, scalable, and extensible, making it a great choice for both small projects and large-scale enterprise deployments.
– Increased adoption by industry: Pytorch has seen increasing adoption by industry in recent years, and this is set to continue in 2021. Major companies such as Facebook, Google, Microsoft, and Amazon are all using Pytorch to power their AI applications, and this trend is only going to continue as Pytorch becomes increasingly popular.
Pytorch: The Evolution of AI
In 2021, Pytorch will continue to be a powerful tool for AI researchers and developers. The library has seen significant development in the past year, with a number of new features and improvements.
One of the most exciting new features is the ability to use Pytorch with TensorFlow. This makes it possible to combine the strengths of both libraries, and gives users more flexibility when working with complex data sets.
Another significant improvement is the addition of support for CuDNN 7. This enables Pytorch to take advantage of the latest GPU hardware, and makes it possible to train larger and more complex models.
Finally, Pytorch now includes a number of tools for debugging and optimizing code. These tools will help developers ensure that their code is running efficiently, and will make it easier to find and fix errors.
With all of these new features, Pytorch is poised to continue its role as a leading tool for AI development in 2021.
Pytorch: The Power of AI
Pytorch is an open source machine learning framework that has been gaining popularity in the AI community for its ease of use and flexibility. Developed by Facebook AI, it has become the go-to tool for many researchers and practitioners.
Pytorch’s main advantage over other frameworks is its dynamic nature, which allows for easy construction of neural networks. This makes it much faster to iterate and experiment with different models. Additionally, Pytorch has strong support for GPUs, which makes it well-suited for large-scale training tasks.
2021 looks to be a big year for Pytorch, as more and more companies are adopting it for their AI needs. With its growing popularity and strong community support, Pytorch is poised to continue its dominant position in the AI landscape.
Pytorch: The Impact of AI
As we enter 2021, it is clear that artificial intelligence (AI) is having a profound impact on our world. We are seeing tremendous advancements in the field of AI, and Pytorch is leading the charge.
Pytorch is a powerful open source software library for deep learning that enables developers to easily create and experiment with neural networks. It has quickly become the go-to tool for many AI researchers and developers due to its flexibility and ease of use.
In 2021, we are likely to see even more adoption of Pytorch as the industry continues to move towards AI-powered solutions. Pytorch provides the perfect platform for developers to create innovative new applications that will have a transformative impact on our world.
Pytorch: The Promise of AI
Pytorch is a Python-based open source machine learning library for deep learning created by Facebook. It is one of the most popular libraries for deep learning and has been used by companies such as Google, Twitter, and Airbnb. Pytorch has seen a meteoric rise in popularity over the past few years and has become the go-to Deep Learning library for many researchers and practitioners.
In 2021, Pytorch promises to be even more popular with the release of Pytorch 1.0. This new version of Pytorch includes many features that will make it even easier to use, including a new C++ API, improved performance, and better support for distributed training. With these improvements, Pytorch is sure to be the library of choice for many AI applications in the coming year.
Pytorch: The Hope of AI
Pytorch is a powerful open-source software library for numerical computation and machine learning. It was created by Facebook’s AI Research lab in 2016 and has been widely adopted by the machine learning community.
Pytorch is known for its ease of use, flexibility, and scalability. It has been used to build some of the most popular machine learning models, including Google Translate, AlphaGo, and Facebook’s own recommendations system.
In 2021, Pytorch is the hope of AI. It is the software library that can power the next generation of intelligent applications.
Pytorch: The Future of Humanity
Every day, it seems, there is a new story about how artificial intelligence (AI) is changing our world.
In 2021, one of the most important things to watch in AI is Pytorch.
Pytorch is an open source deep learning platform that provides a seamless path from research prototyping to production deployment.
Pytorch has already been used by companies such as Facebook, Google, and Microsoft to power some of the most advanced artificial intelligence applications in the world.
And it is only going to become more popular in the coming year.
There are several reasons for this. First, Pytorch makes it easy to experiment with different models and algorithms. Second, Pytorch has excellent support for production deployment. And third, Pytorch is backed by a strong community of developers and researchers.
So if you are interested in AI, make sure to keep an eye on Pytorch in 2021. It is sure to be an exciting year for this cutting-edge platform.
|
OPCFW_CODE
|
Why does the definition of 2-cycle as $\ker\partial_2$ work so magically?
I am curious about why the definition of $\ker\partial_2$ as 2-cycles (a 2-dimensional hole or void) works so well. It seems like a mystery to me. (I am referring to the simplicial complex case.)
For $n=1$, I can intuitively understand. For instance, for the loop connecting the vertices $v_0,v_1,v_2$, we have $\partial_1[v_0,v_1]=v_1-v_0$, $\partial_1[v_1,v_2]=v_2-v_1$ and $\partial_1[v_2,v_0]=v_0-v_2$. So adding them up produces a "telescoping sum" effect, which corresponds to "closing the loop":
$$v_1-v_0+v_2-v_1+v_0-v_2=0$$
For $n=2$, it seems like a mystery to me. Consider the tetrahedron (3-simplex with vertices $v_0,v_1,v_2,v_3$). There are 4 faces:
$$\partial_2[v_0,v_1,v_2]=[v_1,v_2]-[v_0,v_2]+[v_0,v_1]$$
$$\partial_2[v_0,v_1,v_3]=[v_1,v_3]-[v_0,v_3]+[v_0,v_1]$$
$$\partial_2[v_0,v_2,v_3]=[v_2,v_3]-[v_0,v_3]+[v_0,v_2]$$
$$\partial_2[v_1,v_2,v_3]=[v_2,v_3]-[v_1,v_3]+[v_1,v_2]$$
Question: The mystery to me is why is it so nice that:
$$\partial_2[v_0,v_1,v_2]-\partial_2[v_0,v_1,v_3]+\partial_2[v_0,v_2,v_3]-\partial_2[v_1,v_2,v_3]=0$$, which corresponds to the 4 faces enclosing a 2-dimensional hole? What is the mathematics behind it?
I roughly know that this means that the $\partial_2[v_0,v_1,v_2],\dots, \partial_2[v_1,v_2,v_3]$ are "linearly dependent" therefore they have a linear combination that makes zero. However this explanation is not satisfying enough for me.
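The cancellation you observe is one instance of the general identity $\partial_{n-1}\circ\partial_n=0$: each edge lies in exactly two of the four faces, and the orientation conventions give it opposite signs in the two. This can be checked mechanically; a small sketch, representing formal sums as coefficient dictionaries:

```python
from collections import defaultdict

def boundary_simplex(s):
    """∂ of one oriented simplex s = (v0, ..., vn): the alternating sum
    of its faces, returned as {face: coefficient}."""
    out = defaultdict(int)
    for i in range(len(s)):
        out[s[:i] + s[i + 1:]] += (-1) ** i
    return out

def boundary(chain):
    """Extend ∂ linearly to a formal chain {simplex: coefficient}."""
    out = defaultdict(int)
    for s, coeff in chain.items():
        for face, sign in boundary_simplex(s).items():
            out[face] += coeff * sign
    return out

# ∂∘∂ on the solid tetrahedron: every edge appears in two faces with
# opposite induced signs, so all coefficients cancel to zero.
dd = boundary(boundary({(0, 1, 2, 3): 1}))
assert all(c == 0 for c in dd.values())
```

The alternating sum of the four face boundaries in the question is exactly $\partial_1(\partial_2[v_0,v_1,v_2,v_3])$ up to an overall sign, which is why it vanishes.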
Further question: This brings me to ask the next question: how about for $n\geq 3$? How are we so sure that $\ker\partial_n$ captures an $n$-dimensional hole?
Thanks.
|
STACK_EXCHANGE
|
Elmar Vogt just posted up some nice statistical analyses of the Voynich Manuscript’s language, looking particularly at the problematic issue of line-related structure.
You see, if Voynichese is no more than a ‘simple language’ (however lost, obscure and/or artificial), there would surely be no obvious reason for words at the beginning or end of any line to show any significant differences from words in the middle of the line. And yet they do: line-initial words are slightly longer (about a character), second words are slightly shorter, while line-terminal words are slightly shorter than the average (though some of Elmar’s graphs get a bit snarled up in noise mapping this last case).
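Per-position statistics of this kind are easy to reproduce; a hedged sketch, assuming a transcription is available as a list of plain-text lines with space-separated words:

```python
def word_length_by_position(lines):
    """Average word length at the first, second, and last positions of
    each line (lines with fewer than three words are skipped)."""
    firsts, seconds, lasts = [], [], []
    for line in lines:
        words = line.split()
        if len(words) < 3:
            continue
        firsts.append(len(words[0]))
        seconds.append(len(words[1]))
        lasts.append(len(words[-1]))
    avg = lambda xs: sum(xs) / len(xs)
    return avg(firsts), avg(seconds), avg(lasts)
```

On a real EVA transcription one would expect the first average to run about a character longer, and the second and last slightly shorter, than the overall mean.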
The things I infer from such line-structure observations are
(a) any fundamental asymmetry means that Voynichese can’t be a simple language, because simple languages are uniform & symmetrical
(b) it’s very probably not a complex language either, because no complex language I’ve ever seen has done this kind of thing either
(c) the first “extra” letter on the first word is either a null or performs some kind of additional function (such as a vertical “Neal key”, a notion suggested by Philip Neal many years ago)
(d) the missing letter in the second word is probably removed to balance the extra letter in the first word, i.e. to retain the original text layout, while
(e) the last word has its own statistics completely because words in the plaintext were probably split across line-ends.
In Voynichese, we see the EVA letter combination ‘-am’ predominantly at the right-hand end of lines, which has given rise to the long-standing suspicion that this might encipher a hyphen character, or a rare character (say ‘X’) appropriated to use as a hyphen character. For what conceivable kind of character would have a preference for appearing at the end of a line? In fact, the more you think about this, the stronger the likelihood that this is indeed a hyphen becomes.
But there’s an extraordinary bit of misinformation you have to dodge here: the Wikipedia page on the hyphen asserts (wrongly) that the first noted use of a hyphen in this way was with Johannes Gutenberg in 1455 with his 42-line-per-page Bible. According to this nice post, “Gutenberg’s hyphen was a short, double line, inclined to the right at a sixty degree angle”, like this:-
In fact, Gutenberg was straightforwardly emulating existing scribal practices: according to this lengthy online discussion, the double stroke hyphen was most common in the 15th century, single-stroke hyphens were certainly in use in 13th century French manuscripts (if not earlier), and that both ultimately derive from the maqaf in Hebrew manuscripts that was in use “by the end of the first millennium AD”.
So if you think Voynichese line-terminal ‘-am’ does encipher a hyphen, the original glyph as written was probably a double-stroke hyphen: moreover, I’d predict that Voynich pages containing many ‘-am’s were probably enciphered from pages that had a ruled right-hand line that the plaintext’s scribe kept bumping into! Something to think about! 🙂
|
OPCFW_CODE
|
In your Install/Setup guide, you’ve mentioned that you can setup influxdb either from the UI or command line; but if you set it up from the UI, you wouldn’t be able to see the token once you finished the setup process (there’s no token creation notification whatsoever; you are left with a created token under the API token tab).
If I set it up in the command line, however, I can see the new operator token because all of that information is saved directly to the local config file.
Did I miss anything here? If a new user chooses to set up in the UI, they could easily spend an hour or more reading the docs to no avail.
The difference between these two approaches is not documented.
It’s been a bumpy trial and error for me just to set up the database.
Thanks, I can see that; but under the API Token tab, all I can see is the token’s name and permissions it has, not the token itself; and when I go back to terminal and use influx config command, I was asked to provide the token itself, which I can’t see if I set it up via UI.
influx setup, on the other hand, would note down the setup info I entered into the configs file, which include the token itself.
I might have missed something when I do it from UI.
Yes, I am able to create all-access token from the UI and I can note down the token at the time of creation. It’s only when I setup my account the operator token doesn’t really show itself after creation like when we create an all-access token.
I tried several times today and read their documentation a few times. I believe they both have confirmed that we are creating an operator token at setup and all-access token is just an optional thing.
To be honest, I found InfluxDB’s documentation confusing at times.
Here’s an excerpt from InfluxDB’s setup guide on their website:
(Optional) Create an All Access API token.
During the InfluxDB initialization process, you created a user and API token that has permissions to manage everything in your InfluxDB instance. This is known as an Operator token. While you can use your Operator token to interact with InfluxDB, we recommend creating an all access token that is scoped to an organization.
To run setup with prompts for the required information, enter `influx setup` in your terminal:
Complete the following steps as prompted by the CLI:
Enter a primary username.
Enter a password for your user.
Confirm your password by entering it again.
Enter a name for your primary organization.
Enter a name for your primary bucket.
Enter a retention period for your primary bucket—valid units are nanoseconds (ns), microseconds (us or µs), milliseconds (ms), seconds (s), minutes (m), hours (h), days (d), and weeks (w). Enter nothing for an infinite retention period.
Confirm the details for your primary user, organization, and bucket.
Once setup completes, InfluxDB is initialized with the user, organization, bucket, and operator token.
InfluxDB creates a default configuration profile for you that provides your InfluxDB URL, organization, and API token to influx CLI commands. For more detail about configuration profiles, see influx config.
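For what it's worth, the same initialization can be done non-interactively, so the operator token never has to be hunted down in the UI; a hedged sketch (flag names as listed by `influx setup --help` in the v2 CLI; all values below are placeholders):

```shell
# Non-interactive setup; the CLI writes the operator token into its
# config file (~/.influxdbv2/configs), and it can also be listed later
# with `influx auth list`.
influx setup \
  --username admin \
  --password 'ExamplePassw0rd' \
  --org example-org \
  --bucket example-bucket \
  --retention 72h \
  --force
```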
I understand; but as a new user, I almost certainly would have to follow their installation and setup guides, and their guides really aren’t very straightforward; If a new user has to do trial and error with the setup guide, there’s room to improve.
|
OPCFW_CODE
|
[19:49] <airurando> Just read the thread on the loco contacts mailing list about the Ubuntu Drupal Theme.
[19:50] <airurando> Pity we never got the ubuntu ireland website up and running again.
[19:54] <zmoylan70b> someone would have to maintain it
[19:54] <airurando> I know zmoylan70b
[19:55] <airurando> never got someone with the skills interested
[19:55] <airurando> lots of initial interest but things always fizzled out
[19:56] <zmoylan70b> in other channels for podcasts i listen to they have put up sites and forums and then spend lots of time maintaining a site barely used
[19:57] <zmoylan70b> and keeping up with just security changes is an ongoing project
[19:57] <airurando> aye indeed
[19:57] <airurando> perhaps we are better off without it
[19:57] <airurando> still i lament its loss
[19:59] <zmoylan70b> as time goes by forums and websites have given way to social media.
[20:01] <zmoylan70b> bought a win8 laptop last week. still looking at how to dual boot it
[20:02] <zmoylan70b> cheapo argos end of catalog special
[20:06] <zmoylan70b> normally i'd just install ubuntu but this time i need windows to run firmware updates for old hardware
[20:14] <airurando> let me know how you get on.
[20:14] <zmoylan> currently working on making a clone of the hard drive so that if it fails i can get back to win8 and start again
[20:15] <airurando> smart thinking
[20:15] <zmoylan> win8 does not make it easy
[20:15] <airurando> no experience of win8
[20:15] <zmoylan> it's horrible
[20:16] <airurando> :-)
[20:16] <airurando> A small bit of positive activity on the forum
[20:16] <airurando> http://ubuntuforums.org/showthread.php?t=2100291
[20:16] <airurando> we are terrible at keeping up with activity on the forum
[20:17] <airurando> I only check it every few months or so.
[20:17] <airurando> Glad Ste_JDM got the delayed replies
[20:19] <AndrewMcC> airurando: You can have ubuntuforums e-mail you daily if there have been changes
[20:20] <airurando> AndrewMcC: Told you I wasn't a geek. how do I do that then?
[20:20] <AndrewMcC> Can't remember :) One second...
[20:20] <airurando> I barely know how to post a reply
[20:22] <airurando> Hah I can't edit settings as I have fewer than 25 posts!
[20:24] <AndrewMcC> Me too :)
[20:24] <AndrewMcC> Okay, go to the "Ireland Team" forum.
[20:24] <airurando> right
[20:24] <airurando> there
[20:24] <AndrewMcC> Just above the list of threads, there's a "Forum tools" drop-down. Select "Subscribe to this forum"
[20:25] <AndrewMcC> Change notification type to whatever you want, such as "daily update by email".
[20:26] <airurando> cheers AndrewMcC
[20:26] <airurando> done :-)
[20:26] <AndrewMcC> BTW, airurando, were you ever down at the Carlow coderdojo. Went last week for the first time to see what it was like.
[20:27] <AndrewMcC> Think they're nearly at the end of the term (possibly no sessions over summer), but might be interesting to see where it goes next academic year.
[20:27] <airurando> no I've never been. I know about it though.
[20:28] <airurando> I am curious but can offer no practical mentor help
[20:28] <airurando> AndrewMcC: Set one up in Athy and I'll bring my kids along :-)
[20:29] <AndrewMcC> Sure, I'll sort it out tomorrow ;)
[20:29] <airurando> How did you find the carlow coderdojo last weekend?
[20:29] <airurando> busy
[20:29] <airurando> well run?
[20:29] <AndrewMcC> They're fully subscribed. They have more computer space (it's in the top of the Carlow IT library), but they haven't enough mentors.
[20:30] <AndrewMcC> They only allocated 15 tickets to the senior group, but really there's space in the room for more like 30-40.
[20:30] <airurando> what languages do they teach?
[20:30] <airurando> or mentor
[20:30] <AndrewMcC> Last week it was HTML/CSS.
[20:30] <AndrewMcC> Because spaces are scarce, there's a problem with continuity. Somebody this week mightn't get in next week.
[20:32] <AndrewMcC> Not sure there are many programmers mentoring there. I'd be happy to help with other languages, like Python, etc. Another suggestion was to look into some other graphical stuff like Blender's animation and game extensions.
[20:32] <AndrewMcC> Consequently I started working through some Blender video tutorials and am starting to understand it better.
[20:33] <airurando> great stuff
[20:35] <airurando> I suppose the nature of the coderdojos is fairly organic. People will come and go. If spaces are very limiting perhaps folks can't get tickets consistently.
[20:36] <AndrewMcC> Yep. The other guys there said the tickets were usually snapped up within minutes of becoming available each week.
[20:37] <airurando> yeah that is a problem. demand dramatically outstripping supply will surely hamper consistent attendance.
|
UBUNTU_IRC
|
Will the final release include the SDK and python part of the modding possibilities ?
No. Eventually things will be migrated over to Python, but initially it won't be present.
Somewhat semi-off-topic, but... are there any IDEs for Python that can be considered recommendable? As one of those pesky .NET developers, I'd prefer anything that remotely resembles a contemporary version of Visual Studio (I'm looking at you, Intellisense).
I'm really, Really, hoping it doesn't take them forever to get us the proper tools we'll need. There's a large chunk of the Dragonlance Mod I simply won't be able to do without the ability to access the Python to change some of the Core Mechanics or at least to "over-ride" them for the Mod. We'll also need the plug-ins and associated animation tools to be able to import our custom animations for the custom models into the game.
Indeed I completely understand it will take a good deal of time for them to get these things to us, but with the "Moddability" of Elemental being one of the key aspects of the game itself, getting us these tools should be really high up on their priorities of "things to do". There are a Lot of really great Mods being planned out, and a Lot of Total Conversion Mods, that won't even be possible until we have these tools. Also we'll be getting a bunch of new Mod Content with the "60 Days Later Modding Package" and I imagine there are other Mod Makers out there like myself who will be waiting for this package to see exactly what content we'll be getting for use in our Mods. I don't think any of us doing "Major Mods" want to bust our ass doing a lot of work just to have that work made obsolete by a Mod Package that's coming 2 months from now.
I'm really hoping I'll be able to get the Mod done and released before I have to go in for my upcoming Hospital stay which should be some time in the next 6 months (hopefully sooner). Of course the financial aid for the Health Department here goes super slow so it's highly possible I could kick the bucket before I get the mod done..LoL.
I guess it depends on how they are going to allocate their resources. I imagine migrating everything into Python will be a pain in the ass (not difficult, just tedious).
This is true. Also as we do know they Do put a high importance on modding and for us to have the ability to make really good mods. I'd guess they'd have the resources we need ready for us at about the time Frogboy's Mod comes out. See, they have to hold back the good stuff long enough for him to have time to show off with it, then we'll get our hands on it...lol. Hopefully it'll be something we get either with the "60 Day Mod Package" or possibly some time before. (seriously hoping for "before")
Thanks a lot for the link
Ohh, neat. Thanks.
There are many great features available to you once you register, including:
Sign in or Create Account
Copyright © 2019 Stardock Entertainment and Valve Corporation. Elemental: Fallen Enchantress and Fallen Enchantress: Legendary Heroes are trademarks of Stardock Entertainment. Steam and the Steam logo are trademarks and/or registered trademarks of Valve Corporation in the U.S. and/or other countries. All rights reserved.
|
OPCFW_CODE
|
Resources for QRAM implemented as a subroutine for quantum algorithm
I am currently working on a project on a higher version of amplitude amplification, and for that we want to store the initial state (which will be some sort of superposition) into a QRAM. Now we want to simulate the working of the algorithm and I am supposed to make a QRAM, but most of the papers which use QRAM as a subroutine have not shown an implementation; furthermore, the bucket-brigade quantum RAM is actually not able to give out a superposition of the quantum states which have been stored into it.
Ideally we want the QRAM to work something like this,
$$
\sum_{j=0}^{2^q-1} \alpha_j | \text{adr}_j \rangle|0\rangle \xrightarrow{\text{QRAM}} \sum_{j=0}^{2^q-1}\alpha_j |\text{adr}_j\rangle|m_j\rangle
$$
Anyone who has any idea which QRAM will be able to query out a superposition of states stored in memory cells, please provide some reference. Any help is greatly appreciated!!
Where is $m_j$ in the input state? The way you've written it, it's like the QRAM is preparing it instead of retrieving it. Did you actually mean to write the input state as $|adr_j\rangle|0\rangle|m_0\rangle|m_1\rangle \dots |m_{2^q-1}\rangle$?
Also you didn't include the write operation. (If you don't need to write, you want a QROM not a QRAM. QROM is cheaper.)
@CraigGidney So yes, I agree that we can use a QROM too, but for some reason the prof I am working under wants QRAM (perhaps at a later stage we will be modifying the states too). But if you have any ideas how it can be done using QROM, please go ahead.
The notation I have used can be seen in a lot of papers(https://arxiv.org/abs/2002.09340 one such paper) and the way I see it is, you input the superposition of address states and a 0 state for output qubit. But once it passes through the QRAM, the address qubits remain the same whereas the memory qubits contain the information in the cell.
Your prof explicitly said QROM was not good enough? It's common for people to use "QRAM" to ambiguously refer to a few different things, including QROM.
I totally get your point. If you have any idea how to query out a superposition of the states stored in QRAM. Please let me know
If you are looking for implementations of quantum memories, you can find a few in this QRAM library for Q#. I am not quite sure what you mean by the bucket brigade protocol is not able to readout entangled data, but as far as I understand you can do that with bucket brigade (we used it in this example that implements Grover's algorithm with a bucket brigade QRAM).
I agree with Craig, clarifying with your prof whether you need just read operations (QROM) or read/write (QRAM) is important here.
|
STACK_EXCHANGE
|
Make Closure Compiler merge several object property declarations as one big literal
I split my code into several files, and then run a script to merge and compile them (with ADVANCED_OPTIMIZATIONS). A big part of the functionality is implemented in a single object's prototype.
So when merged, it could look something like this:
(function(){
/** @constructor */ function MyConstructor() {};
MyConstructor.prototype = {};
MyConstructor.prototype['foo'] = function() { alert('foo'); };
MyConstructor.prototype['bar'] = function() { alert('bar'); };
MyConstructor.prototype['baz'] = function() { alert('baz'); };
window['MyConstructor'] = MyConstructor;
}());
If you put that code into Closure Compiler just like that, here's the output (pretty-printed):
function a() {
}
a.prototype = {};
a.prototype.foo = function() {
alert("foo")
};
a.prototype.bar = function() {
alert("bar")
};
a.prototype.baz = function() {
alert("baz")
};
window.MyConstructor = a;
The question is, is there some way I could tell Closure Compiler that it's ok to merge all of these in a single object literal, and even if there was code in-between (in this example there isn't, but there could be), so that no matter what, it made it all compile into one big object literal?
Here's a couple of solutions, and why they wouldn't work for me:
Solution 1: Simply declare them in one big object literal.
Wouldn't work because I have my code into several files, and I plan to make it so users can remove some of them (if they don't need them) prior to compilation. Object literals have comma-delimiters that would make this a nightmare.
Solution 2: Declare all functionality outside of the object (as private variables in the closure), and attach them into a simplified object literal at the end, which just has references to properties (such as {'foo':foo,'bar':bar,'baz':baz}).
Wouldn't work because, as said, the idea is to create something modular, and removing one file would make the reference break.
I'm open to ideas!
Edit: Some people could think that Closure Compiler can't do this. It can do this and much more, it's just that it has a bad attitude and does things when it feels like it.
Input this into Closure:
(function(){
var MyConstructor = window['MyConstructor'] = function() {};
var myProto = {
'foo': function() { alert('foo'); },
'bar': function() { alert('bar'); }
};
myProto['baz'] = function() { alert('baz'); };
MyConstructor.prototype = myProto;
}());
The result is:
(window.MyConstructor = function() {
}).prototype = {foo:function() {
alert("foo")
}, bar:function() {
alert("bar")
}, baz:function() {
alert("baz")
}};
See? But this code is very fragile in that it may compile into something completely different (and not that good) if modified slightly. For example, even a variable assignment somewhere in the middle might cause it to output very different results. In other words, this doesn't work (except in this case).
Edit 2: see this jsperf. A big object literal is faster in Chrome (proportional to its size).
Edit 3: Closure Compiler bug report.
@wvxvw Closure is very different from regular minifiers, it is really a compiler. It does things that go way beyond what others do, at least with ADVANCED_OPTIMIZATIONS on. What I mean is that it already did many other much more complicated things that could potentially have broken my javascripts severely (and sometimes did, luckily I have unit tests to know when that happens). Also, there is a case in my code when Closure compiler DOES exactly this! I'll edit with an example.
@wvxvw It's not bad. There are three compilation modes. One removes WHITESPACE_ONLY (super safe), the other does SIMPLE_OPTIMIZATIONS (the kind of thing that should be safe in 99% of cases, think YUI compiler), and the one I like is when it does ADVANCED_OPTIMIZATIONS, in other words, there's a high chance it will break your code unless you're very careful, but on the upside, the code runs faster, and is waaay smaller than what your garden-variety minifier could do.
Why do you want it to be assigned using a single object literal syntax? Is it because of file size? If so, are you gzipping? If you are, I'm guessing that it won't make any difference.
@user1689607 I'm gzipping (in any case this is a library), but it's not only because of file size, there should be a small performance gain. I mean, compare creating an empty object and setting properties on it one by one, against just creating it once, with all properties set. It should be faster to create it only once. A bit of informal benchmarking on my part seems to confirm this, in Chrome.
@CamiloMartin: With all due respect, I certainly believe you that a direct comparison of the two approaches shows a single object assignment to be faster than an empty object assignment followed by multiple property assignments, but if you compare the entire loading of the library given the two approaches, I'd have a hard time believing that the difference is perceptible. If this were some code that needed to run multiple times in rapid succession, those minor optimizations may be worthwhile, but for a one-time operation, I wouldn't spend too much time trying to outsmart the compiler.
Not that I'm trying to minimize your question. But I've spent enough time trying to tweak my code for Closure Compiler to know that it has rarely made any real difference.
@user1689607 Well, your point does make sense, I guess it's a bit of stubbornness. If I can't find a way, I'll give up on it and just make it property assignments.
@wvxvw Do you mean how jsPerf works? It executes the code several times (like, in this case, thousands), so it makes a more or less representative average. There's no need to write the benchmark loops themselves with jsPerf, since it does that already. You might want to read its FAQ, or check out the engine it uses, Benchmark.js.
@wvxvw Well, the idea is to benchmark speeds. I understand that faster code might not be better code (sometimes it can be buggy or harder to mantain), but it is possible to measure its speed. It has the advantage of being an objective metric, and objectiveness sometimes creeps into places where a qualitative measure would be better. I won't say that's not the case with my micro-optimization effort...
@wvxvw Well, the number is in operations per second. So each snippet is ran (possibly with setup/teardown) in a loop, until it can average some stable number of operations per second. The bigger the number, the faster the code runs. If one item says "10% slower", it means "10% slower than the fastest snippet" - the idea is to compare similar code and see which one should be used based on speed (assuming they work the same or are compatible for a certain case).
You are correct in that the compiler does not do this optimizations currently. Can you post a feature request (and be sure to reference your jsperf results)? http://code.google.com/p/closure-compiler/issues/list
@ChadKillingsworth Yes, I'll do that and post a link here.
@wvxvw Well, I wouldn't say JsPerf is silly - if you have two chunks of code (potentially very big code, if you include a library), it can easily tell you which one runs faster. What I don't understand is why you say it isn't comparing them to one another, for me it is. For example, a result like this: 767,727 ±8.03% 61% slower means that a certain snippet runs at some 767,727 operations per second, within a 8% error margin, and is 61% slower than the fastest snippet.
@wvxvw Well, faster than the other snippet(s). So if you know five ways of doing the same thing, you make five snippets, and run the test. The one which says "fastest" is the fastest one. (I can't see the word "faster" in the page, there's just "fastest" and "slower").
There is a workaround I'm doing. It might not apply, so this answer cannot be accepted. Still, this is what works in my case.
My folder structure looked like this:
src
├───components
└───core
And, before compilation, I merged src/intro.js, some files at the src level (in a specific order), then all of the files in core (any order), then all in components (any order), then outro.js.
Now, the folder structure looks like this:
src
├───components
│ ├───modules
│ └───plugs-literal
└───core
├───internal
├───modules
└───plugs-literal
And the compilation order is (note the part with arrows):
src/intro.js
a couple files in src/core, specific order.
All files in src/core/internal
src/core/plugs-literal-intro.js <--
All files in src/core/plugs-literal <--
All files in src/components/plugs-literal <--
src/core/plugs-literal-outro.js <--
All files in src/core/modules
All files in src/components/modules
src/outro.js
The idea is that one file contains the beginning of an object literal, another file has the closing of an object literal, and two folders contain properties. More or less like this:
src/core/plugs-literal-intro.js:
var myObjectLiteral = {
'someSimpleProp': 'foo',
'someOtherSimpleProp': 'bar',
'lastOneWithoutTrailingComma': 'baz'
src/core/plugs-literal/EXAMPLE.js:
,'example': function() { alert('example'); } // comma before, not after.
src/core/plugs-literal-outro.js:
};
If this introduces some unwanted problem, I'll know later. But then, I could assign a different folder to contain prototype properties declared individually.
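As a sketch of the merge order described above (assuming a Unix shell; the file contents and the COMP.js name are hypothetical stand-ins, not from the original project), straight concatenation yields a single object literal:

```shell
set -e
# Recreate the relevant part of the folder structure
mkdir -p src/core/plugs-literal src/components/plugs-literal build

# Intro opens the literal and holds the only comma-free properties
printf "%s\n" "var myObjectLiteral = {" \
              "    'someSimpleProp': 'foo'" > src/core/plugs-literal-intro.js
# Each removable file contributes one comma-prefixed property
printf "%s\n" ",'example': function() { alert('example'); }" \
              > src/core/plugs-literal/EXAMPLE.js
printf "%s\n" ",'compProp': 42" > src/components/plugs-literal/COMP.js
# Outro closes the literal
printf "%s\n" "};" > src/core/plugs-literal-outro.js

# Concatenate in the order listed in the answer
cat src/core/plugs-literal-intro.js \
    src/core/plugs-literal/*.js \
    src/components/plugs-literal/*.js \
    src/core/plugs-literal-outro.js > build/merged.js

cat build/merged.js
```

Because every file inside the plugs-literal folders starts with a comma, any subset of them can be deleted before merging without breaking the literal.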
|
STACK_EXCHANGE
|
Can't get twig filter in macros
Winter CMS Build
1.2
winter/storm dev-wip/1.2 91993e2
winter/wn-backend-module dev-wip/1.2 0258f85
winter/wn-cms-module dev-wip/1.2 a998ed7
winter/wn-system-module dev-wip/1.2 c5295ca
twig/twig v3.4.1
PHP Version
8.0
Database engine
MySQL/MariaDB
Plugins installed
No response
Issue description
Getting an error when using |page twig filter in a macro.
An exception has been thrown during the rendering of a template ("Undefined array key "this"").
Steps to replicate
composer create-project wintercms/winter winter12 "dev-wip/1.2 as 1.2"
php artisan winter:install
php artisan winter:env
php artisan winter:up
Then add {{ 'plugins' | page }} in default layout or home page.
Getting link to plugins page is ok.
If the same filter is called from within a macro, it fails.
Add a macro to the layout and call it:
{% macro test() %}
{{ 'plugins' | page }}
{% endmacro test %}
{% import _self as home %}
{{ home.test() }}
This results in the error.
Workaround
No response
I can confirm this bug, I am using this trick on a link macro to mark the link as the currently visited one:
{% macro dropdownLink(href, label, target = '_self') %}
<li>
<a href="{{ href }}"
target="{{ target }}"
class="dark:hover:bg-jacarta-600 hover:text-accent focus:text-accent hover:bg-jacarta-50 flex content-center rounded-xl px-5 py-3 transition-colors text-lg font-display text-sm leading-normal
{% if (''|page == href) %}text-accent-dark dark:text-white{% else %}text-jacarta-700 dark:text-jacarta-200{% endif %}
">
<span class="leading-none">{{ label }}</span>
</a>
</li>
{% endmacro %}
I'm using this trick on many websites and this occurred when migrating from 1.1.8 to 1.2.
I made some investigations:
This is probably due to the PR https://github.com/wintercms/winter/pull/455 by @LukeTowers.
As I understand this PR, he dissociated the Twig base environment from the cms extension.
Doing so caused the CMS environment to no longer overlap Twig's own, making the this context key unavailable from the original Twig MacroNode context here.
I made a dirty test to convince myself, replacing the twig package line linked previously by this one:
->write("\$context = \$this->env->mergeGlobals(['this' => 'test',\n")
Doing so makes the this key found (of course another error is raised because the controller key can't be found in the this array).
I looked into the Twig documentation and found how to add a global variable. I was able to fix this issue by modifying the initTwigEnvironment method from the cms Controller class:
protected function initTwigEnvironment()
{
$this->twig = App::make('twig.environment.cms');
$this->twig->addGlobal('this', ['controller' => $this]);
}
Unfortunately, I am not sure if this is the most efficient way to do this, or whether other global variables should be injected to prevent related issues.
Good find @RomainMazB !
Whether or not the global var is the right fix, at least we have a good pointer on where the problem lies.
@LukeTowers I confirm this resolves the problem I was having in this discussion:
https://discord.com/channels/816852513684193281/816852817267785758/988861851582464000
@LukeTowers @bennothommo @jaxwilko can you give your thoughts about the suggested fix from @RomainMazB above?
I can submit a PR for it in the morning.
Thanks.
On the outset it looks fine, but Luke's original PR was trying to avoid too many global attributes polluting the Twig environment.
And I'm presuming that the | page filter works outside the macro, so I'm not sure why it would not work within it, given that they're supposed to share the same scope.
@mjauvin, actually I agree with @bennothommo on the fact that this is probably not the right fix (I would have directly submitted a PR for it if I were not convinced of that).
But I disagree with @bennothommo when he says that the macro shares the same scope as the page or layout.
To me there is actually no shared scope anymore since PR #455; all the page and layout variables are filled here:
https://github.com/wintercms/winter/blob/wip/1.2/modules/cms/classes/Controller.php#L309-L317
and then passed to the template renderer by parameter here:
https://github.com/wintercms/winter/blob/wip/1.2/modules/cms/classes/Controller.php#L419-L433
which means that the page and layout receive their context through the Twig's render method parameter, not as a scope.
The macro node doesn't even go through the Controller class: because macro is a Twig native feature, the macro is just compiled "as is" with nothing else but its parameters (and some global variables that we don't want to introduce).
To me a good fix would be to create a custom MacroNode inside the cms module to modify the compiler rendering. And the same way the PageNode uses the extension pageFunction method which is actually just a "proxy" to the Controller's renderPage method, the MacroNode should rely on an extension macroFunction method which would call a Controller's renderMacro method which would inject these variables.
This is more or less how the partial node works, and so do the content and component nodes (with no context for contents, and the component context for components).
Hmm, sounds interesting @RomainMazB, as long as @bennothommo doesn't have any objections I'd be interested in seeing a PR that implements those suggestions. Sounds like we need to also expand our testing suite a bit in this area as well.
I'll wait for Ben's feedback before working on this. I could submit a PR including tests this week.
@bennothommo can you give your thoughts on this so that we can resolve this?
Thanks!
I'm happy for you guys to proceed with a PR. Might have a couple of comments then, but will be better to test things in action.
@damsfx @RomainMazB @mjauvin @bennothommo I believe I've come up with an acceptable fix in https://github.com/wintercms/winter/commit/5b8d189df4cd1f7fdf8e318f111543b85a0e2613, please test and verify (note: splitting script needs to be run first to make it available via composer, will try to run that myself a bit later tonight but if it's not available yet when you test either apply the changes locally or if you're @bennothommo run the split script for me please 😜)
Well, looks like we have a winner!
This resolves the issue with the DynamicPDF plugin as well.
This issue is still present in the 1.1.9 branch, as well as #557 .
Still need to downgrade twig/twig from v2.15.1 to v2.14.3.
@damsfx which part of the issue? There were a few different bugs at play here, what error are you getting?
@LukeTowers the issue due to calling page filter in macros seems to have gone after running composer update and a cache clean.
The only one remaining is calling the trans filter {{ 'anything' | trans }}
Object of class Winter\Storm\Foundation\Application could not be converted to string
P:\_Sites\_Labs\winter\vendor\twig\twig\src\Node\Expression\CallExpression.php line 304
@damsfx apply https://github.com/wintercms/storm/commit/1b866f037ea928f73dca51e9d38fe4dc3efe5484 and see if that fixes it. If it does then feel free to submit a PR to the 1.1 branch backporting that temporary fix and you can use the dev-1.1 version in your composer.json to pull it in after the PR is merged.
Done:
https://github.com/wintercms/storm/pull/98
|
GITHUB_ARCHIVE
|
using System;
using Grpc.Core;
using MagicOnion.Client;
using MagicOnionTestService.LobbyMessageTest;
namespace ConsoleClient
{
class Program
{
static void Main(string[] args)
{
// Initialize the gRPC channel and the StreamingHub client
var channel = new Channel("localhost:12345", ChannelCredentials.Insecure);
var receiverChat = new ReceiverChat();
var chatHub = StreamingHubClient.Connect<IChatHub, IChat>(channel, receiverChat);
chatHub.JoinAsync("Console Player", "Console Room").Wait(); // block until the join completes (JoinAsync returns a Task)
Console.ReadLine();
}
class ReceiverChat:IChat
{
public void OnJoin(string name)
{
Console.WriteLine($"{name} Join.");
}
public void OnLeave(string name)
{
Console.WriteLine($"{name} Leave.");
}
public void OnSendMessage(string name, string message)
{
Console.WriteLine($"{name}: {message}");
}
}
}
}
|
STACK_EDU
|
Building for voice is more than writing a simple program and you're done. A good voice skill or action has many components that work together. Mark and Allen discuss what some of those components are, how they integrate, and what you should think about as you write them.
Number Spies System Components: https://markvoicedev.medium.com/creating-an-alexa-game-number-spies-system-components-overview-41bf142d0b3c
Even before you start with a blank editor, you're faced with coming up with the idea. When it comes to Voice - what inspires us? Allen and Mark talk about where our ideas come from and how they start to shape our #VoiceFirst experiences.
Mark writes about what inspired him to create Number Spies: https://markvoicedev.medium.com/creating-an-alexa-game-the-spark-of-inspiration-for-number-spies-7f2b5a073a41
With all the confusion about Daylight Saving Time transitions finally behind us, Mark and Allen discuss all sorts of ways to handle time on Alexa, the Google Assistant, and Bixby. (And some tools and tips that make it easier!)
Where we are now has been shaped by our past. In light of this, Allen and Mark discuss how we got to this moment. What technologies and jobs have we held in our careers, and what lessons have they taught us that have helped us when it comes to voice.
Main docs page: https://developers.google.com/assistant
Community forum: https://www.reddit.com/r/GoogleAssistantDev/
Stack Overflow: actions-on-google and actions-builder tags
Actions on Air video / podcast series
Follow other GDEs - many have tutorials about various topics.
ES Docs: https://cloud.google.com/dialogflow/es/docs
ES community Forum: https://groups.google.com/g/dialogflow-essentials-edition-users
Stack Overflow: dialogflow-es and dialogflow-es-fulfillment
Developer Docs: https://developer.amazon.com
Alexa Skills Kit blog: https://developer.amazon.com/en-US/blogs/alexa/alexa-skills-kit
Dabble Lab: https://dabblelab.com/
Voice First Tools: https://voicefirst.tools/
APL Simulator: https://tools.alexaskills.dev/
Main page: https://www.jovo.tech/
Jovo Community: https://github.com/jovo-community
Authentication and Authorization are some of the more difficult concepts that most developers end up having to deal with at some stage. Mark and Allen discuss the high level concept of Account Linking - connecting your auth system to the voice agents auth system. Alexa and Google Assistant offer some tools to help with this, and explore how some of the tools are similar, but others offer significantly different experiences for both users and developers.
As developers, the more information we can get about people talking with our skills or actions, the better the conversation will be. But privacy is a serious issue! (And one the platforms take seriously, too.) How does Alexa and the Google Assistant balance our need for more information, and the need for privacy? And how can we ask for permission to get the information? There are surprising differences and similarities that Allen and Mark explore.
We never know where our conversations go, sometimes. This time, Mark and Allen chat about Intents, Slots, Types, Entities, Parameters, and the whole conversational model built around them, especially the slight differences between how Actions and Skills have to treat them.
Just because our skills and actions are Voice First doesn't mean they are voice Only. Alexa and the Google Assistant have a long history of supporting displays in addition to the audio interactions. Mark and Allen dive into all the visual options available for Alexa, Assistant, and Bixby and the interesting differences between the three platforms.
Did you notice some Actions were having problems last week? Allen and Mark certainly did! So this week, we're talking about what seems to have caused the outage, how this fits in to the overall storage capability for Actions, and how Alexa and Jovo approach session and user storage.
In an audio-first environment, you want to sound like a movie or TV soundtrack... but with interaction and dynamic responses. With Google's flavor of SSML and Alexa's APLA, you can create these responses. Mark and Allen explore how these two methods are similar, and where they differ.
For more info:
Google's SSML "par" and "media" tags: https://developers.google.com/assistant/conversational/ssml#par
Nightingale SSML editor: https://actions-on-google-labs.github.io/nightingale-ssml-editor/
Alexa's APLA: https://developer.amazon.com/en-US/docs/alexa/alexa-presentation-language/apla-interface.html
We both have open source projects that we contribute to in the voice community. We talk about our two top ones, Speech Markdown and Multivocal, what they are, and how we feel they're contributing to the growing #VoiceFirst environment.
Mark's Projects: https://github.com/rmtuckerphx
Allen's Projects: https://github.com/afirstenberg
Mark and Allen chat about tools we use to build conversations for Alexa and the Google Assistant. Ranging from new tools, such as Alexa Conversations and Google's Actions Builder, to more mature tools, such as Jovo and Dialogflow. We got so excited about the topic, we just couldn't stop!
Alexa Conversations: https://developer.amazon.com/en-US/blogs/alexa/alexa-skills-kit/2020/07/introducing-alexa-conversations-beta-a-new-ai-driven-approach-to-providing-conversational-experiences-that-feel-more-natural
Actions Builder: https://developers.google.com/assistant/console/builder
Learn about Action Links for Google Assistant and Quick Links for Amazon Alexa. A comparison of the features for each voice assistant platform.
Action Links - https://developers.google.com/assistant/engagement/action-links
Quick Links - https://developer.amazon.com/en-US/docs/alexa/custom-skills/create-a-quick-link-for-your-skill.html
|
OPCFW_CODE
|
Apr 7, 2012. I could not figure out why the following did not fix grub: sudo update-grub. set root=(hd0,6) set prefix=(hd0,6)/boot/grub insmod normal normal.
Stuck at "prefix not set" and cannot install :: Steam Universe – OK, so as the subject says, I'm stuck at the Grub welcome screen, with a "prefix" not set error. I'm installing on bare metal (ie not a VM), on a system with EFI.
This all happened more or less "automatically;" when you installed Linux it set up and configured Grub. error, and then remember, what was "real" and what wasn’t. At this point I could at least get either Linux or Windows booted, but this.
I'am new to Ubuntu, and this is the first time that I'am trying it. Well I installed Ubuntu 11.10 on my system using Wubi. I installed it on.
GRUB2 booting: efidisk read error & prefix is not set. up vote 1 down vote favorite. everyone!. background_image /boot/grub/background.jpg set default=0. set.
In terms of warnings/messages, there's the one grub boot message about the "prefix not being set". This is new in Natty, but it is a known issue and harmless.
Jun 18, 2013. error: "prefix" is not set. error: efidisk read error. USB stick is formatted in FAT32 and has a partition table msdos. Here is my /boot/grub/grub.cfg.
Welcome to GRUB! error: unknown filesystem. set prefix="(hd0,0)/boot/grub" set root=". Welcome to GRUB2! error: file not found. grub rescue>ls
Before I disabled the ACPI (because it is not supported by Ubuntu) Ubuntu can't boot (obviously!) At that time when I boot Ubuntu, I always get these 2 messages.
Then once I restarted my laptop it said "unknown filesystem" "Grub Rescue" & I. ex: set prefix=(hd0,msdos2)/boot/grub; If you are not getting an error: Congrats,
Jul 4, 2014. set prefix command. Many GRUB 2 commands will not work until the correct path is set. Example: If the Ubuntu system is on sda5, enter:set prefix=(hd0,5)/ boot/grub. An error message usually means the path is incorrect.
Just before GRUB loads, an error appears for a fraction of a second ERROR: "prefix" not set What does this mean? Nothing wrong is happening!
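Pulling the scattered snippets above together, the usual recovery sequence at the `grub rescue>` prompt looks something like this. Note that `(hd0,5)` is only an example; substitute the partition that actually holds `/boot/grub`, which you can find with `ls`:

```text
grub rescue> ls                            # list partitions, e.g. (hd0,1) (hd0,5)
grub rescue> set prefix=(hd0,5)/boot/grub  # the path GRUB loads its modules from
grub rescue> set root=(hd0,5)
grub rescue> insmod normal
grub rescue> normal                        # boot into the normal GRUB menu
```

Once Linux is booted, make the fix permanent with `sudo update-grub` and, if needed, `sudo grub-install /dev/sda`.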
|
OPCFW_CODE
|
I would like if there is some project developed in Java to learn Swing best practices. I mean an open source project hosted on the Internet through SVN or similar. I've been reading some questions in Stackoverflow about this topic but I would see some projects. Thanks.
closed as off-topic by Kevin Panko, Joshua Taylor, Henry Keiter, showdev, Jarrod Roberson Aug 1 '14 at 21:57
This question appears to be off-topic. The users who voted to close gave this specific reason:
The way I learned Swing best practices was reading the Swing source code in the JDK and practice. Follow Sun's practices and you'll be on the right path.
Read the implementations of JTable, JTree, JScrollPane, the various LookAndFeels, SwingWorker, and SwingUtilities. Their event handling and MVC patterns are extremely complex but very readable, maintainable, and extensible. Essentially, every time you use a component, go read the source code and understand what they do and why. Eventually, you'll start doing the same thing.
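The model/listener split mentioned above can be explored without any GUI at all, since Swing's models are plain observable objects. A minimal sketch (the class name is my own):

```java
import javax.swing.table.DefaultTableModel;

public class ModelEventDemo {
    public static void main(String[] args) {
        // The model holds the data; views and other listeners merely observe it.
        DefaultTableModel model = new DefaultTableModel(new Object[]{"Name"}, 0);
        final int[] fired = {0};
        // TableModelListener has a single method, so a lambda works.
        model.addTableModelListener(e -> fired[0]++);
        model.addRow(new Object[]{"Alice"});
        model.addRow(new Object[]{"Bob"});
        // One change event per insertion; the model works headlessly.
        System.out.println(fired[0] + " events, " + model.getRowCount() + " rows");
    }
}
```

A JTable wired to the same model would repaint itself on those exact events, which is the MVC pattern the JDK source demonstrates throughout.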
And most importantly, code. Write some large programs, and you'll start seeing things that don't seem right or optimal. Come on to Stack and find out what you're doing wrong or could do better. Write something else, and do the same.
That said, the following projects have (or probably have) good code:
SwingX - The maintainers of SwingX were Sun/ex-Sun people, and I've always thought of it as unofficial incubator for future Swing features.
Squirrel (a JDBC client) - I haven't actually looked at their source, but I've used the program for years and it doesn't show any of the common flaws in badly written Swing programs. Given how powerful it is and how well it works, I would put money on it being extremely well written.
InfoNode (a powerful docking framework) - I've gone through quite a bit of their code, and it's pretty solid.
JFreeChart (a powerful charting library) - Not the greatest code, but much better than average, especially taking into account that it's an old project that was never really intended to do everything people are using it for. That said, it is very extensible, and I've never had a problem molding it to my needs (which are much, much more than its original intentions).
GlazedLists (a highly performant event-based list library) - Not exactly Swing, but its event handling is similar to Swing's. They also have implementations of Swing models and some custom components that are extremely fast.
JIDE Common Layer: A massive collection of custom components that serves as the basis for their commercial components. I haven't gone through their code, but their components are beautiful, and since their primary focus is commercial Swing components, once again, I'd put money that their code is solid.
I found it very interesting to see a professional-quality application written in Swing, such as
|
OPCFW_CODE
|
I have always hated shared hosting and I'm never going back to Bluehost, HostGator, etc., because they're always very unreliable. The problem with shared hosting is that the servers are just too crowded; it also doesn't let you customize your environment, and it has many limits, like emails sent per hour, max bandwidth, file size limits, etc. You can use sameid.net to check how many websites are actually hosted on the same server by entering the server's IP address. Below you can see one of Hostgator's servers; some of them even have more than 300 websites on the same server. And then I thought, what about dedicated servers? Nope, they're just too expensive. Also, you don't need 8 or 16 gigabytes of RAM and 1 TB of HDD for only a tiny WordPress site. So, I moved to IaaS cloud hosting. It wasn't cheap, but it wasn't too expensive either, and it was worth the price. So, I signed up for DigitalOcean and installed … [Read more...] about How Can You Scale WordPress On Openshift Origin?
Github wordpress plugin
Bluehost is an officially recommended hosting company for the WordPress platform by WordPress.org. In recent times, it has also raised a lot of questions from experts around the globe regarding its contribution to the WordPress open source community. Even I had similar questions until, during the 2015 State of the Word address, WordPress's co-founder shared how Bluehost tackled their 1.6 million outdated WordPress installations. There are a lot of takeaways here for all WordPress users and hosting companies, and it's a vote of trust for existing Bluehost users. Bluehost hosts about 2 million WordPress sites, and in a recent audit they realized almost 80%, i.e. 1.6 million, WordPress installs were outdated. This is a major security risk and one of the major reasons for WordPress sites on Bluehost being hacked. Needless to mention, a hacked site requires immediate support from the hosting company, which increases the running cost for the company and also creates negative emotions on … [Read more...] about How Bluehost Secured 1.6 Million Outdated WordPress sites
We have various ways to install WordPress: we can download the official .zip WordPress installation file and install WordPress manually, or hosting companies like Bluehost or Hostgator offer one-click WordPress installation with the help of scripts. One common thing after installing WordPress is that we need to install essential WordPress plugins. By default WordPress comes packed with plugins like Akismet and Hello Dolly, but to extend the features of WordPress, we have a long list of plugins that we need to install. For users whose job is to install multiple WordPress sites on a given day, this adds a lot to their workload. WordPress does offer a profile feature, where you can mark plugins as favorites and quickly install them, or you can take advantage of a bulk plugin installation plugin to install multiple plugins at once. Either way, you need to do a lot of manual work, and we all know "saving time is saving money". Today, I will be talking about one WordPress tool called WPRoller, which … [Read more...] about Create Custom WordPress Installation with Plugins
I first started creating web pages on Yahoo GeoCities, Tripod, etc. back in the early 2000s. It was fun. Later, I tried blogging platforms like Blogger.com and WordPress.org and also started creating HTML web pages without any coding skills (thanks to Microsoft FrontPage). And that's how I started web development. Now with the advent of web 2.0, people want more. People no longer like static HTML web pages with dull designs. They want rich features with a great user experience. That's how blogging platforms became massively popular. I get a lot of emails (because of this Make Money Writing blog post) from my readers asking for a platform to share their articles or work online. And I used to recommend a blog — a WordPress blog to be more specific. But then I realized that a blog may not be relevant for everyone, as things change from one person to another. If you're a technology enthusiast then starting a blog makes sense. But if you're a writer and want to … [Read more...] about 51 Blogging And Publishing Platforms To Showcase Your Awesomeness
This is 2017, and the competition amongst webpreneurs continues to be fierce; only the fittest, well-informed, and action-takers have a chance to survive. To help the less privileged who might not have the budget for running the huge and fearsome advertisements of the top guns in the industry, Google introduced Accelerated Mobile Pages (AMP), which helps improve the mobile experience of end users. Understanding that not so many people know about this, let alone how to go about applying it on their WordPress blogs to start enjoying the benefits themselves, I decided to write this guide on what Google AMP is about, why it is important, how to install it, and the configuration. I hope you will find it useful. What is Google AMP? Google AMP is short for Google Accelerated Mobile Pages, which is geared towards improving the loading speed of mobile websites so they can function optimally, thereby … [Read more...] about Beginners Guide To Setting Up Google AMP For WordPress Blogs
|
OPCFW_CODE
|
I’ve been spending most of the week redesigning the current communications schema between BLIde and any BlitzMax application being debugged, and the next BLIde version will be using the following schema:
In this new schema, BLIde uses 3 threads at the same time while an application is being debugged. This way, BLIde keeps its main thread for user interaction and responsiveness, and also adds a secondary thread to deal with all data being collected from the Standard Error Pipe of the application being debugged.
This Debug Reader thread collects all information emitted by the internal debugging process of the debugged application and brings it back to the main thread, in order to display it to the user. This internal data exchange is done using synchronized event invocation, so everything is parallelized properly. The important thing in this new design is that, if the standard error pipe gets overloaded because of a missing flush in the debugged program, or because the debugged program crashed, BLIde will still be 100% responsive and will be able to track the issue and eventually kill the debugged application process in a clean way, releasing any associated resources.
In addition to the Debug Reader thread, there is also an Output Data Reader thread. This thread takes care of the standard output stream of any BlitzMax application. Any data written to the default output (for instance, by any Print command) will be captured by this thread. It will also perform all the needed Unicode conversions and, only when all the data is ready to be displayed in the BLIde console, pass it to the BLIde main thread to be shown to the user. This separate thread ensures that data written to this pipe does not cause parsing errors on the standard error pipe, and that synchronized operations do not stall any other read/write/interact operation.
The BLIde main thread will also send any debug requests (such as stack traces, object dumps, etc.) to the BlitzMax application being debugged. This main thread implements an internal queue system to keep object dump requests in parallel with the object dump responses being reported by the Debug Reader thread.
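The two reader threads described above map naturally onto .NET's asynchronous Process events. The following is only an illustrative sketch of that pattern, not BLIde's actual code (`myapp.exe`, `ShowInConsole` and `ParseDebugLine` are made-up placeholders):

```csharp
using System.Diagnostics;

var psi = new ProcessStartInfo("myapp.exe")
{
    UseShellExecute = false,
    RedirectStandardOutput = true,  // feeds the Output Data Reader
    RedirectStandardError = true    // feeds the Debug Reader
};
var proc = new Process { StartInfo = psi };

// Handlers run on thread-pool threads, so the main (UI) thread stays
// responsive even if one pipe stalls or floods; results should be
// marshalled back to the UI thread (e.g. Control.BeginInvoke) for display.
proc.OutputDataReceived += (s, e) => { if (e.Data != null) ShowInConsole(e.Data); };
proc.ErrorDataReceived += (s, e) => { if (e.Data != null) ParseDebugLine(e.Data); };

proc.Start();
proc.BeginOutputReadLine();
proc.BeginErrorReadLine();
```

Because neither handler ever blocks the UI thread, a frozen or crashed child process can still be detected and killed cleanly, which is exactly the responsiveness property the new design is after.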
OK, but what are the advantages of this new design?
First of all, tons of them. To name three that are, from my point of view, the most interesting ones:
1.- BLIde responsiveness while debugging is drastically improved at zero CPU cost compared with the previous debugging design, even on single-core computers (as long as they have hyper-threading technology).
2.- If you’re debugging a server application and an unhandled exception occurs, you’ll be able to get a complete call stack and exception information even if the application has been closed by BLIde. This is very important because server-like applications are not always monitored while running. BLIde will also show the exact line of code where the exception happened.
3.- If an application being debugged in full screen raises an unhandled exception, BLIde will still be responsive even if not shown by the OS (because the graphics context hides everything else). In that case, users will be able to press Ctrl+F8 and BLIde will kill the debugged application while reporting what the exception was, when and where it happened, a complete call stack, and basic object dump information on the general program status.
|
OPCFW_CODE
|
Online Courses to Learn Robotics for FREE
Learn about robot mechanisms, dynamics, and intelligent controls. Topics include planar and spatial kinematics, and motion planning; mechanism design for manipulators and mobile robots, multi-rigid-body dynamics, 3D graphic simulation; control design, actuators, and sensors; wireless networking, task modeling, human-machine interface, and embedded software.
Focus on a future area to maximize your employability
Robotic engineering opens doors to the world of engineering in various fields: automotive, IT, logistics, aviation, health, research and more. It adapts according to the skills and passions of each to different trades.
- Study robotics online from some of the top universities in the world on edX. Learn robotics engineering, dynamics, locomotion, machine learning and more.
2. Robotics Specialization on Coursera
Learn the Building Blocks for a Career in Robotics. Gain experience programming robots to perform in situations and for use in crisis management
3. Begin Robotics on Futurelearn
Learn robotics by exploring the history, anatomy and intelligence of robots and test drive robots using exciting simulations.
4. Become a Robotics Software Engineer
5. Digital Electronics: Robotics, learn by building module II
6. Robotics Course on Udemy
The course contains university level, short video lessons along with fully online courses that will help you understand and prepare for the robotic technology of the future. There are over 200 lessons available for you to access any time and in any order. The courses are divided into master classes, single lessons, and online courses.
8. Robotics at Universal Robots Academy
At Universal Robots, we constantly strive to make the advantages of collaborative robots (cobots) in the workplace accessible to all. With Universal Robots Academy’s online modules, we’ve lowered the automation barrier by making core programming skills available to cobot users regardless of their robotics experience or backgrounds.
The course provided by this publication includes an overview of robot mechanisms, dynamics, and intelligent controls and the topics include are planar and spatial kinematics, and motion planning; mechanism design for manipulators and mobile robots, multi-rigid-body dynamics, 3D graphic simulation; control design, actuators, and sensors; wireless networking, task modeling, human-machine interface, and embedded software.
10. Control of Mobile Robots on Coursera
This is a course that focuses on the application of modern control theory to the problem of making robots move around in safe and effective ways. The structure of this class is somewhat unusual since it involves many moving parts - to do robotics right, one has to go from basic theory all the way to an actual robot moving around in the real world, which is the challenge we have set out to address through the different pieces in the course.
11. Robot Mechanics and Control, Part I & II
A mathematical introduction to the mechanics and control of robots.
https://www.edx.org/course/robot-mechanics-and-control-part-i
Robot Mechanics and Control, Part II
A mathematical introduction to the mechanics and control of robots.
12. Learn Robotics on Arduino
13. Instructables is a place that lets you explore, document, and share your creations.
14. Create Robots from Scratch without experience.
Want to get into the world of robotics, but don’t know how to get started? This course will teach you how to build robots from start to finish using circuit components, motors, and electronics.
15. Artificial Intelligence for Robotics
FREE COURSE on Udacity
Learn how to program all the major systems of a robotic car. Topics include planning, search, localization, tracking, and control.
16. The Robot Academy: An open online robotics education resource
17. Become a Robotics Engineer
Robotics Engineers are responsible for designing, developing, testing and operating robotics systems that are used in performing a wide range of tasks. They typically create robotics solutions for industrial applications. Reviewing robotics designs and analysing if it suits the requirements, is an integral part of their job role. They also calculate the time and cost estimates for the development of a given design.
18. CS223A - Introduction to Robotics
The purpose of this course is to introduce you to basics of modeling, design, planning, and control of robot systems. In essence, the material treated in this course is a brief survey of relevant results from geometry, kinematics, statics, dynamics, and control.
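As a small taste of the planar kinematics these courses cover, here is a minimal forward-kinematics sketch for a two-link planar arm (the function name and the example link lengths are my own illustration, not course material):

```python
import math

def forward_kinematics_2r(l1, l2, theta1, theta2):
    """End-effector (x, y) position of a planar 2-link arm; angles in radians."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# A fully stretched arm points straight along the x-axis:
print(forward_kinematics_2r(1.0, 0.5, 0.0, 0.0))  # (1.5, 0.0)
```

Inverse kinematics, motion planning, and dynamics build on exactly this kind of geometric model.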
Learn how to design, build, and program dynamical, legged robots that can operate in the real world.
20. Control of Mobile Robots by Georgia Institute of Technology (Coursera)
This course will teach you modern control theory and its applications in robotics. It focuses on the issues of safety and effectiveness.
22. Introduction to Robotics Specialization (University of Pennsylvania)
This course is focused on the behavior of robots in the real world and the challenges they experience while running or flying and meeting unexpected situations and objects. You will learn more about the environment robots work in.
Coursera offers the University of Pennsylvania course on Aerial Robotics that you can join to learn about mechanics of flight and the design of quad-copter drones.
The Technische Universität München (Technical University of Munich) course on edX is best for learning basic concepts of autonomous quad-copter navigation including 3D geometry, probabilistic state estimation, visual odometry, SLAM, 3D mapping, and linear control. It also teaches you the methods to use sensor readings to navigate the drone along a trajectory and locate its position.
This course is offered by the University of Pennsylvania via Coursera teaches you how to get robots to incorporate uncertainty into estimating and learning from a dynamic and changing world. It includes topics like probabilistic generative models, Bayesian filtering for localization and mapping, and machine learning for planning and decision making.
The University of Pennsylvania course at Coursera will teach you how robots use their motors and sensors to move in unstructured environments. You will understand how you can design such robots, which have maximum mobility in the complex and dynamic world.
The University of Pennsylvania via Coursera will teach you the understanding of how grasping objects is facilitated by the computation of 3D posing of objects and navigation can be accomplished by visual odometry and landmark-based localization.
The IoT course from the University of California, San Diego on Coursera offers information on real world devices that communicate with smartphones. You can learn better about the sampling frequencies, and bit-width requirements for different sensors along with their interfacing methods with the DragonBoard 410c hardware.
This MIT course on edX will teach you how to control non-linear and under-actuated mechanical systems with a focus on computational methods.
The Massachusetts Institute of Technology course on Feedback Control Theory via edX will teach you the design strategies behind temperature controllers, quad-copters, and self-balancing scooters.
|
OPCFW_CODE
|
In January 2016 I gave a presentation at the Canberra Linux Users Group about my journey developing my own Open Source home automation system. This is an adaptation of that presentation for my blog. Big thanks to my brother, Tim, for all his help with this project!
Comments and feedback welcome.
Why home automation?
- It’s cool
- Good way to learn something new
- Leverage modern technology to make things easier in the home
At the same time, it’s kinda scary. There is a lack of standards and lack of decent security applied to most Internet of Things (IoT) solutions.
Motivation and opportunity
- Building a new house
- Chance to do things more easily at frame stage while there are no walls
Some things that I want to do with HA
- Respond to the environment and people in the home
- Alert me when there’s a problem (fridge left open, oven left on!)
- Gather information about the home, e.g.
- Temperature, humidity, CO2, light level
- Open doors and windows and whether the house is locked
- Electricity usage
- Manage lighting automatically, switches, PIR, mood, sunset, etc
- Control power circuits
- Manage access to the house via pin pad, proxy card, voice activation, retina scans
- Control gadgets, door bell/intercom, hot water, AC heating/cooling, exhaust fans, blinds and curtains, garage door
- Automate security system
- Integrate media around the house (movie starts, dim the lights!)
- Water my garden, and more..
My requirements for HA
- Prefer DC only, not AC
- High Wife Acceptance Factor (important!)
There’s no existing open source IoT framework that I could simply install, sit back and enjoy. Where’s the fun in that, anyway?
Three main options:
- Wireless
- Wired
- Combination of both
Wireless:
- Dominated by proprietary Z-Wave (although has since become more open)
- Although open standards based also exist, like ZigBee and 6LoWPAN
- Lots of different gadgets available
- Gadgets are pretty cheap and easy to find
- Easy to get up and running
- Widely supported by all kinds of systems
- Wireless gadgets are pretty cheap and nasty
- Most are not open
- Often not updateable, potentially insecure
- Connect to AC
- Replace or install a unit requires an electrician
- Often talk to the “cloud”
So yeah, I could whack those up around my house, install a bridge and move on with my life, but…
- Not as much fun!
- Don’t want to rely on wireless
- Don’t want to rely on an electrician
- Don’t really want to touch AC
- Cheap gadgets that are never updated
- Security vulnerabilities makes it high risk
Wired:
- Proprietary systems like Clipsal’s C-Bus
- Open standards based systems like KNX
- Custom hardware
- More secure than wireless
- More future proof
- DC only, no need to touch AC
- Provides PoE for devices and motors
- Can still use wireless (e.g. ZigBee) if I want to
- Convert to proprietary system (C-Bus) if I fail
- My brother is a certified cabler 🙂
Technology Choice Overview
So comes down to this.
- Z-Wave = OUT
- ZigBee/6LoWPAN = MAYBE IN
- C-Bus = OUT (unless I screw up)
- KNX = OUT
- Arduino, Raspberry Pi = IN
I went with a custom wired system, after all, it seems like a lot more fun…
Stay tuned for Part 2!
|
OPCFW_CODE
|
How do I make a Python program that asks the user for a day of the week and uses an if statement to respond?
Write a Python program to ask the user for a day of the week. Then, depending on the day entered by the user, the program should display one of the following messages:
If it is Friday then, “You have your CS 140 class today. That means lots of Python programming!”
Otherwise, “No Python programming today. Of course, you can always practice at home.”
This is how to do certain things in Python:
friday_message = "You have your CS 140 class today. That means lots of Python programming!"
else_message = "No Python programming today. Of course, you can always practice at home."
user_input = input("What day of the week is it? ")
if user_input == "Friday":
    print(friday_message)
else:
    print(else_message)
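Building on that pattern, a slightly more reusable sketch (the function name and the case-insensitive matching are my own additions) returns the message instead of printing it, which also makes it easy to test:

```python
def day_message(day):
    """Return the CS 140 message for a day of the week (case-insensitive)."""
    if day.strip().lower() == "friday":
        return "You have your CS 140 class today. That means lots of Python programming!"
    return "No Python programming today. Of course, you can always practice at home."

print(day_message("Friday"))
```

You can feed it `input("What day of the week is it? ")` in a real program; normalizing with `strip().lower()` means "friday", " Friday " and "FRIDAY" all match.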
OPCFW_CODE
|
#region Copyright © 2010 ViCon GmbH / Sebastian Grote. All Rights Reserved.
#endregion
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Drawing;
using System.Text;
using System.Windows.Forms;
using System.Windows.Forms.VisualStyles;
namespace Smithgeek.Windows.Forms
{
public static class TreeNodeExtensions
{
public static bool isChecked(this TreeNode node)
{
return node.Checked && node.StateImageIndex != 2;
}
public static String getParentString(this TreeNode node, String separator)
{
    String path = String.Empty;
    TreeNode tempNode = node;
    while (tempNode.Parent != null)
    {
        tempNode = tempNode.Parent;
        path = separator + tempNode.Text + path;
    }
    return path;
}
}
/// <summary>
/// Provides a tree view
/// control supporting
/// tri-state checkboxes.
/// </summary>
public class TriStateTreeView : TreeView
{
// ~~~ fields ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ImageList _ilStateImages;
bool _bUseTriState;
bool _bCheckBoxesVisible;
bool _bPreventCheckEvent;
// ~~~ constructor ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/// <summary>
/// Creates a new instance
/// of this control.
/// </summary>
public TriStateTreeView()
: base()
{
CheckBoxState cbsState;
Graphics gfxCheckBox;
Bitmap bmpCheckBox;
_ilStateImages = new ImageList(); // first we create our state image
cbsState = CheckBoxState.UncheckedNormal; // list and pre-init check state.
for (int i = 0; i <= 2; i++)
{ // let's iterate each tri-state
bmpCheckBox = new Bitmap(16, 16); // creating a new checkbox bitmap
gfxCheckBox = Graphics.FromImage(bmpCheckBox); // and getting graphics object from
switch (i)
{ // it...
case 0: cbsState = CheckBoxState.UncheckedNormal; break;
case 1: cbsState = CheckBoxState.CheckedNormal; break;
case 2: cbsState = CheckBoxState.MixedNormal; break;
}
CheckBoxRenderer.DrawCheckBox(gfxCheckBox, new Point(2, 2), cbsState); // ...rendering the checkbox,...
gfxCheckBox.Dispose(); // ...releasing the graphics object...
_ilStateImages.Images.Add(bmpCheckBox); // ...and adding to state image list.
}
_bUseTriState = true; // tri-state support is on by default.
}
// ~~~ properties ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/// <summary>
/// Gets or sets to display
/// checkboxes in the tree
/// view.
/// </summary>
[Category("Appearance")]
[Description("Sets tree view to display checkboxes or not.")]
[DefaultValue(false)]
public new bool CheckBoxes
{
get { return _bCheckBoxesVisible; }
set
{
_bCheckBoxesVisible = value;
base.CheckBoxes = _bCheckBoxesVisible;
this.StateImageList = _bCheckBoxesVisible ? _ilStateImages : null;
}
}
[Browsable(false)]
public new ImageList StateImageList
{
get { return base.StateImageList; }
set { base.StateImageList = value; }
}
/// <summary>
/// Gets or sets to support
/// tri-state in the checkboxes
/// or not.
/// </summary>
[Category("Appearance")]
[Description("Sets tree view to use tri-state checkboxes or not.")]
[DefaultValue(true)]
public bool CheckBoxesTriState
{
get { return _bUseTriState; }
set { _bUseTriState = value; }
}
// ~~~ functions ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/// <summary>
/// Refreshes this
/// control.
/// </summary>
public override void Refresh()
{
Stack<TreeNode> stNodes;
TreeNode tnStacked;
base.Refresh();
if (!CheckBoxes) // nothing to do here if
return; // checkboxes are hidden.
base.CheckBoxes = false; // hide normal checkboxes...
stNodes = new Stack<TreeNode>(this.Nodes.Count); // create a new stack and
foreach (TreeNode tnCurrent in this.Nodes) // push each root node.
stNodes.Push(tnCurrent);
while (stNodes.Count > 0)
{ // let's pop node from stack,
tnStacked = stNodes.Pop(); // set correct state image
if (tnStacked.StateImageIndex == -1) // index if not already done
tnStacked.StateImageIndex = tnStacked.Checked ? 1 : 0; // and push each child to stack
for (int i = 0; i < tnStacked.Nodes.Count; i++) // too until there are no
stNodes.Push(tnStacked.Nodes[i]); // nodes left on stack.
}
}
protected override void OnLayout(LayoutEventArgs levent)
{
base.OnLayout(levent);
Refresh();
}
protected override void OnAfterExpand(TreeViewEventArgs e)
{
base.OnAfterExpand(e);
foreach (TreeNode tnCurrent in e.Node.Nodes) // set tree state image
if (tnCurrent.StateImageIndex == -1) // to each child node...
tnCurrent.StateImageIndex = tnCurrent.Checked ? 1 : 0;
}
protected override void OnAfterCheck(TreeViewEventArgs e)
{
base.OnAfterCheck(e);
if (_bPreventCheckEvent)
return;
OnNodeMouseClick(new TreeNodeMouseClickEventArgs(e.Node, MouseButtons.None, 0, 0, 0));
}
protected override void OnNodeMouseClick(TreeNodeMouseClickEventArgs e)
{
Stack<TreeNode> stNodes;
TreeNode tnBuffer;
bool bMixedState;
int iSpacing;
int iIndex;
base.OnNodeMouseClick(e);
_bPreventCheckEvent = true;
iSpacing = ImageList == null ? 0 : 18; // if user clicked area
if ((e.X > e.Node.Bounds.Left - iSpacing || // *not* used by the state
e.X < e.Node.Bounds.Left - (iSpacing + 16)) && // image we can leave here.
e.Button != MouseButtons.None)
{ return; }
tnBuffer = e.Node; // buffer clicked node and
if (e.Button == MouseButtons.Left) // flip its check state.
tnBuffer.Checked = !tnBuffer.Checked;
tnBuffer.StateImageIndex = tnBuffer.Checked ? // set state image index
1 : tnBuffer.StateImageIndex; // correctly.
OnAfterCheck(new TreeViewEventArgs(tnBuffer, TreeViewAction.ByMouse));
stNodes = new Stack<TreeNode>(tnBuffer.Nodes.Count); // create a new stack and
stNodes.Push(tnBuffer); // push buffered node first.
do
{ // let's pop node from stack,
tnBuffer = stNodes.Pop(); // inherit buffered node's
tnBuffer.Checked = e.Node.Checked; // check state and push
for (int i = 0; i < tnBuffer.Nodes.Count; i++) // each child on the stack
stNodes.Push(tnBuffer.Nodes[i]); // until there is no node
} while (stNodes.Count > 0); // left.
bMixedState = false;
tnBuffer = e.Node; // re-buffer clicked node.
while (tnBuffer.Parent != null)
{ // while we get a parent we
foreach (TreeNode tnChild in tnBuffer.Parent.Nodes) // determine mixed check states
bMixedState |= (tnChild.Checked != tnBuffer.Checked | // and convert current check
tnChild.StateImageIndex == 2); // state to state image index.
iIndex = (int)Convert.ToUInt32(tnBuffer.Checked); // set parent's check state and
tnBuffer.Parent.Checked = bMixedState || (iIndex > 0); // state image in dependency
if (bMixedState) // of mixed state.
tnBuffer.Parent.StateImageIndex = CheckBoxesTriState ? 2 : 1;
else
tnBuffer.Parent.StateImageIndex = iIndex;
tnBuffer = tnBuffer.Parent; // finally buffer parent and
} // loop here.
_bPreventCheckEvent = false;
}
/// <summary>
/// Removes all checked nodes.
/// </summary>
/// <param name="nodes">Nodes to look for checks and then remove.</param>
public void RemoveCheckedNodes(TreeNodeCollection nodes)
{
List<TreeNode> checkedNodes = GetCheckedNodes(nodes);
foreach (TreeNode checkedNode in checkedNodes)
{
nodes.Remove(checkedNode);
}
}
/// <summary>
/// Gets all of the checked nodes in the collection.
/// </summary>
/// <param name="nodes">The nodes to search</param>
public List<TreeNode> GetCheckedNodes(TreeNodeCollection nodes)
{
List<TreeNode> checkedNodes = new List<TreeNode>();
foreach (TreeNode node in nodes)
{
if (node.isChecked())
{
checkedNodes.Add(node);
}
checkedNodes.AddRange(GetCheckedNodes(node.Nodes));
}
return checkedNodes;
}
/// <summary>
/// Adds a list of strings as node to the tree view.
/// </summary>
/// <param name="nodes">The list of nodes in string format</param>
/// <param name="parentNode">The parent node to start adding the child nodes to.</param>
public void AddNodes(List<String> nodes, TreeNode parentNode, bool checkState)
{
for(int i = 0; i < nodes.Count; ++i)
{
TreeNode currentParent = parentNode;
String path = nodes[i];
String[] nodeParts = path.Split(new char[] { '\\' }, StringSplitOptions.RemoveEmptyEntries);
foreach (String node in nodeParts)
{
if (!currentParent.Nodes.ContainsKey(node))
{
TreeNode newNode = new TreeNode(node) { Name = node, Checked = checkState };
currentParent.Nodes.Add(newNode);
currentParent = newNode;
}
else
{
currentParent = (TreeNode)currentParent.Nodes[node];
}
}
}
}
}
}
|
STACK_EDU
|
You may not be aware of it, but today programming for lawyers is essential to survive in this competitive world. The turn of this century has witnessed great developments in the field of technology. We have reinforced law as a profession with legal tech and are pursuing it with renewed vigor. The latest debate is, however, regarding a lawyer’s need to code, and honestly, it is tough to pick sides. Let’s explore Programming 101 for Lawyers.
Though “Programming for lawyers” seems to be a fool’s errand, we should not dismiss the idea completely. Truly, the critical skills for an advocate are his drafting and argumentative competence. The art of coding does not feature anywhere, and yet we should not be too quick to dismiss it. For, as the legal sector becomes increasingly data-driven, it would not hurt to embrace programming and coding.
While programming knowledge is clearly not an absolute necessity, lawyers can approach coding with the intention of becoming “coding literate”. This means advocates and legal professionals should strive to know enough, to understand the rudimentary concepts of programming. It is a great asset to have, provided you have the right temperament.
Incidentally, the understanding of technology will allow lawyers to embrace technological trends better and apply them to their law practice. Besides, the art of lawyering may be hardwired to suit the art of programming or coding. Nonetheless, you should realize that programming is an extensive sea. As a lawyer, you need only know enough to assist your practice.
Need of Programming for lawyers:
As the industry currently stands, a legal professional has no fundamental use for coding. A lawyer should invest his professional time in developing legal-specific skills. Furthermore, you cannot code and extend lawyerly functions simultaneously.
Alternatively, the knowledge of programming will help you appreciate what is “under the bonnet”, and provide legal counsel in tech-related matters. Thus, the essence of studying coding is in its understanding. Yet, before you foray into the world of coding and programming, ask yourself the following questions:
- Why do you want to learn to code?
- How will it assist you?
- Will it help your career in law?
- How do you plan on pursuing the knowledge of programming?
- Will you be able to devote appropriate time to coding?
Read Also – Legodesk Affiliate Program Terms of Service
You must be very clear about the “Why” aspect of coding. As a legal professional, coding will certainly not be within your job profile or default skill-set. Yet, before you decide to move forward, ensure that your motivation is clear. Honestly, you will not have enough time to pursue both advocacy and coding professionally, in parallel.
Besides, different coding languages require varying levels of understanding of maths and statistical analysis. Incidentally, many advocates hail from a non-science background. Consequently, they can face serious difficulty in learning how to code due to a shaky grasp of algebra or statistics. This does not imply that programming for lawyers is impossible; rather, it may be impractical.
Read Also – 5 Effective Problem-solving Tips For Lawyers
Compatibility of coding and lawyering
The processes of lawyering and coding or programming share some similarities. That is why many opine that perhaps lawyers can be bred to be competent coders as well. Let us look at the similarities.
Drafting and Code-creation
Proper legal drafting must follow a proper structure. It is all about communicating the essence of the document in an orderly and methodical fashion. Similarly, writing a good code requires order and method. Coders and programmers must adhere to a proper structure. Moreover, a good draft or code is one that serves its purpose without being unnecessarily long-drawn or repetitive.
Read Also – Article 343 of Indian Constitution
Both programmers and lawyers are essentially problem-solvers. They process information, to:
- Identify issues;
- Locate the reasons behind such issues;
- Research plausible solutions to tackle those issues;
- Ideate, design, and apply effective solutions to solve the issues.
Additionally, a critical aspect of good lawyering is foreseeing potential problems. Advocates and legal professionals try to come up with solutions that settle a problem completely and effectively. Thus, they use predictive tools and other strategies to identify and solve problems. Incidentally, good programmers too must create codes that solve present problems while also being secure from future threats.
Both programmers and lawyers ought to have an analytical bent of mind. The logical reasoning capability of these professionals matters to a great extent. Interestingly, the law is nothing save a ‘codified’ set of rules. In this context, it is perhaps more than just a mere coincidence. A lawyer’s arguments ought to follow a trail of logic, just as a programmer’s code. Moreover, lawyers are experts of the language and so are coders.
Resultantly, the similarities between lawyering and coding theoretically allow the idea of programming for lawyers. Nonetheless, it is true that comparing programming and lawyering is like comparing apples and oranges. Though both the latter are connected as fruits, they have vividly different tastes, smells, textures, and growing conditions. Similarly, there are stark differences between both the former.
Why Apples and Oranges?
Incidentally, there are multiple reasons why lawyering and programming are not complementary to one another.
Programming has its very own language. Yes, the code may be written using the letters from regular language scripts, yet programming language is different from normal. One would have to invest time and continued effort to master it.
Furthermore, coding requires a solid grasp of intermediate mathematics and statistics. ‘Intermediate’ denotes the level of knowledge about the subject and not about the educational qualification. In fact, coding is more about implementing mathematics in computer science. Consequently, this is not something that you will be able to focus on as a serious lawyer.
Becoming code-ready is tough
Programming is no child’s play. Though many courses teach the basics of coding, such knowledge is too minuscule for lawyers to apply. It would take great pains to be ready to apply all the programming skill-sets into producing applicable codes and programs.
Read Also – 4 Ways Technology is Transforming the Law
Legal problems still require legal solutions
At the end of the day, the role of a lawyer is to provide legal solutions to legal problems. No amount of automation can completely do away with the need for a legal professional.
The Way Forward
It is safe to say that a lawyer’s need to code, borders on the improbable. One expects a lawyer to be a person proficient in the laws of man. Incidentally, programming is not a primary function for a law professional. Furthermore, there are various legal technology solutions available for the legal sector at present. Law professionals can explore these legal tech solutions and make their profession smarter.
Moreover, specific problems deserve specific professionals looking at the same. Just as IP professionals handle IPR related issues, and tax consultants advise in taxation matters, technologists should prescribe tech solutions. Yes, lawyers can perhaps master programming 101, with dedication and a positive spirit, but that seems like an oversimplification.
This is because you cannot do justice to programming on a part-time basis; whereas law is a full-time profession. Thus, rather than programming, lawyers should focus on being ‘tech-literate’, to understand the applications of such legal tech. Incidentally, there is an argument that some legal software is unable to grasp a lawyer’s needs. However, that does not imply that lawyers should begin coding.
There is a big difference between learning rudimentary programming and launching scalable applications. An individual’s temperament plays a huge role when trying to master something new, and programming for lawyers is as novel as it can get. Rather than trying to be the jack of all trades, it would suit the lawyers to master just one (advocacy).
Read Also – Comprehensive Tech Plan for Indian Law Firms
Try our Debt Resolution solutions today Request a Demo
|
OPCFW_CODE
|
Quite a while ago, nearly a year now, I installed Debian GNU/Linux onto an hpt370 software RAID array. Yes, even though it's a piece of hardware, it's still software RAID. IIRC the driver was version 0.01 and I think I was using the 2.4.18 Linux kernel. I haven't really got the time to write a proper mini-HOWTO on the subject, but hopefully this will help you install Linux onto it.
I've recently had an email telling me that this page has been useful for getting a Suse 9 system to work with the hpt372
The rest of the system doesn't really matter
Hopefully I haven't forgotten anything.
To mount your array you should use the device node /dev/ataraid/dXpY. For example 'mount -t ext2 /dev/ataraid/d0p2 /mnt/hpt'
You might not have the mount points, so to make them....
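A sketch that prints the mknod commands (this assumes the usual Linux 2.4 ataraid block major number of 114, with sixteen minor numbers per disk; it only prints the commands, so run the output as root to actually create the nodes):

```shell
#!/bin/sh
# Print mknod commands for the ataraid device nodes (dry run).
# Assumption: ataraid block devices use major 114; disk dN starts at
# minor N*16, with its partitions p1..p15 at the following minors.
ataraid_nodes() {
    major=114
    for disk in 0 1; do
        base=$((disk * 16))
        echo "mknod /dev/ataraid/d${disk} b ${major} ${base}"
        for part in 1 2 3 4; do
            echo "mknod /dev/ataraid/d${disk}p${part} b ${major} $((base + part))"
        done
    done
}

ataraid_nodes    # pipe the output through 'sh' as root to create the nodes
```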
When I installed my system, I did a basic install onto a small hard drive, then copied my whole filesystem to the hpt370 RAID array in single-user mode. It is possible to install the system straight onto the RAID array, although it's a little more complicated and I haven't done it. The tricky bit was finding a lilo config that would boot. In the process of working it out I got lots of errors on booting, usually "L 01 01 01 01 01...." or "L 04 04 04 04 04...." or something similar.
My working /etc/lilo.conf file is....
You should note that you'll need different information for your hard drive, unless it's the same as mine. ie the number of cylinders etc...
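If you need a starting point, a hypothetical lilo.conf of this shape (every number below is a placeholder you must replace with your own disk geometry, not a value from my system) would look like:

```
# illustrative lilo.conf only -- substitute your own geometry
boot=/dev/ataraid/d0
root=/dev/ataraid/d0p2
disk=/dev/ataraid/d0
    bios=0x80
    sectors=63       # placeholder values -- read yours
    heads=255        # from 'fdisk -l /dev/ataraid/d0'
    cylinders=4865   # i.e. the number of cylinders etc.
image=/vmlinuz
    label=Linux
    read-only
```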
Once upon a time I did have a couple of lines to make MS Windows boot.
It was something like.....
But I may as well cat /dev/urandom > /dev/ataraid/d0p1
As I was going to be booting off a different root filesystem I had to do a chroot with lilo, with the command 'lilo -r /dev/ataraid/d0p2'
Remember to RTFM, as it was ages ago since I did this.
When I installed Debian onto the hpt370 there were no docs about it that I could find. At least now there are some hints when doing a Google search. It took me a painful six days on and off to finally crack it; hopefully with this small guide it'll take you less time. When I first did this I'd only been seriously using Linux for about two months, so I was still a newbie.
A lot of my Linux knowledge was first gained from gurus who have given me the time of day so that I could learn various Linux stuff. I'd like to thank them for all the help they have given me, not just with the hpt370 RAID controller but other stuff as well. They are...
|
OPCFW_CODE
|
Fast tensor operations using a convenient Einstein index notation.
- Index notation with macros
- Cache for temporaries
Install with the package manager: `pkg> add TensorOperations`.
- A macro `@tensor` for conveniently specifying tensor contractions and index permutations via Einstein's index notation convention. The index notation is analyzed at compile time.
- Ability to optimize pairwise contraction order using the `@tensoropt` macro. This optimization is performed at compile time, and the resulting contraction order is hard coded into the resulting expression. The similar macro `@tensoropt_verbose` provides more information on the optimization process.
- A function `ncon` (for network contractor) for contracting a group of tensors (a.k.a. a tensor network), as well as a corresponding `@ncon` macro that simplifies and optimizes this slightly. Unlike the previous macros, `ncon` and `@ncon` do not analyze the contractions at compile time, thus allowing them to deal with dynamic networks or index specifications.
- Support for any Julia Base array which qualifies as strided, i.e. such that its entries are laid out according to a regular pattern in memory. The only exception are `ReinterpretedArray` objects (implementation provided by Strided.jl, see below). Additionally, `Diagonal` objects whose underlying diagonal data is stored as a strided vector are supported. This facilitates tensor contractions where one of the operands is e.g. a diagonal matrix of singular values or eigenvalues.
- Support for `CuArray` objects if used together with CUDA.jl, by relying on (and thus providing a high level interface into) NVidia's cuTENSOR library.
- Implementation can easily be extended to other types, by overloading a small set of methods.
- Efficient implementation of a number of basic tensor operations (see below), by relying on Strided.jl and `gemm` from BLAS for contractions. The latter is optional but on by default; it can be controlled by a package-wide setting via `disable_blas()`. If BLAS is disabled or cannot be applied (e.g. non-matching or non-standard numerical types), Strided.jl is also used for the contraction.
- A package-wide cache for storing temporary arrays that are generated when evaluating complex tensor expressions within the `@tensor` macro (based on the implementation of LRUCache). By default, the cache is allowed to use up to the minimum of either 1GB or 25% of the total memory.
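As a minimal sketch of the index notation (nothing beyond the documented `@tensor` macro is used; the array names are arbitrary):

```julia
using TensorOperations

A = randn(4, 4)
B = randn(4, 4)

# Contraction: C[i,j] = Σ_k A[i,k] * B[k,j], i.e. an ordinary matrix product.
@tensor C[i, j] := A[i, k] * B[k, j]

# Index permutation: D = transpose(A), written as an index shuffle.
@tensor D[i, j] := A[j, i]

C ≈ A * B   # the macro reduces this contraction to a BLAS gemm call
```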
TensorOperations.jl is centered around 3 basic tensor operations, i.e. primitives into which every more complicated tensor expression is deconstructed.

- addition: Add a (possibly scaled version of) one array to another array, where the indices of both arrays might appear in different orders. This operation combines normal array addition and index permutation. It includes as a special case copying one array into another with permuted indices. The actual implementation is provided by Strided.jl, which contains multithreaded implementations and cache-friendly blocking strategies for optimal efficiency.
- trace or inner contraction: Perform a trace/contraction over pairs of indices of an array, where the result is a lower-dimensional array. As before, the actual implementation is provided by Strided.jl.
- contraction: Performs a general contraction of two tensors, where some indices of one array are paired with corresponding indices in a second array. This is typically handled by first permuting (a.k.a. transposing) and reshaping the two input arrays such that the contraction becomes equivalent to a matrix multiplication, which is then performed by the highly efficient `gemm` method from BLAS. The resulting array might need another reshape and index permutation to bring it in its final form. Alternatively, a native Julia implementation that does not require the additional transpositions (yet is typically slower) can be selected via the package-wide `disable_blas()` setting.
- Make it easier to check contraction order and to splice in runtime information, or optimize based on memory footprint or other custom cost functions.
|
OPCFW_CODE
|
Newtonsoft.Json.Linq.JProperty does not contain a definition for forename
I am extracting data from the following JSON
string strResponse =
@"[
{
'forename': 'Harry',
'surname': 'Potter',
'house': 'Gryffindor'
},
{
'forename': 'Draco',
'surname': 'Malfoy',
'house': 'Slytherin'
},
{
'forename': 'Luna',
'surname': 'Lovegood',
'house': 'Ravenclaw'
}
]";
The C# I am using is as follows:
dynamic dynJson = JsonConvert.DeserializeObject(strResponse);
foreach (var item in dynJson)
{
string output = string.Format("{0} {1} {2}", item.forename, item.surname, item.house);
Console.WriteLine(output);
}
This works fine and the output is as expected.
However, when the JSON is in a slightly different format such as:
string strResponse =
@"{
'People': [
{
'forename': 'Harry',
'surname': 'Potter',
'house': 'Gryffindor'
},
{
'forename': 'Draco',
'surname': 'Malfoy',
'house': 'Slytherin'
},
{
'forename': 'Luna',
'surname': 'Lovegood',
'house': 'Ravenclaw'
}
]
}";
I get the following error message:
An unhandled exception of type
'Microsoft.CSharp.RuntimeBinder.RuntimeBinderException' occurred in
System.Core.dll
Additional information: 'Newtonsoft.Json.Linq.JProperty' does not
contain a definition for 'forename'
I know this is to do with the structure of the new JSON string and in particular the People section. But I do not know how to adapt my code to handle this, please help.
Why would you expect different JSON to behave the same way? The first is an array of objects with the properties you expect, the second is an object with a People property that contains an array of objects as before.
@DavidG Apologies, I am trying to learn this by building up from a simple example (first JSON) and refactoring my code to see what i would need to change for other JSON strings. I have seen JSON displayed in the format as in the second JSON string and wanted to work out how i needed to change my code.
The best way is to avoid using dynamic completely, every time you use dynamic, a kitten dies... Instead create a set of concrete C# classes that match your structure. That way you have strong, compile-time type checking. In Visual Studio, copy the JSON, then go into the edit/paste special menu and paste JSON as classes.
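For illustration, a strongly-typed version of the second JSON might look like the following (the class and property names here are my own choice, not generated output; note that Json.NET matches `forename` to `Forename` case-insensitively by default):

```csharp
using System;
using System.Collections.Generic;
using Newtonsoft.Json;

public class Person
{
    public string Forename { get; set; }
    public string Surname { get; set; }
    public string House { get; set; }
}

public class PeopleResponse
{
    public List<Person> People { get; set; }
}

// Deserialize into the concrete classes instead of dynamic:
var result = JsonConvert.DeserializeObject<PeopleResponse>(strResponse);
foreach (var person in result.People)
{
    Console.WriteLine($"{person.Forename} {person.Surname} {person.House}");
}
```

Any typo such as `person.Forname` now fails at compile time rather than with a RuntimeBinderException.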
Here you have a working example
JObject dynJson = JObject.Parse(strResponse);
foreach (var item in dynJson["People"])
{
string output = string.Format("{0} {1} {2}", item["forename"], item["surname"], item["house"]);
Console.WriteLine(output);
}
|
STACK_EXCHANGE
|
A bug? - kernel: arpresolve: can't allocate linfo for xx.xx.xx.xx
I'm pulling my hair out. From what I read in other posts this issue seems to be unsolved? And, unexpected? Could it be a bug?
kernel: arpresolve: can't allocate linfo for xx.xx.xx.xx
as soon as I add a OPT1 gateway.
My setup is this:
LAN ip - 192.168.0.1
WAN fixed ip 10.0.0.2 connected to ADSL router with fixed ip 10.0.0.1 with NAT to outside DSL ISP1 with DHCP assigned IP.
OPT1 fixed ip 10.0.1.2 connected to ADSL router with fixed ip 10.0.1.1 with NAT to outside DSL ISP2 with DHCP assigned IP.
WANGW fixed ip 10.0.0.1
OPT1GW fixed ip 10.0.1.1
All devices on LAN within subnet 192.168.0.x with unique IPs assigned by Pfsense.
ISP1 DHCP assigned IP subnet is outside of any of the above.
ISP2 DHCP assigned IP subnet is outside of any of the above.
ISP1 and ISP2 are not on same subnet
As soon as I assign the OPT1 interface, even before I create OPTGW I get hundreds of kernel: arpresolve: can't allocate linfo for 10.0.1.1 messages.
Also, connectivity to the internet is immediately severed, even though the WANGW stays online and active. I've even set the default LAN allow rule to use WANGW, thinking it might not know which gateway to use. But that had no effect.
Disabling the OPT1 interface has no effect. In order to restore connectivity the OPT1 interface must be deleted and machine restarted before connectivity is restored and kernel: arpresolve: can't allocate linfo for 10.0.1.1 messages stop.
I've tested both ISPs separately. When connected to router with IP 10.0.0.1 the gateway is up and internet accessible.
I've also swapped out the routers, such that the WANGW router with IP 10.0.1.1 is connected to WAN, and the OPTGW router with IP 10.0.0.1 is connected to OPT1. Same result: internet inaccessible, WANGW reports to be online, and kernel: arpresolve: can't allocate linfo for 10.0.0.1 messages, now with the other IP, reappear.
During all of this OPTGW never reported to be online - even though the router itself was in fact logged in to the ISPx.
Without the second gateway the setup works like a charm. And, it doesn't seem that anyone has managed to solve this specific issue. Even though it has been reported a number of times.
Sticky connections not enabled.
Setting up a loadbalance group and routing lan default allow through the loadbalance group makes no difference.
Anyone with ideas? Please, please help.
It seems many people will be helped when this is solved.
PS: @heper - I noted you also had the same issue. Did you manage to solve it?
Since it is obviously a network error - will it help if I set the ADSL modems to bridge mode, such that they function as old school modems?
Then use PFsense to do the login for me?
heper last edited by
It all seemed to be because my cable isp expired the dhcp lease before pfsense renewed it. Then i got assigned a local a.b.c.x ip (or so i think).
This a.b.c.x ip might have conflicted with my WAN2 dsl router subnet ….
I've solved it by putting my WAN2 subnet to something different and adding reliable NTP servers to my ESXI vm environment
In my case i had this problem once a week or so ....
Your case seems different as you can't even assign the interface.
Are you sure that you subnet is correct ?? did you set it 10.0.0.2/24 & 10.0.1.2/24 (do the same on the dsl router end)
If for example you'd have set it to 10.0.0.2/8 then you'd have conflicting addresses
have fun figuring it out :)
By the looks of it, indeed, your problem seems to be very different from mine. Congrats on solving your network conflict - sounds like your issue is difficult to replicate, so all the more difficult to debug.
Back to the problem I'm experiencing, I think you've solved this for me ;D - I indeed specified 10.0.0.2/8 and not 10.0.0.2/24. I'm going to try it this afternoon.
It would be kind if you could explain to me why there'll be a conflict in the case of /8? I'm lacking in IT network education - I'm mostly involved in R&D radar electronics, so your input would help a lot.
Aha - figured out why the conflict in case of /8 ;D
I had it the wrong way around i.e. /8 = 2^8 = 256 addresses and /24 = 2^24 = 16M. I'm more used to the 255.0.0.0 format, which makes sense intuitively.
But in fact: /8 = 2^(32-8) = 16M and /24 = 2^(32-24) = 256.
I'll change all the netmasks /24 - and now I agree, it'll probably solve the problem. FreeBSD forums all say that the above error usually indicate a physical network error or conflict. So this makes perfect sense then.
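Not part of the original thread, but for anyone wanting to sanity-check their own masks, Python's stdlib ipaddress module reproduces the conflict directly:

```python
# Sketch: show why 10.0.0.2/8 and 10.0.1.2/8 conflict while /24 does not.
import ipaddress

wan = ipaddress.ip_interface("10.0.0.2/8")    # the mistaken mask
opt1 = ipaddress.ip_interface("10.0.1.2/8")

# With /8 both interfaces sit on the same 16M-address network, so the two
# gateways look like they share one wire and ARP resolution falls apart.
print(wan.network == opt1.network)        # True -> conflict
print(wan.network.num_addresses)          # 16777216

# With /24 the networks are disjoint: 10.0.0.0/24 vs 10.0.1.0/24.
wan24 = ipaddress.ip_interface("10.0.0.2/24")
opt124 = ipaddress.ip_interface("10.0.1.2/24")
print(wan24.network.overlaps(opt124.network))  # False
print(wan24.network.num_addresses)             # 256
```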
|
OPCFW_CODE
|
Kubernetes LoadBalancer service type creates "k8s service" instead of "Load Balancer"
Rancher Version: 1.2.0-pre2 and 1.2.0-pre3
Environment Type: Kubernetes
Steps to Reproduce:
Spin up a kubernetes service with type LoadBalancer.
Example yaml:
apiVersion: v1
kind: Service
metadata:
name: apigateway
labels:
spec:
ports:
- port: 443
protocol: TCP
targetPort: 443
selector:
app: apigateway
type: LoadBalancer
Results:
In kubernetes-load-balancers, a "K8s Service" is created. The service can only be accessed at its randomly assigned nodePort
Expected:
Prior to upgrading to v1.2.0-pre2, when we created a service of this type, a "Load Balancer" was created in kubernetes-load-balancers and it opened port 443 using TCP as its protocol. We could also increase the Scale, which is not possible with the current "K8s Service".
Using your yaml, the load balancer service was created, with the open port 443, and am able to scale it. I ran it in both in 1.1.3 and 1.2.0-pre3 and compared them, and the same exact services were created.
Did you check your host to ensure that host 443 was still available?
What version of kubernetes are you running in 1.2.0-pre2/3? Can you share the image name of your kubernetes system service?
This would help us identify which template you are using.
@tfiduccia Yes I did. It was only available on the randomly assigned nodePort on that host. No where else.
@deniseschannon I created the clusters from scratch when we upgraded to 1.2.0-pre3 and 2 and it took the Kubernetes rancher/v1.3.0-rancher3 images when I setup Kubernetes. Is this the info you're looking for?
I have similar problem.
Rancher v1.2.0-pre3 cluster 2 nodes Ubuntu 14.04 Docker 1.10.3. v1.3.0-rancher3 kubernetes images.
Also got exception in controller-manager logs every time i'am trying to create service with type Loadbalancer:
10/25/2016 1:46:12 PMI1025 10:46:12.714873 1 rancher.go:118] Can't find lb by name [lb-a3f30f59e9aa011e6bf1e02ac83a1aac]
10/25/2016 1:46:12 PMI1025 10:46:12.869534 1 servicecontroller.go:305] LB doesn't need update for service default/mysql
10/25/2016 1:46:13 PMI1025 10:46:13.140027 1 rancher.go:131] EnsureLoadBalancer [lb-a3f71e40b9aa011e6bf1e02ac83a1aac] [""] [[]api.ServicePort{api.ServicePort{Name:"", Protocol:"TCP", Port:80, TargetPort:intstr.IntOrString{Type:0, IntVal:80, StrVal:""}, NodePort:32070}}] [[da-docker-host1 da-docker-host2]] [None]
10/25/2016 1:46:17 PMpanic: runtime error: invalid memory address or nil pointer dereference
10/25/2016 1:46:17 PM[signal 0xb code=0x1 addr=0x28 pc=0x1202921]
10/25/2016 1:46:17 PM
10/25/2016 1:46:17 PMgoroutine 929 [running]:
10/25/2016 1:46:17 PMpanic(0x25ce8e0, 0xc82001a070)
10/25/2016 1:46:17 PM /usr/local/go/src/runtime/panic.go:481 +0x3e6
10/25/2016 1:46:17 PMk8s.io/kubernetes/pkg/cloudprovider/providers/rancher.(*CloudProvider).waitForLBAction.func1(0xc820fe4ba0, 0x3281a48, 0x0, 0x0)
10/25/2016 1:46:17 PM /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/cloudprovider/providers/rancher/rancher.go:430 +0x321
10/25/2016 1:46:17 PMk8s.io/kubernetes/pkg/cloudprovider/providers/rancher.(*CloudProvider).waitForAction.func1(0xc820fe4ba0, 0xc8210b84e0, 0x2b6b070, 0xf)
10/25/2016 1:46:17 PM /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/cloudprovider/providers/rancher/rancher.go:460 +0x80
10/25/2016 1:46:17 PMcreated by k8s.io/kubernetes/pkg/cloudprovider/providers/rancher.(*CloudProvider).waitForAction
10/25/2016 1:46:17 PM /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/cloudprovider/providers/rancher/rancher.go:472 +0x71
Here is a screenshot:
@tehdeadone have you tried this with a more recent version of rancher?
@aemneina I haven't. Our team has dropped back to the official stable releases (v1.1.4) for the time being (and the problem hasn't manifested since). We're in a critical pre-release phase, so we won't be able to try the pre-branch for a few weeks.
Sounds good, I'll close this out.
|
GITHUB_ARCHIVE
|
Errors with WFRPTableConfig sheet
Describe the bug
Template does not show or preserve settings for:
- name, description, icon
- roll formula
- options to draw with replacement or display roll to chat
- table key, table column
A model validation error is thrown, and sometimes other errors (such as no results available).
Console errors are also thrown (see comments).
To Reproduce
Either
i. Create a new table inside WFRP4e system <IP_ADDRESS> with a couple of results.
ii. Import table, eg, Dark Whispers from GM Toolkit.
1. Open the table if not already open. The name should be a placeholder.
2. Add a name and roll formula.
3. Roll (without updating).
4. Replace the name. Update. Settings are cleared, but the name entered is adopted as the application window title. No error is thrown.
The table is drawn from, but settings are reverted.
Tables behave as expected in DnD5e system.
Screenshots
Using WFRP4e Rolltable sheet
Using native FTT Rolltable sheet
Settings revert after drawing from table
Application window title updated
Version Numbers
Foundry: 10.277
wfrp4e: <IP_ADDRESS>
GM Toolkit v10 development version
Console errors
foundry.js:55888 Model Validation Errors
[RollTable.name]: may not be a blank string
fetch @ foundry.js:55888
notify @ foundry.js:55815
error @ foundry.js:55851
_preUpdateDocumentArray @ foundry.js:12292
_updateDocuments @ foundry.js:12206
update @ commons.js:6212
await in update (async)
updateDocuments @ commons.js:5567
update @ commons.js:5664
_updateObject @ foundry.js:62353
_onSubmit @ foundry.js:5637
submit @ foundry.js:5928
_onRollTable @ foundry.js:62281
dispatch @ jquery.min.js:2
v.handle @ jquery.min.js:2
The following error appears with the above
foundry.js:709 Error: RollTable [9rXQv4uJcQoLBitt] Model Validation Errors
[RollTable.name]: may not be a blank string
at SchemaField._validateType (commons.js:3576:15)
at SchemaField.validate (commons.js:3303:37)
at RollTable.validate (commons.js:4925:35)
at ClientDatabaseBackend._preUpdateDocumentArray (foundry.js:12290:13)
at ClientDatabaseBackend._updateDocuments (foundry.js:12206:33)
at ClientDatabaseBackend.update (commons.js:6212:24)
at async RollTable.updateDocuments (commons.js:5567:23)
at async RollTable.update (commons.js:5664:23)
onError @ foundry.js:709
_preUpdateDocumentArray @ foundry.js:12293
_updateDocuments @ foundry.js:12206
update @ commons.js:6212
await in update (async)
updateDocuments @ commons.js:5567
update @ commons.js:5664
_updateObject @ foundry.js:62353
_onSubmit @ foundry.js:5637
submit @ foundry.js:5928
_onRollTable @ foundry.js:62281
dispatch @ jquery.min.js:2
v.handle @ jquery.min.js:2
Roll Formula is incorrectly cleared, resulting in the following expected error.
foundry.js:55888 There are no available results which can be drawn from this table.
No longer an issue with FVTT 10.283 and WFRP4e 6.1.1
|
GITHUB_ARCHIVE
|
I’m a final-year DPhil student in the Computer Science department. I completed my undergraduate studies at McGill University in Canada, where I studied Mathematics and Computer Science. I’ve done research internships at Google Brain and DeepMind. My research focuses on generalization in machine learning (ML), and uses ideas from a broad range of fields such as causal inference, Bayesian deep learning, and reinforcement learning.
At Trinity, I teach Linear Algebra, Discrete Mathematics, and Continuous Mathematics. I have also co-supervised MSc students in the computer science department.
I’m particularly interested in studying how the learning dynamics of ML systems affect generalization and convergence properties. For example, it is widely observed that often models which can quickly fit their training data have better generalization properties than those which take longer. I’ve worked on using ideas from Bayesian model selection to understand this phenomenon and to propose new performance estimators that let us more efficiently search for good neural network architectures.
I also work on deep reinforcement learning, which is concerned with how learning systems can interact with the world to achieve goals. This setting yields much more complex and unstable learning dynamics, and as a result many of the strategies that people use to train neural networks for supervised learning tasks, where the goal is to fit a fixed set of input-label pairs, fail when applied to reinforcement learning problems. I’ve worked on both theoretical analysis of reinforcement learning algorithms and the development of new learning algorithms which improve training stability and generalization.
One unifying theme in this work is the idea that by learning the right causal structure of the world, ML systems will generalize better and be more robust when they’re deployed. Humans have useful intuitions about cause and effect that help us navigate the world reasonably robustly, but translating these intuitions into machines is surprisingly challenging.
Further information can be found on my website here.
Lyle, Clare, Marc G. Bellemare, and Pablo Samuel Castro. ‘A comparative analysis of expected and distributional reinforcement learning.’ Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 33. No. 01. 2019.
Lyle, Clare, Lisa Schut, Robin Ru, Yarin Gal, and Mark van der Wilk. ‘A Bayesian Perspective on Training Speed and Model Selection.’ Advances in Neural Information Processing Systems 33 (2020).
Wang, B., C. Lyle, and M. Kwiatkowska. ‘Provable guarantees on the robustness of decision rules to causal interventions.’ In Proceedings of the International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization, 2021.
Lyle, Clare, Mark Rowland, Georg Ostrovski, and Will Dabney. ‘On The Effect of Auxiliary Tasks on Representation Dynamics.’ In International Conference on Artificial Intelligence and Statistics, pp. 1-9. PMLR, 2021.
|
OPCFW_CODE
|
As I said on the gnome-utils mailing list, the GNOME Dictionary codebase sucks.
Well, it sucks to the very end of it.
It really shows its age (it's more than 5 years old), and suffers from what I'm used to calling design by accretion. This particular technique of software design works like a black hole: particles are attracted to the singularity, and form the "accretion disk", which is nothing but a big mass of… er… mass, gravitating around, attracting other mass, and accelerating – thus getting hotter and hotter – until it eventually hits the event horizon. In software, it works the same way: new code is added, and it becomes relevant up to the point of becoming critical. The code becomes a little bit ugly, and it attracts other ugly code in order to make things work and to add new features. Time passes, and the codebase becomes an uglier mess at each iteration. At one point, everything simply collapses, because one of the "oh-so-hackish" portions of code passes the point of being understandable by any human being – including the original writer. Also, the overall entropy of the software increases, because no-one is smart enough to understand the code, let alone sanitize it; this is what I call the "design by accretion runaway syndrome". At this point, the only option for a developer is to toss everything out of the window in disgust and begin from scratch; which means the time, effort, and skills needed for features and bug-fixing are instead spent on rewriting stuff. One way to stop a "design by accretion" before it reaches its "runaway syndrome" is to check for hacks, kludges, and ugly portions of code, and exterminate them at each iteration. Hacks are what they are: they might be clever, well-thought-out, or a demonstration of coding skills; but they are hacks nonetheless, and the shorter they live, the better the codebase will be in the end.
The dictionary application and applet are the best result of this particular software design "technique"; not only is the high-level code a collection of circular references and inclusions – the low-level implementation of the dictionary protocol has also become hackish enough to include at least two APIs, one of which is an almost complete implementation of RFC 2229, while the other is like a remnant of a previous implementation.
I tried to put a thin GObject layer around it, in order to avoid having to write my own implementation of the dictionary protocol; so far, the results are discouraging. Ergo, the best solution is to throw away the low-level stuff, create a new, GObject-oriented implementation of the dictionary protocol, and build up from there.
In the meantime, I’ll have to pass a couple of exams, and begin porting the BookmarkFile and RecentManager/RecentChooser code under GLib and GTK; also, the FileChooser code should be adapted to support recently used files in OPEN mode, and the shortcuts be saved using the BookmarkFile object inside a default location – something like
|
OPCFW_CODE
|
Delphi Encryption Compendium
The Delphi Encryption Compendium (DEC) is a cryptographic library for Delphi, C++ Builder and Free Pascal. It was originally developed by Hagen Reddmann, made compatible with Delphi 2009 by Arvid Winkelsdorf, and has now been ported to Free Pascal.
The following changes have been made with respect to the 2008 release:
- Added a pure Pascal version for all methods that were previously coded in x86 assembly only.
- Syntax compatibility with Free Pascal in Delphi mode.
- Un-nested procedures in all places where they were passed as function pointers (which is not portable, not even in Delphi Win64).
- Modified shift operand for all shl/shr operations to be in range 0-31 (architectures like ARM do not support shifts >= 32).
- Test cases in DECTest are handled by a class now in order to get rid of an assembly hack to call nested procedures.
The following environments have been tested:
- Delphi XE2 Win32
- Delphi 10.2 Win32 & Win64
- FPC 2.6.4 Linux x86_64
- FPC 3.1.1 Linux ARM
- FPC 3.1.1 Win32
Technically Delphi 7+ and FPC 2.6+ should be compatible (possibly very minor changes required for old versions).
This project is licensed under a MIT/Freeware license. You are free to use the library for personal and commercial use but at your own risk. See LICENSE for details.
DEC mainly consists of the following units:
- CPU.pas: Queries information about an x86-based processor (Win32/Win64 only).
- CRC.pas: Cyclic Redundancy Check implementation for many common lengths and polynomials.
- DECCipher.pas: Implementation of symmetric ciphers and most common operation modes.
- DECFmt.pas: Formatting classes for all common data formats.
- DECHash.pas: Implementation of hash functions.
- DECRandom.pas: Secure protected Random Number Generator based on Yarrow.
- DECUtil.pas: Utility functions for dealing with buffers, random numbers etc.
Furthermore the DECTest project is included, which provides test cases for all algorithms.
The original DEC also contained the units ASN1 and TypeInfoEx - those are not required for the core functionality of DEC and have not been ported. If you need them, please get them from an older release.
- Cast128, Cast256
- RC2, RC4, RC5, RC6
- Rijndael / AES
- 1DES, 2DES, 3DES, 2DDES, 3DDES, 3TDES
- TEA, TEAN
Block cipher operation modes
Check DECCipher.pas for more details on these modes.
- MD2, MD4, MD5
- RipeMD128, RipeMD160, RipeMD256, RipeMD320
- SHA, SHA1, SHA256, SHA384, SHA512
- Haval128, Haval160, Haval192, Haval224, Haval256
- Whirlpool, Whirlpool1
- Snefru128, Snefru256
- 1:1 Copy
- Hexadecimal Uppercase
- Hexadecimal Lowercase
- PGP (MIME/Base64 with PGP Checksums)
- UU Encode
- XX Encode
- Escaped Strings
AES-CBC-128 encode/decode example:
    const
      STATIC_KEY: array[0..15] of Byte = (0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15);
    var
      IV: array[0..15] of Byte;
      Plaintext: Binary;
      Ciphertext: TBytes;
    begin
      RandomSeed;
      Plaintext := 'abcdefghijklmnopqrstuvwxyz';
      with TCipher_Rijndael.Create do
      try
        Mode := cmCBCx;
        RandomBuffer(IV, 16);
        Init(STATIC_KEY, 16, IV, 16);
        SetLength(Ciphertext, Length(Plaintext));
        Encode(Plaintext, Ciphertext, Length(Plaintext));
        Done; // only needed when the same object will be used for further operations
        FillChar(Plaintext, Length(Plaintext), 0);
        Decode(Ciphertext, Plaintext, Length(Ciphertext));
        Assert(Plaintext = 'abcdefghijklmnopqrstuvwxyz');
      finally
        Free;
      end;
    end;
Note: If the plaintext isn't padded, DEC will pad the last truncated block with CFB8! PKCS padding is not supported by DEC. Using DEC in conjunction with other crypto libraries is of course possible, but you need to make sure to preprocess (i.e. pad) the plaintext properly.
Also note: DEC's Binary type is defined as RawByteString. If you are in a Unicode environment (e.g. Delphi 2009+), care is advised when dealing with variables of type string. Never directly pass them into a function that takes Binary!
SHA-256 examples with formatting
    var
      InputBuf: TBytes;
      InputRawStr, Hash: Binary;
    begin
      SetLength(InputBuf, 4);
      FillChar(InputBuf, 4, $AA);
      Hash := THash_SHA256.CalcBuffer(InputBuf, 4, TFormat_MIME64);
      // -> 2+0UzrAB0RDXZrkBPTtbv/rWkVR1qboHky0qwFeUTAQ=
      InputRawStr := 'My message';
      Hash := THash_SHA256.CalcBinary(InputRawStr, TFormat_HEXL);
      // -> acc147c887e3b838ebf870c8779989fa8283eff5787b57f1acb35cac63244a81
      Hash := THash_SHA256.CalcBinary(InputRawStr, TFormat_Copy);
      // -> Hash contains ac c1 47 ... raw bytes.
      //    Can be copied to a 32-byte array using Move(Hash, HashBytes, 32);
    end;
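When cross-checking digests from another environment, equivalent hashes can be computed with Python's standard library. This is a sketch under two assumptions not stated in the README: that CalcBuffer/CalcBinary hash exactly the raw bytes shown, and that TFormat_MIME64 and TFormat_HEXL are plain Base64 and lowercase-hex encodings of the digest.

```python
import base64
import hashlib

# Analogue of hashing a 4-byte buffer filled with $AA:
digest = hashlib.sha256(bytes([0xAA] * 4)).digest()
mime64 = base64.b64encode(digest).decode()        # TFormat_MIME64 analogue

# Analogue of CalcBinary('My message', TFormat_HEXL):
hexl = hashlib.sha256(b"My message").hexdigest()  # lowercase hex

print(mime64)  # Base64 of the 32-byte digest (44 chars, ends in '=')
print(hexl)    # 64 lowercase hex characters
```

If the two encoding assumptions hold, these values should match the ones in the comments above.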
Each hash class also has functions called KDF2 and KDFx for key derivation (e.g. for use as session keys in ciphers).
    Writeln(TFormat_MIME64.Encode('My message'));
    // -> TXkgbWVzc2FnZQ==
    Writeln(TFormat_MIME64.Decode('TXkgbWVzc2FnZQ=='));
    // -> My message
    Writeln(TFormat_PGP.Encode('Hello, how are you today?'));
    // -> SGVsbG8sIGhvdyBhcmUgeW91IHRvZGF5Pw==
    //    =nUAA
Encode and Decode are both overloaded to also take an untyped input buffer.
|
OPCFW_CODE
|
I am using Pytorch to perform non-linear regression. I have been using the Adam optimizer. It seems that no matter what data I use, or how small I make the learning rate, eventually the loss plot becomes noisier and noisier as the epochs go on. I am wondering why this happens. I have included an example plot to show what I mean.
Depending on what data I’m using or how large I make the learning rate, this increase in the noisiness can be more or less severe. I also noticed that this increase in the noise happens for both the training loss and the testing loss, but that the magnitude of the noise is always greater for the test data.
So is this something that is just inherent to the optimization process, and how it searches through the loss landscape? Is this just an unavoidable aspect of the optimization?
Hello, just a shot in the dark here but is it possible that your datapoints are differently noisy from one another? If so, depending on whether you happen to over- or under-sample the noisier datapoints in a given epoch, you may get more or less randomness in your parameter changes, leading to jitter in MSE. If you’re using a random sampler, this may well be happening. This could be due to the raw datapoints themselves, or to the transformed ones (in case you’re applying some transforms).
If this is being induced by the transforms, you can try training without transforms (maybe starting at a certain epoch) and see if the stutter goes away.
If it’s due to the raw datapoints being inherently noisy in some way, you can try to specifically diagnose this. It’s a bit of work, but you can keep a measure per datapoint of whether the number of times a particular datapoint is included in an epoch is correlated with the subsequent change in validation error. If you find that some datapoints spike on this measure (meaning, the more often they are sampled in the epoch, the higher the subsequent validation / test MSE increase) then you can have a look at them and decide if they are perhaps worth excluding from training beyond a certain epoch.
Expanding on from what @Andrei_Cristea has already said, you need to realize that the magnitude of these oscillations is pretty small. The current loss is on the order of 1e-5, and these oscillations are similarly on the order of 1e-5. This could be because of the stochastic nature of the sampling process, or it could be a result of the optimizer.
Could you try this again but with, say, torch.optim.SGD and see if the same oscillations occur?
This happens because Adam uses moments of the gradient to precondition your learning rate by a preconditioning factor, and near a local minimum these values become near zero. In the denominator of this preconditioning factor you'll have something like torch.sqrt(second_moment) + epsilon. In the limit of epsilon (which defaults to 1e-8) being larger than torch.sqrt(second_moment), you'll effectively be multiplying the numerator of the preconditioning factor by 1/epsilon (which defaults to 1e8). This could cause the optimizer to overshoot a local minimum, which leads to these oscillations as the optimizer constantly tries to update towards it.
Also, the loss minimization isn't guaranteed to be monotonic, so seeing oscillations in the loss is expected behavior. You could try running torch.optim.Adam with a learning rate scheduler and see if the oscillations diminish. But then again, a loss of 1e-5 is perfectly acceptable already.
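The preconditioning argument above can be sketched numerically. This is a minimal pure-Python illustration, not PyTorch internals; adam_step_size is a hypothetical helper implementing the standard scalar Adam update with bias correction omitted for brevity.

```python
import math

def adam_step_size(grad, m, v, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a single scalar parameter (no bias correction).
    Returns the step taken and the updated moment estimates."""
    m = beta1 * m + (1 - beta1) * grad        # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment (uncentered variance)
    step = lr * m / (math.sqrt(v) + eps)      # preconditioned step
    return step, m, v

# Far from a minimum: gradient ~ 1, sqrt(v) >> eps, so the step is roughly lr.
step_large, _, _ = adam_step_size(grad=1.0, m=1.0, v=1.0)

# Near a minimum: gradient ~ 1e-8, sqrt(v) ~ eps, and the step no longer
# shrinks in proportion to the gradient -- the *relative* step size
# (step / gradient) explodes, which is what can cause overshoot.
step_small, _, _ = adam_step_size(grad=1e-8, m=1e-8, v=1e-16)

print(step_large / 1.0)    # relative step far from the minimum: ~1e-3
print(step_small / 1e-8)   # relative step near the minimum: ~5e4
```

The tiny absolute oscillations near convergence are exactly this effect viewed on a log-scale loss plot.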
@AlphaBetaGamma96 You gave me an idea. Since I'm plotting the loss on a log scale, perhaps the noisiness is actually constant throughout the epochs, but it only becomes visible once the scale reaches 1e-5. I'll look into this, and if this isn't the cause, then I'll try running with SGD. I'll report back on my findings.
|
OPCFW_CODE
|
On 02/06/2012 03:54 PM, Sarah Diehl wrote:
> I'm only guessing here, but maybe the issue is that the parameter names
> are all the same (namely "model"). Did you try with only unique names?
> Especially this one:
> <param name="models" type="select" label="model source"
> help="History or installed models?" value="local">
> should get its own name.
Thank you, that was the first step to the solution. In fact I had used
different names before and failed, but what I left out was using
$ModelSource as a prefix to the actual variable name.
I had to rename the selection parameter "model" for local or history; the two alternative parameters could keep the same name ("model"), since they are used mutually exclusively. Then, using $ModelSource.model gives the correct result.
> On 02/06/2012 03:38 PM, Holger Klein wrote:
>> Dear all,
>> I'm working on a tool wrapper for a sequence scoring tool.
>> It's supposed to score sequences either using a library installed to
>> galaxy (tool-data/models.loc) or datasets from the user history.
>> I tried to implement this behavior using the<conditional> /<when
>> value> mechanism, simplified code follows below.
>> Using locally installed models (from models.loc) fails with "NotFound:
>> cannot find 'models'", although in the details view of the failed tool
>> run "model database" points to the right file.
>> Using the model file from the history works.
>> Defining _only_ locally installed models from models.loc also works
>> (removing the<conditional> stuff and leaving only the part inside<when
>> value='local'> </when>).
>> The commandline might look a bit strange but is correct.
>> Can anybody spot what is going wrong here?
>> calcModels --scoreFasta -- --fa $fasta_in --bgFa $fasta_background
>> --models $models> $output_table
>> <param format="fasta" name="fasta_in" type="data" label="Input Fasta
>> File" />
>> <param format="fasta" name="fasta_background" type="data"
>> label="Background Fasta File" />
>> <conditional name="ModelSource">
>> <param name="models" type="select" label="model source"
>> help="History or installed models?" value="local">
>> <option value="local">Locally installed models</option>
>> <option value="history">Models from your history</option>
>> </param>
>> <when value="local">
>> <param name="models" type="select" label="model database">
>> <options from_file="models.loc">
>> <column name="name" index="1"/>
>> <column name="value" index="2"/>
>> </options>
>> </param>
>> </when>
>> <when value="history">
>> <param name="models" type="data" format="tabular" label="model
>> database" />
>> </when>
>> </conditional>
Dr. Holger Klein
Core Facility Bioinformatics
Institute of Molecular Biology gGmbH (IMB)
Tel: +49(6131) 39 21511
|
OPCFW_CODE
|
Many users end up with a hostname that was forced on them at installation time: the name they actually wanted was unavailable, so they settled for something else. This is a common source of dissatisfaction, and after using that unwanted name again and again, they eventually want to change it. The question is how to do this.
Ways to change Hostname
There are many occasions when the hostname needs to be altered. Changing the hostname on a Linux system with the hostname command is quite easy, but it is a convenient yet temporary change: the name reverts to its original form when the system restarts. This is because the command changes only the kernel's idea of the hostname; it does not touch the configuration files read at boot, so the hostname is not permanent and has to be set every time.
Difference between Hostname & Domain name
The hostname is entirely different from a domain name. The hostname is set in the kernel, which maintains the current value. The domain name is determined by the resolver system, ordinarily from the hosts database or through DNS.
Change Hostname Permanently
Linux comes in several flavours, such as Debian-based and RedHat-based distributions, and the way to change the hostname differs between them. On a Debian-based system it is simple: the hostname is read from the file /etc/hostname at boot and applied by the init script /etc/init.d/hostname.sh.
To change it permanently, edit /etc/hostname, enter the desired name, and then run /etc/init.d/hostname.sh start to activate the change; the name is saved in /etc/hostname. Saving the hostname permanently does not mean it cannot be changed again in the future; it can.
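The Debian-style workflow just described can be sketched as below. The set commands are left commented out so the snippet is safe to run as-is, and "neptune" is a placeholder name, not one from this article:

```shell
# Inspect the kernel-level hostname (this is what resets on reboot if it
# was changed only with the hostname command):
hostname

# The persistent name a Debian-based system reads at boot:
cat /etc/hostname 2>/dev/null || echo "/etc/hostname not present here"

# To change it permanently (uncomment to actually run; needs root):
#   sudo hostname neptune                   # immediate, but temporary
#   echo neptune | sudo tee /etc/hostname   # survives reboots
#   sudo /etc/init.d/hostname.sh start      # older SysV-init systems
#   sudo hostnamectl set-hostname neptune   # modern systemd equivalent
```

On current systemd-based releases, hostnamectl performs both the immediate and the persistent change in one step.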
Hostname change on RedHat based Linux Systems
On a RedHat-based Linux system, changing the hostname is as trouble-free as on other Linux-based systems. The saved hostname is read from the file /etc/sysconfig/network at startup, and it is likewise applied by an init script, /etc/rc.d/rc.sysinit. To preserve a changed hostname, edit this file, set the HOSTNAME variable to the appropriate name, and reboot (or apply the name immediately with the hostname command).
Myth about changing Hostname
Changing the hostname is very simple, and it is a myth that it cannot be changed on a Linux-based operating system. The same applies to the many other Linux-based distributions available, such as Slackware and Ubuntu: resetting the hostname there is as simple as in the systems discussed above.
To make a hostname change permanent, it must be recorded in the system's configuration files; otherwise it has to be set again after every reboot.
|
OPCFW_CODE
|
Rumors are buzzing this morning that Microsoft’s Live Search may be rebranded as Kumo in 2009. The sleeping giants in Redmond, Washington have apparently been busy snapping up all the variants of Kumo: .com, .net, .jp, .fr, .ru and so on. Right now the only information on most of these domains is that they are being held by CSC Corporate Domains. According to LiveSide.net, watchers of all things Windows Live (hey, somebody’s got to do it), CSC is Microsoft’s domain registrar of choice at the moment.
Now this wouldn’t mean much except that Microsoft has recently outed itself as the owner of Kumo.com, and CSC is the registrar for that name as well. It’s not much of a leap, then, to assume that all the other Kumo variants CSC has locked up are also for Microsoft.
So what does this have to do with Live Search? Well, not much at the moment. The rest is pure conjecture. But we see indications that Microsoft is looking to become more competitive. Recall, of course, all the brouhaha over the company’s failed acquisition attempts of Yahoo! The other red flag is that Live owns a mere 8.9 percent of the search market in the most recent numbers from ComScore; so Microsoft really needs to step up its game if it wants to compete with Google.
However, is Kumo the new name for Microsoft search? Possibly. But according to LiveSide.net and this 2001 article from Salon.com, Kumo can mean “cloud” or “spider.” Some have taken that as further indication Kumo is linked to search, but why? Since when has a search platform been a major player in the concept of cloud computing? When we talk about the “cloud” we usually mean Internet applications and data storage systems like Google Docs, MobileMe, and LaLa, but not search.
So what could Kumo be? I have a suggestion, but hold on, because we are about to make a huge leap over Speculation Gulch. Here we go . . . Microsoft Chief Software Architect Ray Ozzie gave some clues during the recent Professional Developers Conference, when Microsoft described its cloud computing strategy and introduced Azure (a cloud OS), Zurich (a set of management tools for Azure), LiveMesh (online file sync) and Office Web Apps. Steven Levy drags a few more details out of Ozzie in the current (16.12) print edition of Wired (not yet available online).
Could Kumo be the official name for any of these? Why not? It makes more sense from a branding point of view, and–more importantly than a new search engine–Microsoft needs to drag its core business into the 21st century’s Internet-based world. Crazy speculation? Probably. But wasn’t it more interesting than boring old search?
|
OPCFW_CODE
|
hi all, I need a little help... I'm really a newbie to this world and I am Italian, so sorry for my poor English!!
I bought this fantastic software a few days ago and tried to install it...
when I complete the third step of install.php I receive this message:
Failed MySQL Query: INSERT INTO users (userid,usergroupid,username,password,email,joindate) VALUES(NULL,'5','Infinity','286fad179bb963d3ff43aa5e1bcc1fec','infinity@Immaginate.it','1131893308') / Table 'Sql85038_2.users' doesn't exist
I know that all the MySQL settings in the other steps are correct... user, password, IP address...
I think the problem is that I have 5 fixed SQL database names and PP tries to change this name... is that it?
I removed the Sql prefix in the first configuration table because when I leave the default prefix, an "SQL files not found" error message occurs!
please help me...
It would appear to me, since everything else works up to this point, that something was typed incorrectly in the user database information on the initial install step where you specified the database information. Please check to ensure, since it is an internal install, that the information entered for the PhotoPost database matches what you entered in the user database portion of the configuration.
Thank you... for the fast answer...
I resolved it... the trouble went away after reinitializing all the SQL databases...
Now I have done the configuration of the whole site... it's nice... but when I try to upload a photo I receive a message like this...
Error creating thumbnail! Error code: 127
"/usr/bin/X11/mogrify" +profile "*" -size 100x75 -quality 80 -geometry 100x75 -unsharp 10 '/web/htdocs/www.immaginate.it/home/data/501/thumbs/Donzella_Pavonina_1.JPG'
what have I forgotten?
Hello, you probably did not set the proper path to mogrify on install, which is easy to fix: just go to admin, edit your config, and type the proper path.
If you want to use GD2, just go to Admin => Global Options and select GD2 as the processor instead ;)
sure... I forgot to set it to GD2
I'm sorry to ask all these questions!!!
I've just uploaded my first photo!!! wonderful!!!
but when I try to upload a second photo of only 1.78 MB the system tells me:
Fatal error: Allowed memory size of 12582912 bytes exhausted (tried to allocate 2400 bytes) in /web/htdocs/www.immaginate.it/home/image-inc.php on line 115
what is the matter now?
That is a PHP memory exhaustion error. GD also uses PHP's memory.
I would suggest one of two things. You can edit your php.ini and raise the memory limit, or you can do this in uploadphoto.php: add the line in bold.
Thanks Chuc!!
it works... but for some big images I set this value to 64M!!
thanks again!!
|
OPCFW_CODE
|
Note: These articles are copyrighted so I have restricted access to
students in the class. Email me if you need access.
- R. P. Dick, Multiobjective
Synthesis of Low-Power Real-Time Distributed Embedded Systems,
Ph.D. Dissertation, Dept. of Electrical Engineering, Princeton University,
Nov. 2002. Chapters 1–3 provide background material. Chapters
1–3 due 20 September.
- Stephen Edwards, Luciano Lavagno, Edward A. Lee, and Alberto
Sangiovanni-Vincentelli, “Design of embedded
systems: formal models, validation, and synthesis,”
Proc. IEEE, Mar. 1997. Due 20 September.
- M. R. Garey and D. S. Johnson, Introduction to “Computers and
Intractability: A Guide to the Theory of NP-Completeness,” 1979.
Due 22 September.
- O. Coudert, “Exact coloring of
real-life graphs is easy,” in Design Automation, pp.
121–126, Jun. 1997. Due 22 September.
- C. L. Liu and J. W. Layland, “Scheduling
algorithms for multiprogramming in a hard-real-time environment,” in
J. of the ACM, vol. 20, no. 1, Jan. 1973. Due 27 September.
- R. P. Dick, “Reliability, thermal, and power modeling and
optimization,” in Proc. Int. Conf. Computer-Aided Design,
Nov. 2010, pp. 181–184. Due 29 September.
- Yu-Kwong Kwok and Ishfaq Ahmad, “Benchmarking and Comparison of Task Graph
Scheduling Algorithms,” J. Parallel and Distributed
Computing, Mar. 1999. Due 4 October.
- L. Yang, R. P. Dick, H. Lekatsas, and S. Chakradhar, “High-performance operating system
controlled on-line memory compression,” in ACM Trans. Embedded
Computing Systems, Mar. 2010, pp. 30:1–30:28. Due 6 October.
- P. R. Panda, N. D. Dutt, and A. Nicolau, “On-chip vs. off-chip memory: the data partitioning problem in
embedded processor-based systems,” in ACM Trans. Embedded
Computing Systems, Jul. 2000, pp. 682–704. Due 11 October.
- M. Tim Jones, “Anatomy
of real-time Linux architectures,” Apr. 2008. Due 14 October.
- Joseph Polastre, Robert Szewczyk, Alan Mainwaring, David Culler, and John
Anderson, “Analysis of
wireless sensor networks for habitat monitoring,” Wireless sensor
networks, pp. 399–423, 2004. Due 20 October.
- B. W. Cook, S. Lanzisera, and K. S. J. Pister, “SoC Issues for RF Smart
Dust,” in Proc. IEEE, vol. 94, no. 6, Jun. 2006. Due 27 October.
- Srivaths Ravi, Anand Raghunathan, Paul Kocher, and Sunil Hattangady, “Security in Embedded Systems: Design
Challenges,” in ACM Trans. Embedded Computing Systems,
pp. 461–491, Aug. 2004. Due 1 November.
- Joo-Young Hwang, Sang-Bum Suh, Sung-Kwan Heo, Chan-Ju Park, Jae-Min Ryu and
Seong-Yeol Park, and Chul-Ryun Kim, “Xen
on ARM: System Virtualization Using Xen Hypervisor for ARM-Based Secure Mobile
Phones,” in Proc. Consumer Communications and Networking
Conf., pp. 257–261, Jan. 2008. Due 1 November.
- Mian Dong and Lin Zhong, “Self-Constructive High-Rate System Energy
Modeling for Battery-Powered Mobile Systems,” in
Proc. Int. Conf. Mobile Systems, Applications, and Services,
Jun. 2011. Due 3 November.
- Mihail L. Sichitiu and Chanchai Veerarittiphan, “Simple, Accurate Time
Synchronization for Wireless Sensor Networks,” in Proc. Wireless
Communications and Networking Conf., Mar. 2003. Due 3 November.
- K. Lorincz, Bor-rong Chen, G. W. Challen, A. R. Chowdhury, S. Patel, P.
Bonato, and M. Welsh, “Mercury: A
Wearable Sensor Network Platform for High-Fidelity Motion Analysis,”
in Proc. Conf. on Embedded Networked Sensor Systems, Nov. 2009. Due 8 November.
- Dominique Guinard, and Vlad Trifa, “Towards the Web of Things: Web
Mashups for Embedded Devices,” in Proc. Wkshp. on Mashups,
Enterprise Mashups and Lightweight Composition on the Web, April 2009.
Due 8 November.
- Norbert Seifert and Nelson Tam, “Timing Vulnerability Factors of
Sequentials,” in IEEE Trans. on Devices and Materials
Reliability, vol. 4, no. 3, September 2004. Due 10 November.
- Justin M. Bradley and Ella M. Atkins, “Computational-Physical State
Co-Regulation in Cyber-Physical Systems,” in
Proc. Int. Conf. Cyberphysical Systems, April 2011. Due 15 November.
- Byung-Gon Chun, Sunghwan Ihm, and Petros Maniatis, “CloneCloud: Elastic Execution between
Mobile Device and Cloud,” in Proc. The European Professional
Society on Computer Systems, April 2011. Due 17 November.
- Koen De Bosschere, Wayne Luk, Xavier Martorell, Nacho Navarro, Mike
O'Boyle, Dionisios Pnevmatikatos, Alex Ramirez, Pascal Sainrat, André
Seznec, Per Stenström, and Olivier Temam, “High-Performance Embedded
Architecture and Compilation Roadmap,” in Springer Trans. on
High-Performance Embedded Architectures and Compilers I, vol. 4050, 2007, pages
5–29. Due 22 November.
- André DeHon and Helia Naeimi, “Seven Strategies for Tolerating Highly
Defective Fabrication,” in
IEEE Design and Test of Computers, pp 306–315, vol. 22, no. 4,
July 2005. Due 29 November.
- Sravanthi Chalasani and James M. Conrad, “A Survey of Energy Harvesting
Sources for Embedded Systems,” in Proc. SoutheastCon, pp
442–447, April 2008. Due December 1.
- “DC–DC Converters: A
Primer,” Jaycar Electronics Technical Report, 2001. Read by
December 1, but no summary needed.
Page maintained by
|
OPCFW_CODE
|
4-20mA loop energy harvester
I'm designing a board which is powered from a 4-20 mA HART loop, with only two wires.
I actually have about 19-20 V available, due to the 250 ohm resistor on the master side.
I would like to power two electronic stages: 12 V @ 4 mA and 5 V @ 50 mA.
I was trying to use a buck converter to generate 12 V, in series with an LDO for the 5 V rail, to power a microcontroller, a sensor, and a wireless solution.
By simulating the power stages, I saw that the input current is much higher than 20 mA. Adding a bulk capacitor reduced it, but it is still above 20 mA, and I'm worried that will cause problems.
I tried a DC-DC converter like the TSM2405S from Traco Power and was able to power the rest of the electronics, but it was not a good solution: I got less than 5 V out because the input source is less than 24 V. It did work, though, and I did not lose the HART communication. The board will not cause any variation of the loop current; I only need the HART signal riding on the 4-20 mA loop.
Is the architecture of the power stage good, or should I change the topology? What is the best way to get power from a 4-20mA loop?
I'm pretty sure you can't count on a 4-20 mA loop for the amount of power you're looking at here.
Normally if you are getting power from a 4-20mA loop you can only count on about 3.6mA, since the signal can be anywhere from 4-20mA nominally, and you want to allow it to go under and over-range since those are also important- so with (say) 12V drop you'd have about 43mW. Even with a 24V drop you'd have less than 100mW. That requires some care if you need galvanic isolation since there's not a lot of room for quiescent current (and it's generally considered rude to drop too much voltage since the user might want to put other things in series).
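The budget described above can be sketched numerically (the 3.6 mA floor and the voltage drops are the assumptions stated in the answer, not hard limits from any standard):

```python
# Power budget when parasitically powering a device from a 4-20 mA loop.
# Assumption (from the answer above): only ~3.6 mA is safely available,
# since the loop must still be able to signal under- and over-range.
def loop_power_budget_mw(drop_v, usable_ma=3.6):
    """Milliwatts available for a given voltage drop across the device."""
    return drop_v * usable_ma

print(loop_power_budget_mw(12))  # about 43 mW at a 12 V drop
print(loop_power_budget_mw(24))  # still under 100 mW even at a 24 V drop
```

At 50 mA for the 5 V rail alone, the asker's load is well over an order of magnitude beyond this budget, which is the core of the problem.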
If your output signal is related to the input signal (for example, a npn-inverting isolator) you can, of course, have more output current for more input current.
Thank you for your reply.
I tried a DC-DC converter like the TSM2405S from Traco Power and was able to power the rest of the electronics, but it was not a good solution because I got less than 5 V in output. It works, though, and I did not lose the HART communication.
There will not be any variation of the current on the board; I need only the HART signal riding on the 4-20 mA loop.
|
STACK_EXCHANGE
|
Android application out of memory without dealing with images
I'm getting an out of memory exception thrown when trying to send an email with an attachment.
Unfortunately unless I increase the size of heap allocation I can't seem to diagnose/fix the problem. The attachment as far as I can tell is not particularly large and it works on some installations but not others.
07-18 15:29:58.912 2471-21587/uk.co.nwhub.nwtapp E/art: Throwing OutOfMemoryError "Failed to allocate a 37440820 byte allocation with 16777120 free bytes and 18MB until OOM"
07-18 15:29:58.922 2471-21587/uk.co.nwhub.nwtapp E/AndroidRuntime: FATAL EXCEPTION: IntentService[EmailService]
Process: uk.co.nwhub.nwtapp, PID: 2471
java.lang.OutOfMemoryError: Failed to allocate a 37440820 byte allocation with 16777120 free bytes and 18MB until OOM
at java.lang.AbstractStringBuilder.enlargeBuffer(AbstractStringBuilder.java:95)
at java.lang.AbstractStringBuilder.append0(AbstractStringBuilder.java:133)
at java.lang.StringBuilder.append(StringBuilder.java:124)
at libcore.net.UriCodec.appendEncoded(UriCodec.java:119)
at libcore.net.UriCodec.encode(UriCodec.java:133)
at java.net.URLEncoder.encode(URLEncoder.java:57)
at com.amazonaws.util.HttpUtils.urlEncode(HttpUtils.java:74)
at com.amazonaws.auth.AbstractAWSSigner.getCanonicalizedQueryString(AbstractAWSSigner.java:173)
at com.amazonaws.auth.AWS3Signer.sign(AWS3Signer.java:112)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:326)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:199)
at com.amazonaws.services.simpleemail.AmazonSimpleEmailServiceClient.invoke(AmazonSimpleEmailServiceClient.java:2630)
at com.amazonaws.services.simpleemail.AmazonSimpleEmailServiceClient.sendRawEmail(AmazonSimpleEmailServiceClient.java:1525)
at uk.co.nwhub.nwtapp.services.EmailService.onHandleIntent(EmailService.java:126)
at android.app.IntentService$ServiceHandler.handleMessage(IntentService.java:65)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:145)
at android.os.HandlerThread.run(HandlerThread.java:61)
All the help/guidance I am reading states what to do in this scenario when you are dealing with images. I am, however, not using any images within this activity. What can I do to diagnose or resolve this?
Check the Android documentation. I'm gonna go out on a limb and say you have a pretty large image (https://developer.android.com/training/displaying-bitmaps/load-bitmap.html). You can (and I'd probably recommend you) use an image-loading library like Glide, Fresco, UIL, PhotoView...
AWS's code is trying to allocate 37440820 bytes = ~36MB. That is not going to work reliably. Use whatever AWS offers for support, showing them your code and stack trace.
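One plausible explanation for the size (an assumption on my part, not something the stack trace proves): sendRawEmail builds and URL-encodes the whole base64-encoded MIME message in memory, so an attachment balloons well past its on-disk size. A quick sanity check of the inflation:

```python
import base64

# Assumption: the raw email is assembled fully in memory. Base64 inflates
# an attachment to 4/3 of its size, and the URL-encoded copy is held at
# the same time, so a modest file can multiply into tens of MB of heap.
def encoded_attachment_size(raw: bytes) -> int:
    """Size in bytes of the base64 form of an attachment."""
    return len(base64.b64encode(raw))

payload = b"\x00" * 3_000_000            # a 3 MB attachment stand-in
print(encoded_attachment_size(payload))  # 4_000_000 bytes after encoding
```

So an attachment that looks "not particularly large" on disk can still drive a 36 MB allocation once encoded, which would explain why it fails only on devices with less free heap.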
You can set a breakpoint at com.amazonaws.util.HttpUtils.urlEncode(HttpUtils.java:74),
then run the application with "Debug app" and inspect the very large string.
I can't do that if this is a production app which has been signed with a different key to my Android Studio/development key, though, can I? I can't uninstall the app from the tablet as that would mean losing the app's data.
OK. I think there are some options: (1) change the package name, then do a "Debug Run"; (2) do a "Debug Run" on another device.
|
STACK_EXCHANGE
|
Can a X.509 Certificate be signed by two CAs with the same key?
I have set up a private CA to sign local certificates. In the CA certificate is a, let's say, typo that I want to fix in future versions of the CA but I want to keep the already issued certificates trusted.
So let's say I gave Alice the root CA cert so my issued certs are trusted by her. Now I make the new version of the CA without the typo with the same private key that I generated the first CA with. I give the new CA version to Bob.
Is it possible to have a single issued certificate be trusted by each Alice and Bob?
How would the OpenSSL command for that look like?
A certificate's identity is defined by the combination of its public key and subject. If you change either, it is a different certificate. If you keep both the same but change something else, it is considered the same certificate. The same goes for a CA, which is again identified by the combination of the public key and subject in its own certificate, and by the issuer field and signature (which is derived from the CA's key, amongst other things) in issued certificates.
A certificate has only one issuer field and one signature field, which means a single certificate can only be signed by one CA. Note that an entity can have multiple certificates, signed by different CAs, but that's not relevant here.
In your case, as your public key is fixed, if you change the subject then you have a different certificate and hence it is a different CA.
If the typo is, for example, the wrong year in the expiry date then you could do as you're suggesting. You then issue a single certificate to your end-entity from either CA and it will be trusted by both Alice and Bob. The chain of trust will, of course, expire at different times in this example.
This is still a bit of a hack, though. If you want to fix this properly, issue the same root CA certificate to both Alice and Bob and remove the original from Alice's trust-anchor store.
If, in addition to the keypair, the name of the CA (in the Subject and Issuer fields) remains exactly the same (no 'typo' there), and, if any of the existing child certs used the issuer+serial form of the AKI (which is likely if you used OpenSSL), also the serial number, then YES. See e.g.:
https://security.stackexchange.com/questions/17331/is-it-possible-to-modify-a-ca-without-having-to-reissue-all-of-the-derived-certi
https://security.stackexchange.com/questions/234547/impact-of-root-certificate-renewal
https://serverfault.com/questions/861975/re-issuing-self-signed-root-ca-without-invalidating-certificates-signed-by-it
To do this with openssl there are 3 approaches:
1. Follow the usual procedure for creating a self-signed cert from an existing key: openssl req -new -x509, specifying the key in the config file or with -key (but not -newkey). Specify the subject (and issuer) name in the config file with prompt=no, or with -subj; either way you must get it exactly right. Specify the serial if needed (or wanted) with -set_serial. Specify validity with -days. Specify extensions in the config file (selected by x509_extensions in the config or -extensions on the command line), or, in 1.1.1 only, with -addext.
The config file for req must be accessible as a file by the program, but not necessarily a 'real' (permanent) file. In particular, on Unix with some shells you can use process substitution <(...) (or in zsh =(...)) to create a temporary file with possibly dynamic contents. There are quite a few answers on several Stacks about using that technique to supply a SAN on the command line before (or without) 1.1.1.
2. Use openssl x509 -x509toreq -in oldcert -signkey keyfile -out csr (you can use stdin and stdout redirection instead of -in -out), followed by openssl x509 -in csr -req -signkey keyfile [-set_serial n] -days d -extfile file [-extensions section] -out newcert (ditto). You can combine these as a pipe: openssl x509 -x509toreq ... | openssl x509 -req .... For the second step you can instead use openssl req -in csr -x509 (without -new) and otherwise the same options as #1, but in 1.1.0 and up you must specify the input CSR file explicitly, not by redirection or piping.
3. Use openssl x509 -in oldcert -signkey keyfile -clrext -extfile file [-extensions section]. This will automatically keep the serial; you can't change it even if you want to. This approach also has an option, -preserve_dates (1.1.1 only), to keep the existing validity period instead of setting a new one starting now.
To be clear, the result will enable a child cert to be trusted only if it is in itself a valid CA cert. If your 'typo' fix is to change BasicConstraints from ca:true to ca:false -- which I would not consider a 'typo' -- then the result will chain correctly with an existing child but will not validate it, because a child cert issued by a non-CA cert is not valid.
|
STACK_EXCHANGE
|
Count the number of records of a section
I would like to add a script in the modify_homepage plugin to show the current number of records in specific sections. I assume that I need to add a PHP script, like
<?php echo COUNT("Variable"); ?>
Unfortunately, all my efforts of editing a "Variable" have failed. May I have a hint?
There are a few ways that you can display the number of records in a section. A very simple method is with the mysql_count() function:
<?php echo mysql_count('section_name'); ?>
Note that this function will return the number of all records in a section, including any that have been hidden/disabled. If you want to make sure that the number reflects only the records returned by getRecords(), you can use the metadata returned by getRecords() itself, something like this:
<?php
list($sectionRecords, $sectionMetaData) = getRecords(array(
    'tableName'   => 'section',
    'loadUploads' => true,
    'allowSearch' => false,
));
echo $sectionMetaData['totalRecords'];
?>
You'll want to use the same options here as you use for that section on your front-end pages.
Let me know if you have any other questions!
I believe something like this should work:
global $CURRENT_USER; echo mysql_count('section_name', ['createdByUserNum' => $CURRENT_USER['num']]);
This assumes that the section has been set up to use the createdByUserNum field, though most sections should by default.
Let me know if this works for you!
This is great! It seems it does work. And a final issue. I need to count the number of records from the current section (i.e. section_a) that a field takes the value of this specific field from the $CURRENT_USER. In my case each $CURRENT_USER works in a 'hospital' (field name). I want to count the records from the specific section (i.e. 'section_a') where the field 'hospital' has the value of the 'hospital' of the specific '$CURRENT_USER'. Based on your script, I am using the following one, but it turns out 0 (zero).
global $CURRENT_USER; echo mysql_count('section_a', ['hospital' => $CURRENT_USER['hospital']]);
Obviously, there is a mistake.
You're using the right approach; however, there is clearly a mismatch somewhere. The first things that come to mind are:
- Are the 'hospital' fields single- or multiple-choice? This can impact how we perform the query.
- Are the 'hospital' choices configured the exact same between the user and section_a?
It worked. Actually, the script was correct but turned out 0 because the username I used was not assigned to any hospital!! When I used a different username (assigned to a specific hospital) it gave the correct number.
Thank you very much for your assistance.
I would appreciate if you gave me an extra hint on the previous issue.
If I wanted to limit the count in a certain period of time (for example after January 1, 2019), should I use the following code?
mysql_count('section_name', ['hospital' => $CURRENT_USER['hospital']], ['date_of_treatment' >= '01/01/2019']);
or, between Jan 1, 2019 and June 1, 2019
mysql_count('section_name', ['hospital' => $CURRENT_USER['hospital']], ['date_of_treatment' >= '01/01/2019' && 'date_of_treatment' <= '01/06/2019']);
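I can't speak to mysql_count's exact signature for range conditions, but the underlying SQL pattern is a COUNT with a BETWEEN clause. A sketch of that pattern using sqlite3 (the table and column names are just illustrative, not the CMS's API):

```python
import sqlite3

# Illustrative only: this mirrors the query the CMS would run under the
# hood, with hypothetical sample data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE section_a (hospital TEXT, date_of_treatment TEXT)")
conn.executemany(
    "INSERT INTO section_a VALUES (?, ?)",
    [("General", "2019-02-10"), ("General", "2018-12-30"), ("City", "2019-03-01")],
)

def count_in_range(hospital, start, end):
    """Count records for one hospital with a treatment date in [start, end]."""
    cur = conn.execute(
        "SELECT COUNT(*) FROM section_a "
        "WHERE hospital = ? AND date_of_treatment BETWEEN ? AND ?",
        (hospital, start, end),
    )
    return cur.fetchone()[0]

print(count_in_range("General", "2019-01-01", "2019-06-01"))  # 1
```

Note that string comparison only works here because the dates are stored ISO-style (YYYY-MM-DD); DD/MM/YYYY strings like '01/06/2019' do not compare chronologically.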
|
OPCFW_CODE
|
After seeing a comment on another thread about not knowing how to perform smooth acceleration of objects, I decided to throw together a quick demo of how I implement such things.
This may not be the most idiomatic IntyBASIC code. But, looking at the compiled output, it's not too bad.
nanochess: If you think this is worth including in the contrib directory, be my guest.
The heart of the code is the UpdatePhysics procedure. It implements some really straightforward equations:
Velocity1 = Velocity0 + Acceleration
Position1 = Position0 + Velocity1
The devil, of course, is in the details: Shifting, rounding, handling signed vs. unsigned values, and of course, handling wraparound on the screen.
'' ======================================================================== ''
'' UpdatePhysics                                                            ''
''                                                                          ''
'' This computes the new position and velocity given the current velocity   ''
'' and acceleration.                                                        ''
''                                                                          ''
'' AX, AY give the new acceleration input for the selected object.          ''
'' The acceleration inputs are zero for all other objects.                  ''
'' ======================================================================== ''
UpdatePhysics: PROCEDURE
    ' Compute new velocity for selected object.
    #V = VX(SEL) + AX
    IF #V > 127 THEN #V = 127
    IF #V < -127 THEN #V = -127
    VX(SEL) = #V

    #V = VY(SEL) + AY
    IF #V > 127 THEN #V = 127
    IF #V < -127 THEN #V = -127
    VY(SEL) = #V

    ' Compute new position for all 8 objects.
    FOR I = 0 to 7
        #V = VX(I) * 4
        #PX(I) = #V + #PX(I)
        ' Stay on visible display by keeping X in [0, 168].
        IF #PX(I) >= 168*256 THEN #PX(I) = (168*256 XOR (#V > 0)) + #PX(I)

        #V = VY(I) * 4
        #PY(I) = #V + #PY(I)
        ' Stay on visible display by keeping Y in [0, 104].
        IF #PY(I) >= 104*256 THEN #PY(I) = (104*256 XOR (#V > 0)) + #PY(I)
    NEXT I
    END
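For readers who don't speak IntyBASIC, the same idea can be sketched in Python: clamp velocity to a signed-byte range, then advance an 8.8 fixed-point position and wrap it onto the screen. (The modulo here stands in for the XOR/add wraparound trick in the original; the constants match the 168-pixel-wide display.)

```python
SCREEN_W = 168 * 256  # screen width in 8.8 fixed point

def clamp_velocity(v):
    """Saturate velocity to the signed range the original stores per object."""
    return max(-127, min(127, v))

def update_position(pos, vel):
    """pos' = pos + 4*vel, wrapped onto the visible display."""
    return (pos + vel * 4) % SCREEN_W

v = clamp_velocity(200)               # saturates at 127
p = update_position(SCREEN_W - 2, 1)  # wraps around to 2
print(v, p)
```

Integrating velocity into a fixed-point position each frame is what gives the smooth sub-pixel acceleration the demo is about; the clamp keeps the velocity inside the storage range.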
Also, the code implements its own 16-direction disc decoder, since I didn't see how to do that with IntyBASIC's built-in primitives. I wanted to be sure I clearly separated each input into keypad vs. action vs. disc, while also supporting all 16 directions. So, I sample CONT once (OK, technically twice), and do the rest of the decode myself. This gave more consistent results than sampling CONT multiple times.
|
OPCFW_CODE
|
//
// RightAlignedIconButton.swift
//
//
// Created by Enes Karaosman on 13.01.2021.
//
import UIKit
/// Text is centered in button, icon is right aligned.
public class RightAlignedIconButton: Button {
public override func layoutSubviews() {
super.layoutSubviews()
contentHorizontalAlignment = .right
semanticContentAttribute = .forceRightToLeft
}
public override func titleRect(forContentRect contentRect: CGRect) -> CGRect {
let titleRect = super.titleRect(forContentRect: contentRect)
let imageSize = currentImage?.size ?? .zero
let availableWidth = contentRect.width
- imageEdgeInsets.right
- (CGFloat(2) * imageSize.width)
- titleRect.width
return titleRect.offsetBy(dx: -round(availableWidth / 2), dy: 0)
}
}
|
STACK_EDU
|
What kind of culture would an undead nation develop?
Imagine a high fantasy world where some powerful necromancy was developed. After a massive war, a few liches began raising undead from the bodies and souls of the fallen warriors using a lengthy ritual. These undead are intelligent, mentally independent, have some of the same drives as their original selves, but lack a memory of their past life. The undead have banded together into a small nation.
The typical undead citizen is a skeleton, although a few are ghosts or zombies. If properly trained they can become liches themselves, and can raise their own undead. This creates a sort of hierarchical family, with the lich as the head of an extended family.
What kind of culture would arise in this nation of undead? How would it be different from a kingdom of humans?
I think it could develop almost any culture you desire. A lot might be based on circumstances, and the main 'drives' might be crucial to determining the outcome. To sum it up, I'll explain it in terms of writing or pure worldbuilding - the result might simply be what you want it to be, as long as you keep being acceptably consistent in the long run.
You described a clan-based structure. Were I resurrected into a clan with a chief unworthy of my loyalty...and had to serve that same chief forever...I might consider it more akin to perpetual servitude.
I voted to close this question because it needs clarity. Groups form governments to meet their needs. The question needs to posit the needs and wants of the undead for there to be an answer that isn't opinion-based.
I think this is the basis for a really interesting worldbuilding query. Too bad it's gotten some answers, because now it can't be fixed. I think if you were to refine your wording and spell out some constraints, you could have an interesting question on the very nature of unalive culture, as opposed to what elements an unalive culture might comprise.
@elemtilas - open to ideas how to fix the question. I thought it was pretty clear and it got some great answers so far. It is sad that "opinion-based" is over-used, especially for fictional world-building.
I assume that these undead are the typical ones: they do not age or die of old age.
As a consequence, the biggest cultural differences would be those in reproduction and inheritance.
Where do they get the bodies to create new members? Are they on relatively civil terms with their neighbors and can trade for them? Are they continually under attack, or conversely continually invading, and therefore can use the corpses from the battlefield? Depending on how often new ones are created, this can be a major part of their culture, or a minimal one, far below the importance of raising children.
The only undead who need to be educated are the newly created ones, and these are created fully grown. Education would be geared toward them, and given the concentration of their creation to near the parental liche, educational institutions would be severely localized. A liche who created dead over a region would probably have specific outposts to do so, precisely so they can be outfitted to deal with new undead.
On the other hand, their immensely long "lives" let them learn many things a living being would have little time for. You can learn how to weave in the manner of an ancient civilization from someone who actually learned it from the practitioners. They may regard themselves as the truly civilized, the preservers of the knowledge of all ages.
Finally, there is no way up in the cultural hierarchy except through the violent death of your superiors or through expansion. If the vast majority of your undead are happy to "live out lives" of culture and knowledge and use, that can produce a stable society. (Perhaps the odd-balls are sent out to adventure and get themselves "killed.") If any sizable number are not happy, you have a society of intense internal violence (overt or covert) or an expansionist one. How rapidly it expands depends on whether their long "lives" increase their patience. A skeleton that knows that in a couple of millennia he can become the Supreme Emperor of the Delectable Islands can work more slowly than a human who will have at most thirty years to conquer -- but if the skeleton does not adjust to his undead timeframe, he may act as if he had decades.
You can break a culture down into practical solutions to problems, and answers to human questions: who we are, where we come from, how to live life, etc.
The other aspect is to solve actual problems.
If it's too hot, you develop architecture and technology so that you can exist in the place, and then, with time, so that you can actually enjoy your time there.
Like it's too cold outside, so we developed heating so we can enjoy an evening at home while it's freezing outside.
But if your skeleton does not feel the cold, would it need heating?
So you have to simply remove the vast majority of things in any human culture, as each is a result, whether direct or not, of needs.
Obviously there are other layers.
For example, a functional $2,000 car does the job of a $200,000 car, but owning the expensive one has other benefits.
However, neither would exist in the first place if the need to travel did not exist.
So, to conclude, I think you have one of two ways.
The first: create a clear and reasonable need or want for your dead.
Only after you have done that, start thinking of how to develop a culture around it.
Like how a lot of human civilization started around fresh clean water, or how iron was an important resource, or how agriculture changed society...etc.
Like a magical resource that they can use.
This resource enables them to control more land or have more power or raise more dead...etc.
And thus around it grows all sorts of things.
Like if it's a crystal, then going around wearing robes made of super expensive crystals becomes a symbol of power.
The second is much darker, and interesting.
Just have your dead emulate humans without the actual need or want.
Silk robes while they don't feel touch, expensive rings but they don't see color, comfortable beds but they don't even sleep, super exotic meals they can't even consume...etc.
So the whole society is a bit of a joke, as they pile up more and more expensive and rare stuff that means nothing to them, in a mockery of their human selves, or in an inescapable pit of consumerism even after death.
That's how I see it anyway
|
STACK_EXCHANGE
|