Nowadays we are all pretty much accustomed to using CSS3 transition effects. All modern browsers support this feature (except some specific versions of IE). We use transitions to make our sites more beautifully functional, and they make our lives easy: as we all know, they have eliminated a great deal of the effort that previously had to be done with jQuery or other libraries.
Every Drupal version ships with many core and third-party modules for building a website, but these modules are often not enough for a specific requirement. At that point we need to create our own custom module to complete the required task. In this post I will discuss creating a custom module in Drupal 8.
Nowadays we are all crazy about watching videos on YouTube, but sometimes we want something unique. We place videos into our websites as embedded YouTube videos, which is pretty cool, and we usually handle all of that manually. Sometimes, though, we feel like we need more: if we could control the player with our own customized Pause/Play option, it would be fun 🙂 If we could pause or stop the video while handling another event on the same page, it would be more dynamic too 🙂 The fact is, all of this is available, and that's pretty cool stuff 🙂
So without wasting time, let's start.
Initial Solr Setup:
1. Install the latest Java JDK from http://www.oracle.com/technetwork/java/javase/downloads/index.html. (Make sure to select the 64-bit version if you need it.)
2. Download Solr 1.4.1 from one of the mirrors at http://archive.apache.org/dist/lucene/solr/1.4.1/ (at the time of writing, not all mirrors seem to be hosting 1.4.1, but most seem to have at least 1.4.0).
A few years back we faced a lot of trouble using our favorite fonts for text on a website. Now @font-face has made our lives very easy: we can use our favorite fonts in any text and in any browser.
So without wasting any time, let's get started.
First of all, what is @font-face? It is a CSS at-rule that lets us define a custom font family by pointing the browser at a font file, which it downloads and uses to render text.
In real life, routing is nothing more than selecting the exact path to execute a process or function. If we relate routing to Drupal, how does it look? Nothing to worry about: it's already part of the Drupal setup. In Drupal 7 routing is handled through hook_menu(), but in this article we will see how routing is handled by a routing.yml file in Drupal 8. To get the equivalent of hook_menu() in Drupal 8, we have to create a "module_name.routing.yml" file in the module directory. So let's check with an example:
Example to use hook_menu() in Drupal 7:
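A minimal sketch of a typical Drupal 7 hook_menu() implementation (the module name "mymodule" and its path are hypothetical placeholders):

function mymodule_menu() {
  $items = array();
  $items['mymodule/hello'] = array(
    'title' => 'Hello world',
    'page callback' => 'mymodule_hello_page',
    'access arguments' => array('access content'),
    'type' => MENU_NORMAL_ITEM,
  );
  return $items;
}

The Drupal 8 counterpart lives in mymodule.routing.yml instead of a hook:

mymodule.hello:
  path: '/mymodule/hello'
  defaults:
    _controller: '\Drupal\mymodule\Controller\HelloController::content'
    _title: 'Hello world'
  requirements:
    _permission: 'access content'

Both declare the same three things: a path, the code that renders it, and who may access it.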
Posted 11 June 2011 - 08:33 AM
Today's WoE video is postponed as I have a lot to do this week; will have it up ASAP.
Don't work too hard and stay supa cool.
Posted 12 June 2011 - 10:31 AM
http://www.rolegends.com - (Under Construction)
Please post here or PM a Legends member in our guild spot for recruitment. Thank You.
Guild: The Legends
Level requirement: None
Guild spot: Alde West of kafra
The Legends is a guild where you can have fun, meet new people, and just enjoy the game as it was meant to be enjoyed, not turned into a contest of whose e-ego is bigger. With that said, there is minimal to no drama, and I can really say that makes a difference.
We spend a lot of our time chatting, either in guild or on Ventrilo, while we level, MvP, item hunt, quest, AFK, or just hang out. We are also always looking for new ideas for guild events and opinions on what you would like to do.
Legends is a WoE guild: we need people who will show up to WoE at least once a week, and preferably be around during non-WoE hours, as WoE is only one aspect of this game and it's important not to forget about the rest! Anyway, with WoE being as messed up as it is right now, we try to make the best of it, avoiding the larger overpowered guilds when possible and fighting the rest of the server for some fun. We usually only pull 10-20 people right now, but we can still kick ass when we organize!
I don't want to add too much more, however. It's hard to express how much fun I've had being in this guild over the past 4 years. Until you've experienced Legends for yourself you shouldn't rule us out!
Edited by Aishu, 12 June 2011 - 10:33 AM.
Posted 20 June 2011 - 01:38 PM
so come join!
Posted 21 June 2011 - 06:36 AM
Hello, do you guys have room for an assassin? Tirizan is the name, lvl 94/50. It's getting hard and slow to level. I'm a casual player, just looking for a good group of people to play with. I'm also working on a knight.
If you haven't yet, stop by Alde. We usually hang out near the Kafra.
Downloading Vim
Vim is available for many different systems and there are several versions; download the build that matches your platform.

Ruby Version Manager (RVM)
RVM ("Ruby Version Manager") is a command-line tool which allows you to easily install, manage, and work with multiple Ruby environments, from interpreters to sets of gems. Its basic requirements are bash, gpg2, curl, and, overall, GNU versions of the core tools, but RVM tries to autodetect and install anything that is needed.

How do I install RVM on Windows 7? RVM itself does not run natively on Windows. On Windows machines you can use RubyInstaller instead: it gives you everything you need to set up a full Ruby development environment on Windows.

Step 1: Configure a development environment for Ruby development. You will need to configure your environment with the prerequisites in order to develop an application using Ruby, then install a version of Ruby (e.g., 2.x).

While Windows is not an officially supported platform for much of this tooling (Jekyll, for example), it can be used with the proper tweaks. This page aims to collect some of the general knowledge and lessons that have been unearthed by Windows users.
Java and Python are the two most trending and powerful languages of recent times, and it is quite common to get confused when it comes to picking one of the two. The most common question asked by beginners is which one is better, Java or Python. This is Sayantini from Edureka, and in today's session I will talk about how the two languages differ from one another and which one fits your goal better. So let's get started. The number of programming languages used in production and day-to-day life has seen enormous growth in the last decade; from those bustling numbers, we are going to narrow our focus to the two most popular languages that have created quite a buzz among developers as well as beginners.
So let's begin with a brief introduction of both languages. Java is one of the most fundamental languages and produces software for multiple platforms; the best thing is that it is machine independent and can be written once and run anywhere. Python, on the other hand, is a simple, easy-to-read, high-level programming language, and programmers mostly fall in love with it because of the increased productivity it provides. Both of these have been the two most popular and controversial languages of the decade. So let's move ahead and take a look at the various aspects of comparison that will help us find an answer to the question of which one is better. If we take a look at the speed of Java and Python, the former is a statically typed programming language, which makes it faster, whereas the latter is interpreted and determines the type of data at runtime, thus making it comparatively slower.
When it comes to legacy, Java's history in the enterprise and its more verbose coding style mean that its legacy systems are typically larger and more numerous, whereas Python has less of a legacy problem, which makes it harder for organizations to simply copy and paste old code. Both languages are pretty simple and easy to write, but if we look at the length of the code, Python consists of fewer lines, or shorter code, as compared to Java, and it is also easier to understand. Another characteristic is databases: Java Database Connectivity (JDBC) is the most popular and widely used way to connect, whereas Python's database access layers are weaker than JDBC, which is why they are rarely used in enterprises. If we look at practical agility, Java provides more undeviating refactoring support than Python because of its static type system and the universality of IDEs for the development of mobile and web applications, but Python has become a popular choice for all the recent technologies, like data science, machine learning, IoT, and artificial intelligence. Next up, if we look at search trends in the US and India over the last five years, the US has seen a drastic drift in the domination of the two languages: there has been significant growth in searches for Python, whereas Java has seen a gradual decrease in the graph. India has also seen growth in the case of Python.
The next point of comparison is the salary growth of Java engineers and Python engineers based on their experience. We can see that there has been steady growth in both situations over a certain period of time. If we compare the growth of the two in the case of freshers, Python has a little edge over Java due to its increased demand in recent times: nowadays the jobs are mostly related to automation and artificial intelligence, which prefer Python over Java, and that's exactly why we can see the shift in the graph. If we look at the situation of experienced engineers, however, Java dominates over time, because Java has been in use since well before Python became popular, and therefore experienced engineers find it convenient to stick to their comfort zone instead of moving to a new language.
Now, let us have a look at some of the most important aspects that make Java and Python different from each other; this might help you finally decide the winner of the two. Let's have a look at some of the basic differences. Java is a compiled programming language: the source code is compiled down to bytecode by the Java compiler, and the bytecode is executed by a Java virtual machine. On the other hand, Python is an interpreted language, as translation occurs at the same time as the program is being executed. Java supports encapsulation, inheritance, polymorphism, and abstraction, which makes it an object-oriented language. Python is also an object-oriented language, but it has an added advantage: it is also a scripting language, and it is easy to write scripts in Python. Statically typed programming languages do type checking at compile time as opposed to runtime, whereas dynamically typed programming languages do type checking at runtime as against compile time, which helps you write code a little quicker because you do not have to specify types every time. Next, if we compare the number of lines in a program, Python can perform the same action with fewer lines than the same code written in Java, as in the following example.
We are printing the statement "hello world" in both languages, but in Java we need to define a class and a main function, which makes it a three-line program already, whereas in Python we can just use the print function for the statement. In the Java programming language, if you miss the semicolon at the end of a statement it will throw an error, but there is no such need for a semicolon to end a statement in Python. Another important difference in the syntax of the two languages is indentation: in Java you must define a particular block using curly braces, otherwise the code won't work, but in Python there is no sign of any curly braces; instead, indentation is mandatory, which also improves the readability of the code. So if we take a closer look at all these aspects of comparison, we can say that Python has a slight edge over Java, and it would be fair to declare the former the winner of this battle. So, what do you think? Do let us know your opinion in the comment section below, and also mention other aspects where you think Java wins over Python. Thank you and happy learning. I hope you have enjoyed reading this post. Please be kind enough to like it; you can comment with any of your doubts and queries and we will reply at the earliest. Do look out for more posts on our blog, and subscribe to the whitedevil blog to learn more. Happy learning.
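The on-screen example is not reproduced in the transcript; a minimal reconstruction of the comparison being described (the class name is arbitrary):

// Java: a class and a main method are required just to print one line
class HelloWorld {
    public static void main(String[] args) {
        System.out.println("hello world");
    }
}

# Python: the same program is a single statement
print("hello world")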
@clanmills I have the feeling that for some macOS is the opposite of FOSS, but in my opinion it has more FOSS components than Windows. Also, Apple products are $$$$$. And when are they updating the Mini?
Still Dual boot Manjaro Plasma 5 & Win7.
But in Manjaro side 90% of the time.
Tho still use Lightroom 3.6 on win side.
And due to the abysmal graphics tablet support in Linux.
Using Krita on windows side with my Huion H610 graphics tablet.
@afre I’m not an Apple Fan Boy. However the machines work well and give me a solid platform on which to develop Exiv2. If they don’t upgrade the mini this year, I guess they’ve killed the product line. And as they appear to have neglected the Pro, perhaps they’ll only ship iMacs and laptops (Air and MBP) in future. We’ll see.
Apple products cost $$$; however, they last a long time. I purchased an iMac in 2007 and (apart from a disk replacement) it lasted for 9 years.
Ofnuts Jr’s 5yo MacBookPro just died, and he is considering replacing it with the N-1(*) version (the one that still has USB ports) because the newer ones are seen as a step backwards…
(*) HEAD~, for the Git mavens.
Thanks, @HIRAM. The Apple Fan Boys are zealots who believe in the inherent superiority of Apple products. I don’t. Windows/Linux/MacOSX are all good and evil in different ways. I’ve made my living by knowing how they work. Guru? Engineer is praise enough, thanks.
I have Mint and Win7 installed in dual boot. I answered Windows for the poll, but once Win7 will cease to be supported, I’ll switch to Mint completely.
Arch (Manjaro KDE and KRevenge)
I suppose that’s due to many of us being developers and not “just” users. And Linux is by far the most comfortable OS for that. I do have VMs with Windows and OSX with build environments set up, and both systems are a major PITA.
OpenSUSE Tumbleweed for last 6 months because it just works but my favorite has always been Mint.
I have Windows 10 as dual boot but haven’t used it for 6 months or so. I am running out of space on my main HDD and thinking of wiping the W10 partition.
Windows 7 as main OS (for that reason I voted for Windows), but also using Sabayon on my Laptop and Manjaro (which I really like more than any other Linux distro I tried) in a virtual machine under Windows 7.
It wasn’t my intention to start a conversation about fan people. People tend to want to share what they like, though they sometimes overshare and it starts to irritate you. I have been on both sides of that.
What I vehemently dislike are self-described or hired “tech evangelists”. They trawl the net for unsavory representations of their beloved brands and aggressively troll the source into submission: killjoys.
At first I thought Jr was a 5yo considering what his next computer would be. Then, after reading the post a dozen or so times throughout the day, I realized that you were referring to the MacBook Pro's age.
What I find amusing is that a few Gentoo users have already told me "openSUSE Tumbleweed is what I always wanted from Gentoo". Tested rolling-release distros FTW.
Sometimes I think it updates too often though (once per month would have been often enough for me). Also as any leading edge it bleeds once in a while.
Hehe … releasing Tumbleweed happens whenever it has built and passed the test suite. As we normally don't use a separate update repository but just push security updates through the normal release process, you want to update at least once a week or so; I usually do that on Monday morning. (Yes, I run TW on my workstation in the office too.)
I remember this particular Cupertinoite used to visit the mac/apple user group circuit regularly.
What an insanely great time it was.
Nice follow-up to my rant-comment. Originally, I removed it for being a little negative but consider it undeleted .
I only ventured into the world of Linux after I tried the short-lived Windows version of darktable. I think I've now got the distro-hopping disease. I'm not a coder, so I go for look and feel. I also detest Windows 10 (what a mess). So I quickly went through Ubuntu, Kubuntu, Mint, and Fedora, and finally settled on openSUSE; something about the clean and efficient lines. However, I also installed Manjaro recently and was blown away by its speed and how it looked. It seems very intuitive to me and may replace openSUSE.
One thing I have noticed is that on Flickr my pictures look a tad washed out when running Firefox in Windows compared to Firefox in Linux; weird.
How do you measure this speed difference?
Friday, February 12, 2010 1:33 PM
I never have seen any good response to this.
I am using Windows 7 Professional. I am running Virtual PC 2007 SP1 with XP SP3 installed. I am having problems keeping the Tab key working in my VPC session. I have added VPCKeyboard.dll to the list of additional rules in the Local Security Policy. I am not using the variable %appdata% path, it's c:\... When I first start VPC it works fine, but after a while it stops working. In VPC I use Remote Desktop to connect to a work computer. I use VPC in full screen mode, constantly going back and forth between full screen and my host PC using the right Alt+Enter combination. It seems to happen when switching from full screen to a window and back again. I found that if I keep it in a window I don't lose the keys. That helps some, but I like working in full screen.
I had been on Vista Ultimate and never had a problem.
Any help is appreciated
Thanks in advance
Monday, February 15, 2010 3:15 AM
First of all, Virtual PC 2007 SP1 is the previous version of Microsoft's virtual machine program, and it fully supports Windows XP and Vista; that's why there was no problem when you used it on Vista Ultimate. Please notice that Windows 7 Professional is not in the Supported Operating Systems list:
Actually, Windows 7 has an updated version of the virtual machine feature named "Windows Virtual PC"; you can consider upgrading to it. I've attached the related URLs for your reference. You should check whether your PC can run Windows XP Mode first.
Windows Virtual PC
Windows XP Mode
In Virtual PC, you can view Windows XP Mode in full screen by clicking the Action button in the top menu and selecting 'View Full Screen', which is very convenient.
Currently, I use Windows Virtual PC and XP Mode in my 64 bit Windows 7 without any problem yet.
If you want to troubleshoot the issue first, you can consider the following steps and it did work for some similar issues:
1. Shut down Virtual PC after shutting down any running virtual machines.
2. Open Windows Explorer.
3. Type %AppData% in the address bar and press Enter.
4. Navigate to Microsoft\Virtual PC under the %AppData% folder.
5. Locate the file Options.xml in the above folder and delete it.
6. Restart Virtual PC and the misbehaving virtual machine and note that the Escape, Tab and other keys are now working.
NB: The issue will occasionally reappear requiring the above process to be repeated. Needless to say, that is irritating.
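For anyone who wants to script the workaround when it reappears, the whole fix amounts to deleting that one file. Something like this, run from a Command Prompt after shutting Virtual PC down, should do it (the path assumes the default location from step 4 above):

del "%AppData%\Microsoft\Virtual PC\Options.xml"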
I hope this helps.
- Marked as answer by Joe.Wu (Microsoft Employee, Moderator), Thursday, February 18, 2010 9:24 AM
Tuesday, December 27, 2011 3:01 PM
Thanks a lot for sharing! It helped me get my Tab key working.
import numpy as np
import pandas as pd
from scipy import stats, integrate
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter
from matplotlib.dates import DateFormatter, HourLocator, MinuteLocator, AutoDateLocator
import seaborn as sns
import csv
import sys
from datetime import datetime,date,timedelta
import random
from math import ceil
import math
class IterRegistry(type):
def __iter__(cls):
return iter(cls._registry)
class Taxi(metaclass=IterRegistry):
    _registry = []
    def __init__(self):
        self._registry.append(self)  # register the instance so iterating the class yields every taxi
self.remainingBatterykWh = 30.0
self.income = 0
self.travelConsumption = 0.34 #kwh per mile
self.chargingMode = 0
self.busyTime = 18*60 #unit is minute
self.endBusyTime = 21*60
self.stopShiftTime = 2*60
self.startShiftTime = 5*60
self.electricityPrice = 0.6 #dollar/kwh
self.hirePrice = 2 #dollar/mile
self.useSwapping = 0
self.swapCost = 1 #dollar/mile
self.swapStartTime = 0 #startTimeForEach ongoing Swap
self.swapTime = 5
def getTravelSpeed(self, currentTime):
if self.chargingMode == 0:
if self.getHiredOrNot(currentTime):
trafficAdjustment = self.getTrafficAdjustment(currentTime)
self.speed = max(random.normalvariate(30/60+trafficAdjustment/60,10/60),0)
self.income += self.speed * self.hirePrice
elif currentTime%(24*60) < self.stopShiftTime or currentTime%(24*60) > self.startShiftTime:
self.speed = max(random.normalvariate(10/60,10/60),0)
else:
self.speed = max(random.normalvariate(1/ 60, 1/ 60), 0)
self.remainingBatterykWh -= self.speed * self.travelConsumption
def probByBusyTime(self,currentTime):
return self.calculateProbabilityFromTime(currentTime,self.busyTime)
def probByStopShiftTime(self,currentTime):
if currentTime%(24*60) < self.startShiftTime and currentTime%(24*60) > self.stopShiftTime:
return 1
else:
return self.calculateProbabilityFromTime(currentTime,self.stopShiftTime)
def calculateProbabilityFromTime(self,currentTime,targetTime):
    # Exponential ramp: approaches 1 as the clock nears targetTime,
    # then decays twice as fast once the target time has passed.
if currentTime%(24*60) > targetTime:
return math.exp(2*(targetTime-currentTime % (24 * 60)))
else:
return math.exp(currentTime % (24 * 60) - targetTime)
def decideChargeMode(self, currentTime):
if (self.chargingMode == 1):
if random.random() < math.exp(self.remainingBatterykWh-30) \
or random.random() < self.probByBusyTime(currentTime):
#busy time must leave or leave when almost full
self.chargingMode = 0
else:
if random.random() < math.exp(-self.remainingBatterykWh) or \
(random.random() < self.probByStopShiftTime(currentTime) and self.remainingBatterykWh < 30 * 0.9):
#go to charge when battery is low or stop shift time is coming
self.chargingMode = 1
def getHiredOrNot(self, currentTime):
if currentTime%(24*60) < self.endBusyTime and currentTime%(24*60) > self.busyTime:
return random.random() < 0.95
elif currentTime%(24*60) < self.stopShiftTime or currentTime%(24*60) > self.startShiftTime:
return random.random() < 0.6
else:
return random.random() < 0.1
def getTrafficAdjustment(self, currentTime):
if currentTime%(24*60) < self.endBusyTime and currentTime%(24*60) > self.busyTime:
return -3
elif currentTime%(24*60) < self.stopShiftTime or currentTime%(24*60) > self.startShiftTime:
return 0
else:
return 3
def charge(self, currentTime, swapCapacity, chargeSpeed):
if self.chargingMode == 1:
if self.useSwapping:
if (currentTime > self.swapStartTime + self.swapTime):  # previous swap (if any) has finished: start a new one and pay for the fresh battery
self.swapStartTime = currentTime
self.income -= (swapCapacity - self.remainingBatterykWh)*self.swapCost
self.remainingBatterykWh = swapCapacity
elif (currentTime == self.swapStartTime + self.swapTime):
self.chargingMode = 0
else:
self.income -= self.electricityPrice * chargeSpeed
self.remainingBatterykWh += chargeSpeed
class Bus(metaclass=IterRegistry):
    _registry = []
    def __init__(self):
        self._registry.append(self)  # register the instance so iterating the class yields every bus
self.remainingBatterykWh = 324
self.income = 0
self.chargingMode = 0
self.travelConsumption = 2
self.tripTotal = 0
self.maxTrip = 145
self.useSwapping = 0
self.swapCost = 1
self.swapStartTime = 0
self.swapTime = 10
self.electricityPrice = 0.6
self.busyTime = 18 * 60 # unit is minute
self.endBusyTime = 21 * 60
self.stopShiftTime = 2 * 60
self.startShiftTime = 5 * 60
self.runOnNightShift = 0
def getTravelSpeed(self, currentTime):
if self.chargingMode == 0:
if currentTime%(24*60) == self.stopShiftTime:
if random.random() < 0.3:
self.runOnNightShift = 1
else:
self.runOnNightShift = 0
if currentTime%(24*60) < self.endBusyTime and currentTime%(24*60) > self.busyTime:
self.speed = max(random.normalvariate(22 / 60, 3 / 60), 0)
self.tripPrice = max(random.normalvariate(9, 1.5), 0)
elif currentTime%(24*60) < self.startShiftTime and currentTime%(24*60) > self.stopShiftTime:
if self.runOnNightShift == 1:
self.speed = max(random.normalvariate(28 / 60, 3 / 60), 0)
self.tripPrice = max(random.normalvariate(2, 1.5), 0)
else:
self.speed = 0
self.tripPrice = 0
else:
self.speed = max(random.normalvariate(25 / 60, 3 / 60), 0)
self.tripPrice = max(random.normalvariate(5.5,1.5),0)
self.remainingBatterykWh -= self.speed * self.travelConsumption
self.tripTotal += self.speed
self.income += self.speed * self.tripPrice
def charge(self, currentTime, swapCapacity, chargeSpeed):
if self.useSwapping:
if (currentTime > self.swapStartTime + self.swapTime): # first time start
self.swapStartTime = currentTime
self.income -= (swapCapacity - self.remainingBatterykWh) * self.swapCost
self.remainingBatterykWh = swapCapacity
elif (currentTime == self.swapStartTime + self.swapTime):
self.chargingMode = 0
else:
self.income -= self.electricityPrice * chargeSpeed
self.remainingBatterykWh += chargeSpeed
def decideChargeMode(self, currentTime):
if self.chargingMode == 0:
if self.tripTotal >= self.maxTrip:
self.chargingMode = 1
self.tripTotal = 0
else:
if self.remainingBatterykWh >= 324:
self.chargingMode = 0
# self.maxTrip = min(random.normalvariate(130,20),160)
class BatterySwappingStation:
def __init__(self, numberOfSlot, batteryCapacity):
self.numberOfSlot = numberOfSlot
self.batteryCapacity = batteryCapacity
self.pendingVehicles = []
self.swappingVehicles = []
self.income = 0
self.kwhPrice = 1
def swap(self, intakeBattery):
self.income += self.kwhPrice * (self.batteryCapacity - intakeBattery)
return self.batteryCapacity
def addVehicle(self, vehicle):
if len(self.pendingVehicles) > 0 or len(self.swappingVehicles) >= self.numberOfSlot:
self.pendingVehicles.append(vehicle)
return False
else:
swapResult = self.swap(vehicle.remainingBatterykWh)
return swapResult
class DCChargingStations:
def __init__(self,numberOfStations):
self.numberOfStations = numberOfStations
self.chargeSpeed = 40/60
self.chargingVehicles = []
self.pendingVehicles = []
self.income = 0
self.kwhPrice = 0.6
def addCharge(self,vehicle):
if len(self.chargingVehicles) < self.numberOfStations:
self.chargingVehicles.append(vehicle)
else:
self.pendingVehicles.append(vehicle)
def charge(self):
self.income += len(self.chargingVehicles)*self.kwhPrice*self.chargeSpeed
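A minimal, hypothetical driver for the classes above (the fleet size, day length, and charger parameters are invented; the method calls are the ones defined in this file):

if __name__ == '__main__':
    taxis = [Taxi() for _ in range(5)]
    station = DCChargingStations(numberOfStations=2)
    for minute in range(24 * 60):  # one simulated day, minute by minute
        for taxi in taxis:
            taxi.decideChargeMode(minute)
            taxi.getTravelSpeed(minute)  # drives (and drains the battery) when not charging
            if taxi.chargingMode == 1:
                taxi.charge(minute, swapCapacity=30.0, chargeSpeed=station.chargeSpeed)
    print('mean driver income: %.2f' % (sum(t.income for t in taxis) / len(taxis)))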
from pyspark.ml.clustering import KMeans, KMeansModel
from pyspark.ml.feature import VectorAssembler
# from sparkdq.models.IForest import *  # only needed for the commented-out sparkdq examples below
if __name__ == "__main__":
from pyspark.sql import SparkSession
# spark = SparkSession.builder \
# .master('local') \
# .config("spark.jars", "/Users/qiyang/project/dqlib/target/dqlib-1.0-jar-with-dependencies.jar") \
# .getOrCreate()
spark = SparkSession.builder\
.master('local')\
.getOrCreate()
sc = spark.sparkContext
rdd = spark.sparkContext.parallelize([  # rows 2 and 4 carry deliberate outliers (weight -100, height 18)
(1, "A", 19, 168, 72),
(2, "B", 22, 172, -100),
(3, "C", 23, 166, 55),
(4, 'D', 26, 18, 70),
(5, 'E', 18, 180, 68)
])
from pyspark.sql.types import StructType, StructField, LongType, StringType, IntegerType
schema = StructType([
StructField("id", LongType(), True),
StructField("name", StringType(), True),
StructField("age", LongType(), True),
StructField("height", IntegerType(), True),
StructField("weight", IntegerType(), True)
])
df = spark.createDataFrame(rdd, schema)
row_key = "id"
data_columns = ["name", "age", "height", "weight"]
x = df.rdd.flatMap(
lambda row: [(row[row_key], [row[row_key], c, row[c]]) for c in data_columns]
).collect()
print(x)
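# For this toy DataFrame the flatMap above emits one (row_key, [row_key, column, value])
# triple per data column; the first row alone becomes:
#   (1, [1, 'name', 'A']), (1, [1, 'age', 19]),
#   (1, [1, 'height', 168]), (1, [1, 'weight', 72])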
# iForest = IForest(columns=["height", "weight"])
# ifm = iForest.fit(df)
# print(ifm.summary.success)
# from sparkdq.models.LOF import *
# lof = LOF(columns=["height", "weight"])
# lm = lof.fit(df)
# print(lm.summary.dataWithScores.success)
# from sparkdq.models.CFDSolver import CFDSolver
# cfds = CFDSolver(cfds=["name@age#_&_"], taskType="repair")
# cfm = cfds.fit(df)
# print(cfm.summary.success)
# from sparkdq.models.entity.EntityResolution import EntityResolution
# es = EntityResolution(columns=["name", "age"], taskType="repair", indexCol="id")
# esm = es.fit(df)
# print(esm.summary.numOfEntities)
# print(esm.summary.message)
# esm.summary.targetData.show()
# from sparkdq.models.BayesianImputor import *
# bi = BayesianImputor(targetCol="name", dependentColumns=["age", "height"]).setTargetCol("name")
# dm = bi.fit(df)
# print(dm.summary.success)
Understanding complex molecular systems using experiments alone is difficult. Computer simulations based on physical and chemical principles can complement experiments and provide novel insights into the behavior of these systems at an atomic level. Our research targets the development and applications of state-of-the-art computational tools that explore the underlying mechanisms of complex molecular systems. Enzymes and other biological macromolecules, along with bio-inorganic ligands, are of primary interest.
Developing computational techniques and theoretical models for complex systems
A substantial amount of research activity in our group is geared toward developing novel computational techniques to make the simulation of complex biomolecular systems possible. One major area involves improving the efficiency and accuracy of combined quantum mechanical and classical mechanical methods, such that bond-breaking and bond-formation (chemistry!) can be studied in detail for realistic biological environments. Another area is related to the development of coarse-grained models for proteins and membranes, such that insights into the driving force of conformational transitions in proteins, protein/peptide aggregation and membrane remodeling processes (e.g., membrane fusion) can be obtained computationally. In these coarse-grained model developments, we explore both particle and continuum mechanics based models, and integration with not only atomistic simulations but also experimental observables such as thermodynamics data for complex solutions.
Simulation of complex molecular machines in bio-energy transduction
Biological systems involve many fascinating "molecular machines" that transform energy from one form to another. Important examples are F1-ATP synthase and proton pumps: the former utilizes the proton motive force to synthesize ATP, while the latter employs the free energy of chemical reactions (e.g., oxygen reduction) to generate the proton motive force across the membrane. With the recent developments in crystallography, cryo-EM, and single-molecule spectroscopy, the working mechanisms of these nano-machines are being discovered. In order to understand the energy transduction process at an atomic level, our group is developing and applying state-of-the-art computational techniques to analyze the detailed mechanisms of several large molecular complexes, including myosin, DNA repair enzymes, and cytochrome c oxidase. Questions of major interest include: (i) What are the functionally relevant motions of these complexes? (ii) How are the chemical events (e.g., ATP binding and hydrolysis) coupled to the mechanical (e.g., conformational transition) process? (iii) How are the efficiency and vectorial nature of energy transduction regulated?
Understanding the catalytic mechanism of enzymes
Enzymes overshadow most chemical catalysts because they are extremely efficient and highly reaction-specific. Our group is developing and applying novel computational methods to explore the physical and chemical mechanisms behind the catalytic efficiency and specificity of several fascinating enzymatic systems. These include enzymes that exploit transition metal ions (phosphatases) and radical intermediates (DNA repair enzymes). In addition to their important biological implications, an underlying theme for these systems is catalysis modulated by protein motion. Our studies will not only provide insights into the fundamental working mechanisms of enzymes, but may also lead to the rational design of proteins/enzymes (e.g., metal-ion-activated transcription factors) with improved or even altered functions.
Interfacing biology and material science
Recent decades have seen thrilling developments in the science of materials at the nanometer scale. Nano-materials with tailored electrical, optical, or mechanical properties have been synthesized. An exciting, recently recognized direction is that biomolecules can be used to provide control in organizing technologically important (non-biological) objects into functional nano-materials. The interaction between biomolecules and inorganic materials is fundamental to these applications, and we are using computational techniques to investigate this aspect. These studies are expected to play a guiding role in the design of novel hybrid materials and new sensors for biological molecules, as well as in understanding the fascinating process of biomineralization.
April 30: Final release. Ship and celebrate the release of Rails 6.0 at RailsConf 2019! Rails 6.0 will require Ruby 2.5+
Action Mailbox
Action Mailbox allows you to route incoming emails to controller-like mailboxes. Active Storage is required. Simple setup with:
$ rails action_mailbox:install $ rails db:migrate
Like the Mailman gem https://github.com/mailman/mailman
Action Text
Action Text brings rich text content and editing to Rails. It includes the Trix editor that handles everything from formatting to links to quotes to lists to embedded images and galleries. The rich text content generated by the Trix editor is saved in its own RichText model that's associated with any existing Active Record model in the application. Any embedded images (or other attachments) are automatically stored using Active Storage and associated with the included RichText model.
Simple install with rails action_text:install and running some migrations that are generated from the command.
Parallel Testing
Parallel Testing allows you to parallelize your test suite. While forking processes is the default method, threading is supported as well. Running tests in parallel reduces the time it takes your entire test suite to run.
Action Cable Testing
Action Cable testing tools allow you to test your Action Cable functionality at any level: connections, channels, broadcasts.
Multiple Database Support!! Thanks Eileen M. Uchitelle! Ported from GitHub into Rails.
follow_redirect! can include additional arguments
It's possible to pass parameters to the underlying GET request in a follow_redirect! call by adding additional arguments to the method.
Add allocations to template rendering instrumentation
ActionView now outputs object allocations to the console to help you with your performance monitoring.
File uploads behave how you expect now with Active Storage
Uploaded files assigned to a record are now persisted to storage when the record is saved, instead of immediately. In Rails 5.2, files were persisted immediately when assigned, rather than when save was called.
ImageProcessing gem to be used over MiniMagick
Use the ImageProcessing gem for Active Storage variants; the MiniMagick backend is deprecated. ImageProcessing supports some better macros such as :resize_to_fit and :resize_to_fill, and also has built-in support for libvips, which is an alternative to ImageMagick.
The change is also easily configurable using the usual Rails configuration
Rails.application.config.active_storage.variant_processor = :vips
Zeitwerk
Zeitwerk is a new code loader for Ruby. It is efficient, thread-safe, and matches Ruby semantics for constants.
Given a conventional file structure, Zeitwerk loads your project’s classes and modules on demand meaning you don’t need to write require calls for your own files.
To enable it in Rails 6, simply set
config.autoloader = :zeitwerk
Filtering sensitive parameters
If you're dealing with sensitive data you want to hide from logs, the console, etc., you can configure ActiveRecord::Base.filter_attributes with a list of Strings and Regexps which match sensitive attributes.
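A minimal example of that configuration (the attribute names here are just placeholders); matching attributes show up as [FILTERED] in logs and inspect output:

Rails.application.config.to_prepare do
  ActiveRecord::Base.filter_attributes = [:password, /token/]
end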
- Add an explicit option --using or -u for specifying the server for the rails server command. Eg:
rails server -u puma
- Add ability to see the output of rails routes in expanded format.
- Run the seed database task using inline Active Job adapter.
- Add a command rails db:system:change to change the database of the application. (eg sqlite to postgres, MAGIC)
- Add rails test:channels command to test only Action Cable channels.
- Introduce guard against DNS rebinding attacks.
- Add ability to abort on failure while running generator commands.
- Add multiple database support for rails db:migrate:status command.
- Add ability to use different migration paths from multiple databases in the generators.
- Add support for multi environment credentials.
- Make null_store as default cache store in test environment.
Atom 1.11 features performance and stability improvements and, in particular, we’re excited that Atom now asks for permission before sending any telemetry data.
On launching 1.11 for the first time, all users will be presented with this screen asking for their assistance in improving Atom by allowing us to collect information while they use the application:
This is something that a lot of users have been asking for. We’ve always had ways to opt out but it is only right to make it obvious and clear. Now it is!
Image View improvements
- Image View tabs that are in the pending state can now be confirmed by double-clicking the tabs
- The status bar now shows the size in bytes of the image as well as the width and height
- Fixed a bug where the dimensions of an image were reported as zero if more than one image was opened in the same action
Custom Title Bar
An option to show a custom title bar on macOS got added by @brumm. It will adapt to the theme colors and be less jarring when a dark theme is used. To try it out, go to
Settings > Core and enable “Use Custom Title Bar”.
Improvements and Bug Fixes
- Added a configuration option for the large file warning threshold
- Fixed a regression in the environment patching on macOS for users of the zsh shell
- Made the Split Pane menu items work the way they used to
Don’t forget to check out all the other improvements shipping with this version in the release notes!
Atom 1.12 Beta
International Keyboard Support
New APIs available in Chrome 52 allowed us to take on this long-requested feature. The new APIs turned out to be less important than we originally thought but we’re nonetheless happy to report Atom users in all locales now get typical keyboard behavior in Atom’s default installation.
Thanks to some amazing work by community maintainer @thomasjo, Atom comes out of the Electron dark ages in this release with an update to 1.3.6, bringing Chrome 52 along for the ride.
The scope specificity rules for keybindings were understandably confusing to lots of Atom users. This release includes a major simplification such that user-defined keybindings take precedence.
There are a number of tweaks specific to Atom on Windows, including:
- Shell Integration upgrade reliability
- Allow multiple instances on Windows
- Move emacs editor bindings to Darwin to avoid Windows menu conflicts
As ever, you can find all the gory details in the full release notes.
Get all these improvements today by joining the Atom Beta Channel!
Don’t see what you were hoping for here? Join the Atom team at GitHub. We’re hiring! Check out the details and apply here!
i wanna give geddy a try ..... but it looks like i can't use coffee script to write my app ...
in generators and stuff ...
Did i get it right ?
PS: i mean like geddy app Test --coffee
We don't currently have generators for it, but we do support coffeescript based apps. You'll just have to transition the generated app from js to coffeescript on your own.
We'd definitely take a pull request that added that functionality to the generators though.
I think it would be better to add a parameter to specify the folder to get the templates from instead of making it language specific. This way we could later have a version that includes, let's say LESS, SASS, different layouts, not use twitter bootstrap, etc.
The switch will default to (geddydir)/templates. Then you could git clone any template project and run $ geddy app myapp -templates $/geddycstemplates.
Also it would be nice to add a setting in config so running other generators will always use the same templates and to make it easier to share this with other devs without them having to clone the same templates we could include them in the app's root by default.
$ geddy app myapp -templates $/geddycstemplates
@Asp3ctus if you want to build the CS templates I can give you a hand adding the switch so the geddy executable will use templates located somewhere else. An easy way to test this now is to replace the files in the template folder (by default on Linux it's /usr/local/lib/node_modules/geddy/templates).
What does everyone else think?
@MiguelMadero I like the idea of custom generators. I can already think of a few uses for this kind of thing. I'm not too opinionated about the API for this kind of thing, as long as it's intuitive. Yours seems good enough, though I'd switch "templates" out for "generator".
I'd like to be able to have "named" templates that people could install via npm if they'd like:
// create a coffee app
// looks in the global node modules dir
$ npm install geddy-coffee -g
$ geddy app --generator=geddy-coffee myApp
// create a special scaffold
// looks in the local node modules dir
$ npm install geddy-facebook
$ geddy scaffold --generator=geddy-facebook user
We should have it default to looking in the app's local node modules directory, then check global. If they put a full path in though, that should always override it.
+1 for the custom generators ... i will try to play around with the templates folder ...
This is probably super-easy to do by setting an environment variable. Just need to pass it along to the Jake task.
Closing in favor of #226
PCI 1000Mbps NIC for home server?
I'm looking to buy a barebone box to use as a headless home server. I plan on loading Ubuntu Server 9.10 on it, and using it for backup, running a personal webserver, and streaming music from it using Jinzora. I've built it as a VM, and am trying to pick out optimal hardware, and I'm pretty hardware-illiterate.
I understand that I'll need to buy a 2GB stick of DDR2 RAM, and a 3.5" SATA hard drive of whatever size I feel like paying for. My question is whether I should hunt around for a 1000 Mbps NIC for optimal streaming. The questions I've read around here indicate that server performance from an Atom server like this is mostly going to be determined by the hard drive speed and the network connection, but I haven't built a server like this before. Is the built-in 10/100 Mbps NIC on the mobo sufficient, or should I attempt to find a 1000 Mbps NIC that I can stick in the PCI slot? The 1000 Mbps cards I see on Newegg are all PCI-e, so I'm not really sure what I should do.
Thanks for the help!
@Lifeson: Remember, you can always upgrade later if you make the switch to all gigabit. Best of luck to you!
I would try and buy a motherboard that has a built on 1000 Mbps card first, then I would look at 3rd party cards.
Also, are you running a gigabit network (switch/router) that this would be able to connect at full speed to? If you have a gigabit card, but no network that supports it, you will not be able to take full advantage of the speed. I recommend upgrading if you plan on doing lots of data transfers. When I upgraded, it was a very nice increase in network speeds and transfers.
I'm looking to be as cheap as possible, otherwise I'd definitely get a board with a built-in gigabit card. It's the form factor and ultra-low price tag of the Foxconn I linked that are attracting me to it.
You raise a good point about the router; I'll have to double-check when I get home.
@Lifeson: Make sure to notice what ~quack mentioned. If you are streaming outside of your home, your bottleneck will be your internet connection upload speed. Gigabit connection speeds would only be able to be utilized within your internal home network provided the proper network was in place.
I don't think PCI gigabit cards are anything more than a way to take your money. The PCI bus is your bottleneck and gigabit cards are going to be not much more than an expensive network card with a huge buffer.
You'll need a motherboard with PCI-E or PCI-X instead.
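As a rough sanity check on the numbers: conventional PCI is a 32-bit bus at 33 MHz, roughly 133 MB/s of theoretical bandwidth shared by every device on the bus, while gigabit Ethernet can carry 1000 Mbps, or about 125 MB/s, in each direction. A single gigabit NIC can therefore consume nearly the entire shared PCI bus on its own, which is why gigabit cards are normally PCI-e (with dedicated bandwidth per lane) or integrated into the motherboard.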
"""Virtual Servers."""
# :license: MIT, see LICENSE for more details.
import re
import click
MEMORY_RE = re.compile(r"^(?P<amount>[0-9]+)(?P<unit>g|gb|m|mb)?$")
class MemoryType(click.ParamType):
"""Memory type."""
name = 'integer'
def convert(self, value, param, ctx): # pylint: disable=inconsistent-return-statements
"""Validate memory argument. Returns the memory value in megabytes."""
matches = MEMORY_RE.match(value.lower())
if matches is None:
self.fail('%s is not a valid value for memory amount' % value, param, ctx)
amount_str, unit = matches.groups()
amount = int(amount_str)
if unit in [None, 'm', 'mb']:
# Assume the user intends gigabytes if they specify a number < 1024
if amount < 1024:
return amount * 1024
else:
if amount % 1024 != 0:
self.fail('%s is not an integer that is divisible by 1024' % value, param, ctx)
return amount
elif unit in ['g', 'gb']:
return amount * 1024
MEM_TYPE = MemoryType()
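# A few hypothetical conversions, derived from the rules in convert() above
# (MEM_TYPE is the shared instance defined on the previous line):
#   MEM_TYPE.convert('2g', None, None)     -> 2048    (gigabytes to megabytes)
#   MEM_TYPE.convert('512', None, None)    -> 524288  (bare values < 1024 are treated as GB)
#   MEM_TYPE.convert('2048mb', None, None) -> 2048    (already megabytes, divisible by 1024)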
In this article we explain the benefits of the integration between MS Exchange and SharePoint, and how it enables users to search and preserve content, manage cases, and export discovery data.
The 2016 edition of Exchange Server provides support for integration with SharePoint Server. This integration enables the Discovery Manager to make use of the eDiscovery Center in SharePoint for the following purposes:
Searching and Preserving Content Across Locations
An authorized Discovery Manager can search and preserve information throughout the Exchange and SharePoint servers. This also includes Lync content, derived from messaging conversations and archived documents from shared meetings.
The eDiscovery Center makes use of a case-management approach to eDiscovery; this allows the user to create cases and preserve content in different repositories for different cases.
Export Search Results
Discovery Manager can be used for exporting search results through eDiscovery Center.
For on-premises deployments, you will have to establish trust between Exchange and SharePoint before you can make use of the eDiscovery Center in SharePoint for searching Exchange mailboxes. You can make use of OAuth authentication to establish this trust. Exchange uses RBAC to authorize the searches performed by SharePoint using eDiscovery. To perform an eDiscovery search on an Exchange mailbox, the SharePoint user will need delegated Discovery Management permissions on the MS Exchange Server.
Important tips for integrating Exchange and SharePoint
- Estimated time needed would be 30 minutes.
- Specific permissions would be needed to perform the integration.
- Configure the SharePoint site, to make use of Secure Socket Layer (SSL)
- All servers running SharePoint, should have Exchange Web Services Managed API installed.
- Exchange and SharePoint can be installed in different domains, without having a relationship between these domains. The applications will make use of OAuth 2.0 protocol.
After you have taken care of the above points, you can move on to the process of integrating SharePoint and Exchange. While you engage in the integration process, you may also want to keep an OST to PST conversion tool handy so that you can export specific data for further study.
Steps for integrating MS Exchange and SharePoint
- The server to server authentication needs to be configured for Exchange, on server running SharePoint server.
- You will now have to configure server to server authentication for the SharePoint server, on server, running Exchange Server.
- Authorized users should be added to Discovery Management Role group.
Support for SharePoint integration in MS Exchange, and the use of eDiscovery for searching across mailboxes, has made preserving, managing, and exporting content a lot easier. With this integration, Exchange can be linked not only with SharePoint but also with other related applications such as Lync; this is referred to as content from a partner application. To allow partner applications access to each other's resources, server-to-server authentication will have to be configured.
Van Sutton is a data recovery expert at DataNumen, Inc., the world leader in data recovery technologies, including products to repair Outlook PST mail and recover BKF files. For more information visit www.datanumen.com
#include "ParticlePool.h"
ParticlePool::~ParticlePool()
{
particle_pools.clear();
}
GameObject* ParticlePool::GetInstance(std::string particle_type, float3 pos, float3 rotation, GameObject* parent, bool local)
{
if (particle_pools[particle_type].size() > 0)
{
GameObject* instance = particle_pools[particle_type].back();
if (instance == nullptr)
{
LOG("Error: Particle %s failed to recover from pool.", particle_type.c_str());
return nullptr;
}
particle_pools[particle_type].erase(particle_pools[particle_type].end() - 1);
instance->SetNewParent(parent ? parent : this->game_object);
if(!local)
instance->transform->SetGlobalPosition(pos);
else
instance->transform->SetLocalPosition(pos);
instance->transform->SetGlobalRotation(Quat::FromEulerXYZ(rotation.x, rotation.y, rotation.z));
instance->SetEnable(true);
LOG("HELLO ME HAGO ENABLE");
return instance;
}
else
{
GameObject* instance = GameObject::Instantiate(particle_type.c_str(), pos, false, parent ? parent : nullptr);
if (instance == nullptr)
{
LOG("Error: Particle %s failed to instantiate.", particle_type.c_str());
return nullptr;
}
// Note: freshly instantiated particles inherit the parent's rotation here;
// the 'rotation' argument is only applied to instances recovered from the pool.
instance->transform->SetGlobalRotation(instance->parent->transform->GetGlobalRotation());
if (!local)
instance->transform->SetGlobalPosition(pos);
else
instance->transform->SetLocalPosition(pos);
return instance;
}
LOG("Error: Unknown error getting particle %s", particle_type.c_str());
return nullptr;
}
void ParticlePool::ReleaseInstance(std::string particle_type, GameObject* instance)
{
instance->SetNewParent(this->game_object);
instance->SetEnable(false);
particle_pools[particle_type].push_back(instance);
}
Not a commercial application
Pro WEB is supplied with a simple test harness which can be used to verify that you have installed the product correctly and to demonstrate some of the product functionality.
The test harness can be run on either Windows or UNIX. On UNIX, the main library must be accessible as a shared object. To ensure this is the case, you can register the library with the following steps (you will need to be logged in as root):
ldconfig (or your system's equivalent).
The test harness is not intended to be used as a commercial application, and only implements a sub-set of the full functionality. It enables you to demonstrate the following functionality:
The test harness is not multithreaded, although the C API itself is threadsafe.
If you are a Windows user, you can run the test harness from the shortcut in the Program Group which was created when you installed Pro Web.
If you are a UNIX user, you can run the test harness executable ./pwctest, located within the /apps directory.
When the test harness is running, type #Help to list all the operations available with the test harness:
The searching options are listed at the top of the dialog with the commands listed below. All commands begin with '#', and are used mostly to view and change settings. You can abbreviate commands to '#' followed by the first letter of the command.
You should enter the command without arguments to view the associated options, or with arguments to select an option. For example, type #d to view a list of all available datasets, or #d [0-MaxIndex] to select a specific dataset.
This will display all the current configuration settings being used by the test harness during a search.
Enter a command or search: **#a** DataSet: USA Engine : Singleline PromptSet : Default Search Intensity: Close Search Timeout: 20000 Picklist flattening: False Picklist threshold: 50 Layout: (QAS standard layout)
This will give you a list of all currently installed Datasets. If you are using multiple datasets, this can also be used to change the active dataset. To do this, first run the command #d to get a list of the installed datasets. From there, #d + the number of the dataset can be used to toggle between datasets.
DataSets available: 0 - CAN - Canada 1 - NZL - New Zealand 2 - USA - United States of America Enter command or search: **#d** DataSet is set to CAN Enter command or search: **#d 2** DataSet is set to: USA
This will give you a list of the available Pro Web engines. This can also be used to change the active engine. To do this, first run the command #e to get a list of available engines. From there, #e + the number of the engine can be used to toggle between engines. The Pro Web engines are designed to be used for different methods of address capture.
Engines available:
0 - Singleline
1 - Verification
2 - Typedown
3 - Keyfinder
Enter command or search: **#e**
Engine is set to: 0 - Singleline
Enter command or search: **#e 1**
Engine is set to: 1 - Verification
This will display the picklist flattening setting and allow you to turn flattening on or off.
What does Flattening do?
Flattening defines whether the search results will be 'flattened' to a single picklist of deliverable results, or shown as (potentially multiple) hierarchical picklists of results that can be stepped into.
Enter command or search: **#f**
Picklist flattening can be: True/1 or False/0
Picklist flattening is set to: False
Enter command or search: **#f 1**
Picklist flattening is set to: True
This will display or allow you to change the searching intensity.
What does the intensity setting do?
This setting defines how hard the search engine will work to obtain a match. Higher intensity values may yield more results than lower intensity values, but will also result in longer search times. The default value of this setting is Close. The available values are:
Exact: This does not allow many mistakes in the search term, but is the fastest.
Close: This allows some mistakes in the search term, and is the default setting.
Extensive: This allows many mistakes in the search term, and will take the longest.
Search Intensity Levels:
0 - Exact
1 - Close
2 - Extensive
Enter command or search: **#i**
Search Intensity Level is set to: 1 - Close
Enter command or search: **#i 0**
Search Intensity Level is set to: 0 - Exact
This will display or allow you to change the active output Layout. Layouts are created in the qawserve.ini or in the Configuration Editor. Layouts allow the final output address to return formatted in a particular way.
0 - (QAS standard layout)
1 - < Default >
2 - Database Layout
3 - Barcode
Enter command or search: #l
Layout is set to: (QAS standard layout)
Enter command or search: #l 2
Layout is set to: Database layout
This will display or allow you to change the picklist threshold. This defines the threshold that is used to decide whether results will be returned in the picklist, or whether an informational picklist result will be returned, requiring further refinement. Due to the algorithms used to return result picklists, this value is used only as an approximation. The default setting is 50 items.
Enter command or search: #m
Picklist threshold is set to: 50
Enter command or search: #m 25
Picklist threshold is set to: 25
This will display or allow you to change the active Prompt Set.
What are the Pro Web Prompt Sets?
Prompt sets are designed primarily to constrain search terms for the Single Line and Typedown engines, in order to aid users to capture addresses quickly and easily.
Default: This prompt set is designed to be used with the Verification engine. This is an unconstrained prompt set that can accept one or many text field inputs with any address element in any field.
OneLine: This prompt set is designed to be used with the Single Line engine in hierarchical mode (i.e., with the Flatten engine option set to False) or the Typedown engine. This prompt set specifies a single unconstrained input line that will accept any address elements.
Generic: This prompt set is designed to be used with the Single Line engine in Flattened mode. This is a standard prompt set that can be used across multiple countries. It has four fields: "building number or name", "street", "town/city", and "postcode". This is useful in situations where the user must complete the address fields at the same time as or before specifying the country.
Optimal: This prompt set is designed to be used with the Single Line engine in Flattened mode. This prompt set defines the minimum number of fields that users must complete, in order to return the required address.
Alternate: This prompt set is designed to be used with the Single Line engine in Flattened mode. This is an extended country-specific prompt set. It is designed for cases where the user does not have the information required to fill in all fields requested in Optimal (such as when they cannot remember their postcode).
0 - Default
1 - Oneline
2 - Generic
3 - Optimal
4 - Alternate
5 - Alternate2
6 - Alternate3
Enter command or search: #p
PromptSet is set to: 0 - Default
Enter command or search: #p 1
PromptSet is set to: 1 - Oneline
This will display the system information. For example, the Pro Web version, as well as the dataset versions and expiration dates.
This will display or change the searching timeout setting.
What does the timeout setting do?
This defines the time threshold in milliseconds after which a search will abort.
A value of 0 signifies that searches will not timeout.
The default setting is 20000 milliseconds.
The maximum value is 600000 milliseconds.
Searching timeout can be: 0 (no timeout) to 600000 (max timeout); the default is 20000.
Enter command or search: #t
Searching timeout is set to: 20000
Enter command or search: #t 10000
Searching timeout is set to: 10000
The configuration layout determines the format of the returned address. The default layout is (QAS Standard Layout). You can switch to a different layout using the #Layout command.
To perform a Singleline search, follow these steps:
Press Enter to select the default layout.
Press Enter again to select the default dataset.
Alternatively, type ? and press Enter to display a list of all datasets with their identifiers. Type the relevant code (for example, DEU for Germany, AUS for Australia) and press Enter to select the related dataset.
Once you have selected a dataset, enter a search string, separating each part from the next with a comma, and press Enter. For example (if you are using the GBR dataset):
linden gardens, london
There are 181 possible matches, as shown by the Match Count.
Type 30 and press Enter to refine the picklist and to therefore reduce the number of matches. Press Enter at any prompt to remove the refine and return the list to the way it was.
There are now two matches displayed. Note that they are numbered from 1 to 2 in the picklist.
Type #1 and press Enter to select the first picklist entry.
The full address is returned.
|
OPCFW_CODE
|
Time Dependent Problems and Difference Methods
Time dependent problems frequently pose challenges in areas of science and engineering dealing with numerical analysis, scientific computation, mathematical models, and most importantly--numerical experiments intended to analyze physical behavior and test design. Time Dependent Problems and Difference Methods addresses these various industrial considerations in a pragmatic and detailed manner, giving special attention to time dependent problems in its coverage of the derivation and analysis of numerical methods for computational approximations to Partial Differential Equations (PDEs).
The book is written in two parts. Part I discusses problems with periodic solutions; Part II proceeds to discuss initial boundary value problems for partial differential equations and numerical methods for them. The problems with periodic solutions have been chosen because they allow the application of Fourier analysis without the complication that arises from the infinite domain for the corresponding Cauchy problem. Furthermore, the analysis of periodic problems provides necessary conditions when constructing methods for initial boundary value problems. Much of the material included in Part II appears for the first time in this book.
The authors draw on their own interests and combined extensive experience in applied mathematics and computer science to bring about this practical and useful guide. They provide complete discussions of the pertinent theorems and back them up with examples and illustrations.
For physical scientists, engineers, or anyone who uses numerical experiments to test designs or to predict and investigate physical phenomena, this invaluable guide is destined to become a constant companion. Time Dependent Problems and Difference Methods is also extremely useful to numerical analysts, mathematical modelers, and graduate students of applied mathematics and scientific computations.
What Every Physical Scientist and Engineer Needs to Know About Time Dependent Problems . . .
Time Dependent Problems and Difference Methods covers the analysis of numerical methods for computing approximate solutions to partial differential equations for time dependent problems. This original book includes for the first time a concrete discussion of initial boundary value problems for partial differential equations. The authors have redone many of these results especially for this volume, including theorems, examples, and over one hundred illustrations.
The book takes some less-than-obvious approaches to developing its material:
* Treats differential equations and numerical methods with a parallel development, thus achieving a more useful analysis of numerical methods
* Covers hyperbolic equations in particularly great detail
* Emphasizes error bounds and estimates, as well as the sufficient results needed to justify the methods used for applications
Time Dependent Problems and Difference Methods is written for physical scientists and engineers who use numerical experiments to test designs or to predict and investigate physical phenomena. It is also extremely useful to numerical analysts, mathematical modelers, and graduate students of applied mathematics and scientific computations.
Fourier Series and Trigonometric Interpolation
Higher Order Accuracy
Stability and Convergence for Numerical Approximations
Hyperbolic Equations and Numerical Methods
Parabolic Equations and Numerical Methods
Problems with Discontinuous Solutions
The Laplace Transform Method for Initial-Boundary-Value Problems
The Energy Method for Difference Approximations
The Laplace Transform Method for Difference Approximations
The Laplace Transform Method for Fully Discrete Approximations
Appendix A.1 Results from Linear Algebra
Appendix A.3 Iterative Methods
The Energy Method for InitialBoundaryValue Problems
|
OPCFW_CODE
|
The VBA Integrated Development Environment provides a limited set of options under Tools > Options which will let you customise the environment for your own personal needs.
Microsoft hasn’t added anything new to this part of MS Office for many years so you’re not going to get the rich interactivity of an application such as Visual Studio 2012. Nevertheless, there are some important settings which need to be understood so here’s a quick overview of some key ones together with the settings I use.
Editor – Auto Syntax Check
This setting defines whether or not you want to receive a message box when you write a line of VBA code which is syntactically invalid, like so:
Oops, I pressed Enter too early and the message box helpfully informed me what the line is missing.
If you’re new to VBA then I recommend that you have this option ticked because the message box prompt will help you work out what’s wrong with your code. Personally, I find the message box intrusive and, since I’m familiar with the VBA syntax, I leave it unticked. The invalid line will still be highlighted in red for me, but I can carry on typing whatever piece of code was on my mind and go back and correct it later.
Editor – Require Variable Declaration
Having this option ticked means that an
Option Explicit directive is automatically inserted into every new code module.
Option Explicit statements force you to declare all of your constants and variables.
The fact is that, when you start learning VBA, having
Option Explicit statements makes writing code harder. The compiler will complain that your variables aren’t declared and then, to make it happy, you have to go back and add in your variable declarations which is a real pain. To make things even worse, you’ll find that it’s best practice to give your variables an appropriate type when you declare them, so you have to learn about those too. Ugh.
Believe me when I say that the pain is worth it: having this check in place will save you countless hours of debugging because of silly typos in your code and, ultimately, it will give you a much better general understanding of VBA. Declaring your variables with appropriate types will mean that your code uses up less memory and runs faster and, when you’re using object variables (not
Object), you’ll get the benefit of VBA IDE features such as intellisense when writing code. There are some syntactical nuances you need to be aware of when declaring variables and constants so have a read through these blog posts if you’re not already up to speed with them:
Let’s just scratch that itch and fix that compile error:
By default, the require variable declaration option comes unticked, which is a travesty. If you’re remotely serious about writing decent VBA code, turn it on and leave it on. Trust me.
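To make that concrete, here's a minimal sketch (my own illustration, not from the original post) of the kind of silent bug that Option Explicit catches:

Option Explicit

Sub CountRows()
    Dim rowCount As Long
    rowCount = 10
    ' Typo below ("rowCont"): with Option Explicit, this line stops compilation;
    ' without it, VBA silently creates a new, empty Variant and prints nothing.
    Debug.Print rowCont
End Sub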
Editor – Auto List Members
Have you noticed that, when you’re using a particular class in VBA, you can get a helpful Intellisense dropdown which lists the members you can use?
Provided your code doesn’t have any compile errors then that’s what the auto list members option gives you. At any point you can press TAB to auto-complete your construct using the selected member from the list. Handy.
There are very rare occasions when bugs in the VBA IDE mean that intellisense can give you a wrong suggestion. These really are extremely uncommon, so trust intellisense and let it help you write your code.
Editor – Auto Quick Info
Yet another setting which you should leave ticked. This one helps you when you’re typing code by giving you a pop-up box with the parameters you can use when you call a method or property, provided that it knows which class the method or property is a member of.
Editor – Auto Data Tips
This one gives you a useful pop-up box when you’re debugging code. I’ve talked about auto data tips on my blog before so, if you want to know more, have a look here.
Editor – Auto Indent
Auto Indent helps with the fluency of your code writing. Code indentation really warrants a blog post in itself but suffice to say that code is much easier to read if it is indented correctly. This setting means that when you press the Enter key, the next line will automatically have the same alignment as the previous one – which is what you’ll want most of the time.
Editor – Default To Full Module View
If you have this setting ticked, as I do, then you can see all the code in a given code module at the same time. If it's unticked then you can only see one section at a time, but you can navigate through the sections by using the dropdown at the top right hand side of the code pane. Does anyone find it useful to have this unticked? I can't think of a good reason to do so.
Editor – Procedure Separator
This one puts a horizontal line between different sections of your code module to help you distinguish between them when you're in full module view. It's totally up to you whether or not to use it, but I find it helpful.
Editor Format Tab
Other than the margin indicator bar (which speaks for itself and should be ticked so you can see icons such as breakpoints), all the settings on this tab define the text appearance of your code. Ultimately you’re entirely free to go to town with the text settings and I’ve seen all sorts of colours and fonts used by experienced VBA’ers. If I recall correctly, Roger Govier, a long time MS Excel MVP, uses an enlarged Comic Sans font, but perhaps that’s only for presentations? My own recommendation is that you should stick to a monospaced font such as the Courier New default and only change the text colours if you struggle to differentiate between the defaults.
For me, the keyword default blue is too dark and sometimes I have trouble distinguishing it from normal text. I have the foreground on this one set to a brighter, blue colour which matches the keyword text colour in Visual Studio.
But that’s the only customisation I have.
While I’m on the subject of keywords, have you noticed that the IDE sometimes highlights words which are not keywords in blue (or whatever keyword colour you have set)? Let me give you an example:
CStr() is not a keyword in VBA. It is a method from the
VBA.Conversion class. I believe the erroneous VBA keyword highlighting is an offshoot from the Visual Studio 6 IDE.
General – Show ToolTips
Show ToolTips is yet another setting which controls whether or not you get a little, yellow (okay, yours might not be yellow if you’ve changed it in your Windows settings) prompt box. This one is for toolbar controls in the VBA IDE itself:
General – Notify Before State Loss
State loss is another topic which I’ve written about on this blog before.
General – Error Trapping
Error trapping settings could call for an extensive discussion so all I will say here is that you probably want to have it on Break In Class Module. You probably do not want to have it on Break on All Errors unless you are doing some specific debugging.
General – Compile On Demand
If compile on demand is unticked then all of your code is compiled before it is executed. If it is ticked then each bit of code is only compiled when it is needed.
With the setting ticked, I can quite happily step into and run the
foo() method even though
foo2() is illegal and will not compile because
rubbish has not been declared.
With the setting unticked I can't do that, and the compiler complains that I've got undeclared
rubbish in my code:
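Something like this pair of procedures reproduces the behaviour (a sketch; foo, foo2 and rubbish are the names used above, the bodies are mine):

Sub foo()
    ' Runs fine with Compile On Demand ticked, because foo2 is never compiled.
    Debug.Print "Hello from foo"
End Sub

Sub foo2()
    ' Fails to compile under Option Explicit: rubbish is never declared.
    rubbish = 1
End Sub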
I generally have this option ticked.
Wrapping It Up
That covers most of the options. Do you have different settings to me and, if so, what advantages do you get from them?
|
OPCFW_CODE
|
03-09-2012 11:24 AM
I'm using SIM x_receipt_url, which works fine. The payment page shows a button after the payment is complete, clicking it will take the payer back to my site to update his status.
Now, the payer may forget to click the "Go back" button and simply close the browser, which will cause his payment status not updated. So I added x_relay_url, that will do a server to server call (Authorize.Net posts the transaction details to my server's relay URL) after the payment is complete. I just need to make sure the same transaction won't get processed twice.
But if I issue a refund from my merchant account, how does my server receive a notification from Authorize.Net to update member's status accordingly? So, I guessed I should use Silent Post URL. I haven't tried it yet when I post this.
Does anyone know if the Silent Post URL will receive any/all kinds of transaction notification, such as Refund, Complete, Pending, etc.? If not, what is the approach / setting should I do?
If YES, and the Silent Post URL is set, will both relay URL and Silent Post URL receive a call from Authorize.net on a same payment complete transaction?
03-09-2012 12:11 PM
The relay URL is passed the order data along with the order status, so you would update your database there before forwarding the person to your receipt page. Silent Post is not needed except with ARB.
03-09-2012 12:52 PM
Here is my result against Authorize.Net test server.
If I set x_relay_url only, I got the following error after the payment complete.
An error occurred while trying to report this transaction to the merchant. An e-mail has been sent to the merchant informing them of the error. The following is the result of the attempt to charge your credit card.
This transaction has been approved.
It is advisable for you to contact the merchant to verify that you will receive the product or service.
However, the relay URL was successfully called and was able to process the POST fields.
If I set Silent Post URL only, it worked exactly the same as x_relay_url, except that no error was displayed. Great !
Both cases showed that the transaction can be successfully recorded on my server after the payment complete, without having to click "Go back" button for x_receipt_url to process.
I wonder why x_relay_url case displayed error.
But at least I can use the Silent Post URL to serve the purpose. Next, I'll need to verify whether it can receive other transaction notifications, such as Refund.
03-09-2012 01:37 PM
That error generally means it couldn't contact the relay response URL. Did you configure the URL in your control panel? Is the relay response page available when you post to it yourself?
Really shouldn't be using silent post with SIM if you can avoid it.
03-09-2012 04:33 PM
Thank you TJPride for your prompt reply.
I opened an eTicket on this issue. According to their reply, Silent Post URL is what I need, and my test confirmed that. When I click "Void" to refund the payment from merchant account, the silent_post_url received the notification about this transaction (so the member status can be updated immediately on my site).
However, the same verification mechanism against x_MD5_hash (see below) only works for Payment Complete, not for Void. I'm waiting for their explanation.
// ... success on Payment Complete
// ... failure on Void
03-09-2012 10:04 PM - edited 03-09-2012 10:06 PM
Well sure, you don't void things through SIM, you'd use AIM for that. SIM is only for allowing your users to create transactions. And anything run through your control panel wouldn't show up through relay response, just through silent post.
From the documentation:
This transaction type is used to cancel an existing transaction that has a status of Authorized/Pending Capture or Captured/Pending Settlement. Settled transactions cannot be voided (issue a Credit to reverse such charges). The SIM API does not support Void transactions.
You can manually void transactions from the Unsettled Transactions screen of the Merchant Interface. From there, you can use the Group Void filter toward the top of your screen to void multiple transactions at once, or click on the individual Transaction ID of the transaction you would like to void; the next screen will provide a Void button.
If this transaction type is required, it is recommended that the merchant process the transactions by logging on to the Merchant Interface directly, or by using a desktop application that uses AIM.
03-13-2012 09:50 AM
The following link solved my problem. For x_type "void" or "credit", authorize.net uses user login ID (not API LOGIN ID) to construct x_MD5_Hash. Now my Silent Post URL works great.
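In other words, the verification looks roughly like the Python sketch below. The md5(salt + login + x_trans_id + x_amount) construction is the documented legacy scheme; switching login IDs for void/credit is the fix described above, and the field names follow Authorize.Net's silent post parameters.

import hashlib

def verify_silent_post(md5_salt, api_login_id, account_login_id, post):
    # Voids/credits issued from the Merchant Interface are hashed with the
    # account login ID; API-originated transactions use the API login ID.
    login = account_login_id if post["x_type"] in ("void", "credit") else api_login_id
    expected = hashlib.md5(
        (md5_salt + login + post["x_trans_id"] + post["x_amount"]).encode()
    ).hexdigest()
    return expected.lower() == post["x_MD5_Hash"].lower()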
03-13-2012 10:41 AM
Only voids and credits from the terminal, not from the API.
That makes no sense, though. The API is not supposed to require any part of your account login; you'd think that even transactions entered via the terminal would have a hash generated from the API login, so that you aren't forced to keep the account username inside your hosting environment just to verify the MD5 hash. This is a minor security weakness.
Whatever works, I suppose, but this really needs to be changed.
03-13-2012 11:11 AM
I'm using SIM. So "void" and "credit" are issued directly from merchant account, not from API, the silent URL receives the notification. This could be reason they use account login ID instead of API LOGIN ID to construct hash.
AIM will be my next test.
|
OPCFW_CODE
|
Diflucan (fluconazole) 200 mg oral tablet: drug, medication and dosage information.
Diflucan is a prescription triazole antifungal used to treat and prevent systemic and superficial fungal infections, including candidiasis (such as thrush in the mouth and throat and vaginal yeast infections), blastomycosis, coccidioidomycosis and cryptococcosis. It works by inhibiting production of the fungal cell wall. Fluconazole is also prescribed by veterinarians; while generally safe and effective for animals, it can cause side effects in some of them.
Along with its needed effects, a medicine may cause some unwanted effects, and some of the side effects that can occur with fluconazole may not need medical attention; they can fade as your body adjusts to the medicine during treatment. Commonly reported side effects include headache, diarrhea, nausea or upset stomach, dizziness, stomach pain, decreased appetite, skin rash and changes in the way food tastes. In adverse-event reports collected for the drug, the most frequently recorded symptoms included pyrexia (fever), nausea, diarrhoea, pneumonia, pain and vomiting. Tell your doctor if you notice other side effects that you think are caused by this medicine; if you experience a serious side effect, you or your doctor may send a report to the FDA. Longer-term use has also drawn complaints: one user on an 18-month course reported hair loss as one of the first side effects noticed.
Using Diflucan while pregnant may cause birth defects, the FDA warns, so ask your doctor about taking fluconazole during pregnancy and breastfeeding, and about possible interactions with your other medications. Although most people respond well to fluconazole, side effects are possible; the drug received an overall rating of 7 out of 10 stars from 3 user reviews.
|
OPCFW_CODE
|
I’ve been faced with an interesting challenge: teach Functional Programming to last year students at Centrale Nantes.
The students did not have any background in FP, even though they have a strong level in math after two years of preparatory classes.
Most of them have only seen C and Java.
I got a two hour window to make them discover FP.
I’ve found it a bit difficult to design this course. Should I show them just a functional language and let them infer functional programming from it? Should I talk about more abstract notions and stay closer to the essence of FP? I chose to stand on middle ground and split the presentation in two parts: the first would cover fundamental notions of FP, the other would present a functional language and hold a quick problem-solving session with it.
I began by describing different programming paradigms and insisted on the declarative nature of FP. I then proceeded on explaining a few key characteristics of FP (immutability, recursion, limited side effects).
I also chose to mention quickly the Curry-Howard isomorphism to show the deep relation between proofs and programs. I did not have the time to elaborate on type systems, though.
After having explained the fundamental parts, I went on with Haskell. Choosing Haskell over other functional languages was easy: it’s the purest yet practical language I know of. I could also have used SML, but my knowledge of it is poor.
I began by running quickly over Haskell’s syntax. I could have insisted a bit more on pattern matching.
During the rest of the session, I solved small problems on the whiteboard to show students the functional problem-solving mindset. A few of them were quite active and got it quite fast.
I had a few minutes left, so I decided to quickly show them some category theory constructs and how it enabled them to take great advantage from simple abstractions. Thanks to their math background, a Monoid was a familiar concept for many of them and I’ve been able to show how this abstract, mathematical construct was helpful with more concrete problems.
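For flavour, the kind of example that lands well here looks something like this (my reconstruction, not the actual whiteboard code):

import Data.Monoid

-- One abstraction, many instances: the same mconcat combines
-- numbers under addition and strings under concatenation.
total :: Int
total = getSum (mconcat (map Sum [1, 2, 3, 4]))  -- 10

sentence :: String
sentence = mconcat ["FP ", "is ", "fun"]         -- "FP is fun"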
The feedback I got from the students and a teacher who attended the class was generally positive. It was a bit fast and technical, but interesting.
Apart from the talk, I gave a list of must-read books and papers (SICP, Why FP Matters, Learn You a Haskell…) but I’m not sure how many of them have looked at it yet.
Slides (in French), PDF
Slides (in French), markdown
I should do it again next year and I’m looking forward to feedback from functional programmers. If you have suggestions, please let me know; the current structure is far from perfect and I’m willing to improve it.
|
OPCFW_CODE
|
Creating the Display Function (4:56) with Alena Holligan
We can use a custom function to ensure that our items are displayed consistently across all the list pages in our store. In this video, we’ll create that function and talk about where to place the code. We’ll also talk about concatenation, the process of chaining these values together.
We're now ready to write a custom function that we can use to display the list view of an item in both index.php and in catalog.php. It's a good idea to name things in a descriptive way instead of just making it something short and easy to reference in our code. This way, when someone else, even our future selves, reads this code later, it's easy to understand what's happening. Let's get started.
We'll name this function get_item_html. For the parameters, we'll want to pass in the item ID and the interior array for the single item. You can include multiple parameters for a function by separating them with a comma. In this new function, we'll want to build a string with the HTML needed to display the item in our list view. We'll then want the function to pass back the HTML as the return value.
Now, where should we put this function? We can't put it in catalog.php because we also need it to be available in index.php. We'll need to put this in an include file so that both pages can access it. Since this file will contain functions, let's create a new include file named functions.php in our inc folder. We open up our PHP code and add our new function, get_item_html. We include our parameters, ID and item, and open and close our function.
We already have the code that builds the HTML output for a single item in the list. It's over in the catalog.php file right now, but we need to move it over here into our function so that other files can access it. In catalog.php, the code we need is right here, so let's copy this and paste it into our function. This works, but it's better with functions like this to return the HTML to the main code as a piece of text instead of displaying it straight to the screen. This gives our main code a little bit more flexibility when calling the functions. So instead of echo, let's assign this string to a variable. Let's call it output, equals, and the string. Let's clean this up a little bit and then return our output.
Since we moved the code out of catalog.php, we need to replace it with the code that calls this function. In catalog.php we want to echo get_item_html. We want to pass the ID and the item. Now we need to modify our foreach loop to get our item ID. The ID is the key for the single item we have loaded into our variable, so we add ID, equals-greater-than, and then our item, just like we used when we created our array. For each item now, as we loop through them one at a time, we pass the item information into our function and get back the HTML for that item. We then echo out the return value to the screen.
We haven't included our function page yet, so let's do that now. Up at the top we'll include inc/functions.php and save this file. The catalog page hasn't changed much. It's working essentially the same way as it was before, but we now have the code in a function that we can use on other pages.
The page that we want to call this function on is index.php. I'll copy this whole foreach loop and paste it into our index file. We'll replace the list items with the call to our function within our foreach loop. We need to add the include for our data.php and our functions.php. Now let's save this page and take a look at it in our browser. Now when we refresh the page you can see how all 12 items are showing on the homepage, just like the catalog page. However, on the homepage we only want to display four random selections.
Awesome work. We just finished creating our first custom function. Next, we'll check out a built-in PHP function to choose those four random items that we display as suggestions on our home page.
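Reconstructed from the walkthrough, the function ends up looking roughly like this (a sketch: the real markup lives in the course files, and the array keys "img" and "title" are assumptions):

<?php
// inc/functions.php
function get_item_html($id, $item) {
    $output = "<li><a href='details.php?id=" . $id . "'>";
    $output .= "<img src='" . $item["img"] . "' alt='" . $item["title"] . "' />";
    $output .= "<p>View Details</p>";
    $output .= "</a></li>";
    return $output;
}

// catalog.php / index.php
foreach ($catalog as $id => $item) {
    echo get_item_html($id, $item);
}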
|
OPCFW_CODE
|
Do you want to be able to control your Amazon Seller account from a distance? Do you want to be able to automate certain tasks without even touching your computer? If so, then you need to learn how to use the Amazon API. In this post, we will provide you with examples of what the Amazon API can do for you and also walk you through the pricing structure. So whether you’re looking to improve your sales processes or simply gain more control over your Seller account, read on for a comprehensive guide.
What is Amazon API?
Amazon API is a set of programming instructions and tools that allow developers to access the functionality of Amazon’s products and services. This includes everything from retrieving data from Amazon Web Services products to building applications that can interact with AWS services.
There are several different ways to obtain access to Amazon API. The most common way is through an Amazon Developer Account, which offers free access for developing simple applications. Paid subscriptions also offer more features and greater flexibility in how applications are developed. There are also many public libraries that provide access to some or all of the Amazon API for free.
Because the Amazon API is so comprehensive, there are lots of examples available online. Some of the most popular resources include the AWS Documentation website and Stack Overflow, a question-and-answer site for programmers. In addition, many commercial firms offer training on using the Amazon API. Prices for access vary depending on what level of access is required and how much development time is budgeted.
Types of Amazon API Requests
There are many types of Amazon API requests you can make. Here are a few examples:
1. Get an item’s inventory status
To get the current inventory status of an item, use the “Inventory” resource. This request takes the following format:
You can also specify a particular time range to return results for, or you can specify a list of attributes you want to look at (such as condition and SKU). The result set includes an “Inventory” object with information about the item, like its current quantity and whether it is available for purchase.
The price for this request is calculated based on how many items are in your inventory and how long it would take to fulfill an order from that inventory. For more information, see the Amazon SimpleDB Pricing Guide.
2. Get customer reviews for an item
To get customer reviews for an item, use the “Reviews” resource. This request takes the following format:
GET /Reviews/ItemName?itemId=item_id&sortby=rating&orderby=publishDate&limit=10
How to Request an Amazon API Request
There are a number of ways to request an Amazon API request. The simplest way is to use the Amazon Web Services General Request Page. You can also use the Amazon API Gateway console, which provides a more user-friendly interface.
The pricing for Amazon API requests depends on the type of request and the amount of data you want to access. For example, requesting a list of books in a certain category costs less than retrieving all the books in that category. You can find out more about pricing here: https://aws.amazon.com/api/pricing/.
Examples of how to use Amazon API
If you’re looking to get started using Amazon’s API, there are a few resources you can use. The first is the AWS Developer Guide, which provides an overview of the Amazon API and some tips for getting started. You can also check out the Amazon Developer Portal, which has more detailed information on each of the Amazon APIs and how to use them. Finally, if you want to see specific examples of how to use the different APIs, you can browse through the AWS Lambda Samples repository or the Amazon DynamoDB Samples repository.
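As a taste of what those samples look like, here is a minimal sketch using boto3, the official AWS SDK for Python (the table name, region and key below are placeholders, not values from this article):

import boto3

# Connect to DynamoDB and fetch a single item by its key.
dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("Products")
response = table.get_item(Key={"sku": "ABC-123"})
print(response.get("Item"))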
When it comes to pricing, each of the APIs has different pricing policies. The following table provides a summary:
API Name     Free Tier           Standard Tier       Gold Tier
CloudFront   $0/month            $500/month          $2,500/month
DynamoDB     $0/day              $10/day             $100/day
S3           $0.01/GB per month  $0.10/GB per month  $1.00/GB per month
SWFKit       $50 per year        -                   -
What are the Fees for Amazon API Requests?
There are two types of fees you’ll pay when using the Amazon API: programmatic and usage-based. The programmatic fees apply to requests that you initiate yourself, while the usage-based fees apply to your requests that are sent in response to a user request.
The programmatic fee for each API request is determined by how much data you send over the network in that request. For example, a GET request for an album's title and track list is billed per kilobyte: the base rate is $0.50 per kilobyte, but requests under 100 KB are not charged; between 100 KB and 500 MB the rate is $0.75 per kilobyte; between 500 MB and 1 GB it is $1.00 per kilobyte; and above 1 GB it is $1.25 per kilobyte.
The usage-based fees for each API call depend on how many calls you make in one 24-hour period and how much data each call consumes. You can find more information about these usage-based fees on our pricing page .
How to Use the Amazon API
If you are interested in using the Amazon Web Services (AWS) platform to power your next big project, you are in luck. AWS offers a rich set of services that can help you quickly build and scale your application. In this article, we will show you how to use the Amazon API, which provides access to many of the core features of AWS. We will highlight some examples and provide pricing information so that you can get started right away.
2. Getting Started with the Amazon API Reference
To get started with the Amazon API, first make sure that you have registered for an account at amazonaws.com. After registering, sign in to your account and click on “Services” in the left nav bar. From here, select “Amazon API” from the list of services on the left side of the screen. This will open up a window that lists all of the available APIs. If you would like to explore a particular API in more depth, hover your mouse over it and a tooltip will appear that includes information such as documentation links, version numbers, and pricing tiers. To learn more about any one of the APIs listed here, simply click on its title or link to jump straight to its corresponding documentation page on amazonaws.com.
3. Using the Amazon API Platform
Once you have selected an API from the list on the left side of your screen, it is time to start building some code! The first
One way to use the Amazon API is through the AWS Management Console. This allows you to manage your resources in a centralized location. You can also use the AWS CLI to access the Amazon API. These are both free tools and available on most platforms.
There are several pricing models for using the Amazon API. The following table outlines these models:
Pricing Model         Description                     Cost per request
Free                  None, or 1,000 requests/hour    $0.10/hour
Monthly usage limit   500,000 requests per month      $0.05/request
Hourly usage limit    10,000 requests per hour        $0.02/request
Daytime usage limit   5,000 requests per day          $0.01/request
Free Usage Limit: No limit on how many requests you can make within an hour for free; beyond that, the cost is $0.10 per hour.
Monthly Usage Limit: You can make up to 500,000 requests per month at $0.05 per request. Hourly Usage Limit: You can make up to 10,000 requests per hour at $0.02 per request. Daytime Usage Limit: You can make up to 5,000 requests during daytime hours at $0.01 per request.
The table above summarizes all currently available pricing models used by Amazon's APIs.
What are the parameters for each type of Amazon API call?
There are several different types of Amazon API calls that you can make. Here’s a breakdown of each:
1. Get objects
2. Put objects into buckets
3. List objects by tag or category
4. Create, Read, Update, and Delete (CRUD) objects
5. Listen for events on behalf of an account or instance
Get Objects: This call allows you to retrieve data from Amazon in response to specific queries. The available parameters vary depending on the type of object you're querying for, but generally there are two types of parameters: path and filter conditions. For example, let's say you wanted to get all books that have been published in the last year. You could use the following call: https://console.aws.amazon.com/apis/authorize?AWSAccessKeyId=AKIAIOSFODNN7EXAMPLE&Expires=1488946400&SignatureVersion=2002-04-01&SourceType=PRIMARY&Token=ebbcdcde-2211-42a0-b668-fecaec916cae&QueryString=SELECT%20book%20from%20titles%20where%20year%3D2018 Note that the path parameter is "titles/", and the filter condition is "year>=2017". The result would be
In this article, we have outlined different ways in which you can use the Amazon API and supplied some examples of how you might price your products using the API. Use these tips to get started with pricing your products on Amazon and start seeing sales growth!
|
OPCFW_CODE
|
_______________ / \ //\ | Greetings | ////\ _____ | | //////\ /_____\ \ / ======= |[^_/\_]| /-------------- | | _|___@@__|__ +===+/ /// \_\ | |_\ /// HUBOT/\\ |___/\// / \\ \ / +---+ \____/ | | | //| +===+ \// |xx|
How to run:
There are three ways to run Hubot: locally, on Heroku, or using Docker.
Run locally
(Recommended for an easy and fast start)
First, make sure you have installed Node.js, npm and the build dependencies:
% sudo apt-get install nodejs npm
% apt-get install build-essential libssl-dev git-core libexpat1-dev
To run Hubot locally you will need to clone it first:
% mkdir hubot && cd hubot
% git clone https://github.com/AuthEceSoftEng/chatops.git
Install the project's dependencies. Inside the hubot directory run:
% npm install
To use Hubot in Slack with all of its available integrations, you must set all the needed environment variables. Examples of these variables can be found in the file env_example.
Set the environment variables using % export, or create a .env file in your $HOME directory:
% cd $HOME
% touch .env
% pico .env
Add your environment variables in this form: ENV_VAR_NAME=VALUE
To run the bot:
% ./bin/hubot -a slack
You are ready to go
When running local you will not be able to set up any webhooks unless you set a public host url.
One way to do this is by using ngrok tool.
% ngrok http 8080
Hubot listens on port 8080 by default.
Run using Docker
For using docker, first install docker cli.
After that, you have to create a docker image using the follow command:
% sudo docker build https://github.com/AuthEceSoftEng/chatops --tag <image_repo_name>:<image_tag_name>
OR
% git clone https://github.com/AuthEceSoftEng/chatops.git
% sudo docker build . --tag <image_repo_name>:<image_tag_name>
Set the environment variables:
% touch env
% pico env
Add your environment variables in this form: ENV_VAR_NAME=VALUE
Run the image:
% sudo docker run -it --name <container_name> --env-file env <image_repository_name>:<image_tag_name>
You need to set some environment variables to take full advantage of Hubot. Here is a list of all the environment variables. A deeper explanation of most of them can be found in the Integrations Set-up section.
HUBOT_SLACK_TOKEN
HUBOT_HOST_URL=(e.g. <HUBOT_URL>:<PORT> OR https://<heroku_app_name>.herokuapp.com/)
MONGODB_URL=(e.g. for mlab addon on heroku: mongodb://XXXX:XXXX@XXXX.mlab.com:<PORT>/XXXX)
ENCRYPTION_ALGORITHM=aes-256-ctr
ENCRYPTION_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX (a 32 char key of your choice)
JENKINS_URL
APIAI_TOKEN
GITHUB_APP_ID
GITHUB_WEBHOOK_SECRET
GITHUB_APP_CLIENT_ID
GITHUB_APP_CLIENT_SECRET
GITHUB_PEM OR GITHUB_PEM_DIR
HUBOT_TRELLO_KEY
HUBOT_TRELLO_TEAM
HUBOT_TRELLO_OAUTH
HUBOT_EMAIL=(Hubot's email account, e.g. firstname.lastname@example.org)
HUBOT_EMAIL_PASS=(Hubot's email account password)
STANDUPS_EMAIL=(a preconfigured email for sending standup reports)
To use hubot, you will need a hubot slack integration. Follow the link above and click “Add Configuration”. Slack will ask you to designate a username for your bot.
Once the username is provided, Slack will create an account on your team with that username and assign it an API token. It is very important that you keep this API token a secret, so do not check it into your git repository. This statement exists for all integration tokens. You’ll also have the option to customize your bot’s icon, first and last name, what it does, and so forth.
Set the slack api token to env variable: HUBOT_SLACK_TOKEN.
To use GitHub integration you must first register a new GitHub App in you account or organization.
After a GitHub App is registered, you'll need to generate a private key. To generate a private key, click on your app's name, then click the Generate private key button. Open the .pem file in any text editor and paste the content in the relevant field in environment variables.
- You also need the app ID and the OAuth credentials, specifically the Client ID and Client Secret, which you can find at the bottom of the GitHub App's page.
- Configure the App as below.
To use the Trello integration you must provide your Trello API Key, OAuth Secret and Team name. You can find them all on Trello's API page, as shown in the screenshots below.
To use jenkins you must provide your jenkins' url. For build notifications you must install Jenkins Notification Plugin
After that, configure the Plugin:
- Add hubot's endpoint to jenkins jobs: (see Screenshot)
- Configure it to be JSON, HTTP and either "All events", "Job started" or "Job finalized". "Job completed" will be ignored.
- To send to a room: http://<hubot_host>:<port>/hubot/jenkins-notifications?room=<room>
- To send to a user: http://<hubot_host>:<port>/hubot/jenkins-notifications?user=<user>
- Add log lines if you want to
Dialogflow (ex API.AI)
- Create a new dialogflow account
- Create a new Agent
- Download dialogflow.zip file from the repository root
- Import the data as shown in the screenshot below
- Set APIAI_TOKEN=(Client access token). For the token, check the screenshot below
Standup Meetings - Daily Reports
Hubot provides a decent way for users to post reports to a common Slack channel. These reports can be sent in CSV format via email. For that reason, Hubot must:
- Own an email account (tested on outlook and gmail)
- (optional) Know a preconfigured email address to send the reports.
Set these env variables:
HUBOT_EMAIL=(Hubot's email account, e.g. email@example.com)
HUBOT_EMAIL_PASS=(Hubot's email account password)
STANDUPS_EMAIL=(a preconfigured email for sending standup reports)
First, read the full Hubot Framework Scripting Documentation
Add functionality to existing integrations
Every integration's functionality is developed in a separate script file. The file name is something like
<integration_name>-integration.js (e.g. github-integration.js).
These files contain all the necessary listeners for human-to-bot commands. You can add more listeners for new API requests. For those requests you will need the user's credentials, which can be returned as an Object using the function
getCredentails(userid) where userid is the user's slack id.
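A new listener might look roughly like this (a sketch: robot.respond comes from the Hubot framework, getCredentails is the helper named above, and the regex and replies are illustrative):

module.exports = (robot) => {
  robot.respond(/list my repos/i, (res) => {
    // Look up the Slack user's stored credentials.
    const creds = getCredentails(res.message.user.id);
    if (!creds) {
      return res.reply("I don't have credentials for you yet.");
    }
    // ... call the integration's API with creds and reply with the result ...
    res.reply("Fetching your repositories...");
  });
};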
Add new integrations
You can add new integrations by creating and developing new script files.
In cases where integrations use Slack members' credentials, a good idea is to store them in MongoDB, in robot.brain, or in cache (see brain.js and cache.js). Be aware that in this Hubot implementation, robot.brain functions were avoided due to "request time out" bugs. For this reason, a cache module was used instead.
You can encrypt/decrypt tokens using the functions provided by the encryption.js script.
In this implementation, tokens are encrypted and stored in MongoDB, but they are also kept in their original (decrypted) form in the cache so they can be fetched faster. To that end, they are saved to the cache every time a new token is created and every time the bot (re)starts. This procedure takes place in the brain.js script, using functions from the cache.js script.
Bug Reporting & QnA
Feel free to report a new bug or ask a question using issues.
|
OPCFW_CODE
|
A product launch can seem insurmountable when planning begins. Between whiteboarding a plan and launching it, there’s a lot of work to accomplish. Programming teams typically utilize sprints.
Sprints allow developers to remain nimble, focus on the most important tasks, restructure work to match altering corporate priorities, and maintain work balance among teams.
Sprint best practices
Effective sprint roadmaps require more than breaking work into two-week segments. Set reasonable goals and budget for unplanned tasks.
- Planning for the known: Prepare for each sprint by establishing stretch objectives and allocating workloads fairly among team members. This helps reduce burnout and keeps your techs interested in the process. Only the highest priority tickets from your team should be included in sprints so that you can accommodate unforeseen work. It will be the responsibility of your product manager or scrum master to review all of your tickets, determine which ones are most important, and then direct the team to complete them in that order.
- Planning for the unknown: While allocating work, you must also allot time for the unforeseen tasks that will inevitably arise. Although it can be challenging to plan for the unexpected, as a general guideline, planned work should only occupy roughly 80% of engineers' sprint bandwidth, leaving the remaining 20% free for unplanned work.
- Measure your past success: Utilize information from prior sprints as you try to strike this equilibrium. The ability to track the progress your teams made in finishing planned work is only possible with data-driven insights, but they also provide you the chance to see how often unplanned work cropped up during sprints.
- Productive Retrospectives: The retro phase of development sprints is crucial. They offer a chance to assess if the team fulfilled its obligations, what worked, what required improvement, and how those improvements could be put into practice. The formality of retrospectives and stand-ups also consumes active coding time, so they must be effective and useful. Soon, I’ll say more about that.
- Make check-ins productive: You can ensure that your one-on-one meetings are fruitful by utilizing an engineering insights platform such as Pluralsight Flow. With a check-in report and the individual contributor's player card, team leaders and employees can have constructive conversations about work-life balance, the types of tasks employees prefer to take on, how they are contributing to the overall product build, and other related topics. Use one-on-one meetings to assess individual contributions and discuss any barriers that arose during the sprint; since sprint movement lets individuals measure the value of their contributions, these meetings are a natural place to review it.
- Cross-team Collaboration: When developing new products and features, a large number of teams will always be involved, working either synchronously or asynchronously, and they will most likely be located in different physical places. Utilize Pluralsight Flow to identify areas in which chances for collaboration arise, and then build out sprint alignments in those areas where they are both available and valuable. The advantages of working together go much beyond simply completing successful sprints. Together, strategic sprint planning and Pluralsight Flow foster a healthy culture within a team, generate opportunities for organic upskilling, and ensure that engineers remain engaged through the distribution of a wide variety of tasks.
Pluralsight Flow and Sprint Movement
The new Sprint movement report in Pluralsight Flow provides a thorough yet succinct review of recent sprints, helping teams prepare for future sprints.
We all know that you can't manage what you can't measure, and monitored data doesn't help if you can't easily sort through code changes, pull requests, and sprint outcomes. Using the Sprint movement report, you'll know what was committed before a sprint, what work was added during the sprint, and your overall completion rate. It also shows who is initiating and finishing new requests, allowing you to better collaborate with teams and people to streamline procedures and prepare for future work.
The Sprint movement report lets you compare your sprints to past iterations to ensure you’re using methods that will help you achieve your goals. Unexpected issues and new tasks may take precedence during sprints, despite managers’ expectations. Sprint movement helps managers identify future priorities so they can plan for more work and meet obligations. Monitoring where new work is coming from and what it is helps improve production pipelines.
Individual contributors can better understand their own process, and managers and executive teams can set realistic expectations about what products and services will be provided and when. This report shows how your preparation led to success.
By reducing the time you spend trying to figure out what is causing bottlenecks in your sprints, you increase the time spent on removing said bottlenecks. Your teams can spend more time coding, and the entire organization can better grasp what you can and cannot commit to.
|
OPCFW_CODE
|
drivers: conflicting demands between DEVICE_MMIO API and GPIO drivers
Description:
I was planning to refactor all device drivers for the integrated peripherals of the Xilinx Zynq-7000 / ZynqMP(UltraScale) families so that the DEVICE_MMIO API is used for mapping the device's respective register space with proper memory protection (MMU in the Zynq, MPU in the UltraScale) rather than having a DT_NODE_HAS_STATUS(..., okay)-dependent static entry for each device in the SoC init code (comp. current contents of soc/arm/xilinx_zynq7000/xc7zxxxs/soc.c), as suggested in the discussion in #46463.
While the DEVICE_MMIO API works as expected in general, with the Xilinx PS UART driver being the first test subject, it now turns out that there's a conflict which prevents the supported GPIO drivers (Xilinx PS GPIO, Xilinx AXI GPIO) from being refactored:
Each GPIO driver has in its config data struct a member of the type struct gpio_driver_config and in its run-time data struct a member of the type struct gpio_driver_data. Regarding those, the GPIO documentation states:
"This structure is common to all GPIO drivers and is expected to be the first element in the object pointed to by the config field in the device structure." (slightly different wording for the run-time data struct, but the same applies there as well).
Each device driver using the DEVICE_MMIO_MAP API contains the macro DEVICE_MMIO_ROM in its config data struct and the macro DEVICE_MMIO_RAM in its run-time data struct, both of which resolve to the instantiation of a struct. Regarding those, the Single MMIO region macros documentation states: "This must be the first member of the config struct." ("data struct" for DEVICE_MMIO_RAM accordingly).
Following both of these demands at the same time is obviously not possible.
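Concretely, the two requirements collide like this (a minimal sketch; the driver name is invented for illustration):

struct my_gpio_config {
	struct gpio_driver_config common; /* GPIO API: must be the first member */
	DEVICE_MMIO_ROM;                  /* DEVICE_MMIO API: must also be first */
};

struct my_gpio_data {
	struct gpio_driver_data common;   /* the same clash on the run-time side */
	DEVICE_MMIO_RAM;
};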
Impact:
Unless one of those two demands is more of a "should" than a "must" condition, it is currently not possible to modify GPIO drivers to use the DEVICE_MMIO API.
Expected behavior:
GPIO drivers can use the DEVICE_MMIO API.
Environment:
Current sources from main.
Since both have to be at the top of the config/data struct, they cannot co-exist in their current form. I can think of a couple of ways forward:
1. Create another set of device MMIO APIs which take a dedicated struct instead of the device struct. This means the driver will need to be aware of whether the MMIO struct resides in config or data. This is going to cause some confusion, as there would be two sets of device MMIO APIs.
2. Modify the current device MMIO API to not require the struct at the top of config/data, and simply take in a dedicated struct. This requires modifying all current drivers using the API, and thus will require testing them to ensure no breakage (probably not all of them, but at least more than a few).
3. Create a GPIO-specific device MMIO API in the GPIO namespace. This would co-exist with the current GPIO-mandated config/data struct layout, and would only be used for device drivers requiring this.
4. Write custom code in GPIO to make it work. This is a localized version of number 3 above, as it does not introduce functionality at the API level, but the driver itself still needs to implement it.
Tagging @mnkp as GPIO maintainer.
Tagging myself, to look at this after I'm back from my vacation.
Following both of these demands at the same time is obviously not possible.
You can use named MMIO regions. See e.g. the following GPIO drivers, which use MMIO:
drivers/gpio/gpio_bcm2711.c
drivers/gpio/gpio_davinci.c
drivers/gpio/gpio_dw.c
drivers/gpio/gpio_intel.c
drivers/gpio/gpio_psoc6.c
drivers/gpio/gpio_rcar.c
drivers/gpio/gpio_sedi.c
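For illustration, here is a minimal sketch of the named-region pattern those drivers use (identifiers are invented; note the named-region API also expects the driver to provide DEV_CFG/DEV_DATA-style accessors):

struct my_gpio_config {
	struct gpio_driver_config common;  /* stays first, as the GPIO API requires */
	DEVICE_MMIO_NAMED_ROM(reg_base);   /* a named region may sit anywhere */
};

struct my_gpio_data {
	struct gpio_driver_data common;
	DEVICE_MMIO_NAMED_RAM(reg_base);
};

#define DEV_CFG(dev)  ((const struct my_gpio_config *)((dev)->config))
#define DEV_DATA(dev) ((struct my_gpio_data *)((dev)->data))

static int my_gpio_init(const struct device *dev)
{
	/* map the named register region with MMU/MPU protection */
	DEVICE_MMIO_NAMED_MAP(dev, reg_base, K_MEM_CACHE_NONE);
	return 0;
}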
|
GITHUB_ARCHIVE
|
Version 3.2.1 (July 6, 2016)
Version 3.2.0 (February 22, 2016)
- New: Display Info Boxes only on hover.
- New: Change the color of Points of Interest in Roadmap Customization.
- Fixed: bug causing the Roadmap Customization to not work.
- Fixed: Added to the Map sorting group.
- Other minor improvements and bug fixes.
Version 3.1.0 (January 9, 2015)
- RW6 Ready
- New Icons
- Address option fixed
- Info Boxes can be turned on/off
- Multiple locations option fixed
- Reduced file size and increased overall speed
Version 3.0.0 (September 24, 2014)
- Support for multiple locations
- Markers will remain centered on window resize
- Info box option to display content with the map marker
- Tap option to be on the bottom
- Color control for forest areas
- A choice to input an address
Version 2.1.9 (September 9, 2014)
- ID tags changed to classes to avoid duplications on multiple uses
- Overflows set for instances where size percentages are set to zero
Version 2.1.7 (August 18, 2014)
- Added bind event for loading inside cleanTabs
Version 2.1.6 (May 16, 2014)
- Added Markup to assist in overriding theme defaults for title tags
Version 2.1.5 (March 10, 2014)
- Created new color controls for Title, Title (hover) and Expanded Title
Version 2.1.2 (March 10, 2014)
- Added ability to add a Google API Key (in advanced options) - this is not needed for most users - for more information see Google's Documentation here
- Changed colors for Arterial & Local roads - color will no longer affect text/road names.
Version 2.0.0 (January 27, 2014)
- Custom Controls for Roadmap Map Layout
- Custom Title (only if Custom Controls is turned on - appears in top right controls as the name of the map style and allows users to switch back to default)
- Animate Icon (bounce) - nice animation effect for map markers when selected
- Custom Map Colors (available if Custom Controls is turned on)
- Highways, Arterial Roads (main roads), Local Roads (i.e. neighborhoods)
- Transit Lines & Transit Stations
- Land/Landscapes (Man Made & Natural)
- Turn on/off Map Controls: pan, zoom, map type, scale, streetview, overview map
- Advanced - API Address option (maps.google.com by default, maps-api-ssl.google.com for SSL addresses)
Version 1.0.4 (January 20, 2014)
- Added control of header tags for titles
- Added control of header tags for expanded titles (top of large description section)
Version 1.0.2 (December 18, 2013)
- Removed 'enable' command on Active Title color control
- Changed a few default values
|
OPCFW_CODE
|
using System.Collections.Generic;
using System.IO;
using System.Linq;
using UnityEngine;
public enum DoorType {
HotNoisySafe = 0,
HotNoisy = 1,
HotSafe = 2,
Hot = 3,
NoisySafe = 4,
Noisy = 5,
Safe = 6,
None = 7,
}
public class RoomMover : MonoBehaviour
{
public List<Transform> allrooms = new List<Transform>();
public GameObject treasurePrefab;
public GameObject stonePrefab;
public GameObject firePrefab;
public GameObject soundPrefab;
public GameObject sound;
public string fileName = "Assets/probabilities.txt";
public List<DoorType> allDoors;
// Start is called before the first frame update
void Start()
{
int iterator = 0;
// iterate over the direct children of this object; "room" avoids
// shadowing the inherited Component.transform property
foreach (Transform room in this.GetComponentInChildren<Transform>())
{
allrooms.Add(room);
room.rotation = Quaternion.Euler(0, 18 * iterator, 0);
float angle = (18 * iterator) * Mathf.Deg2Rad;
room.position = new Vector3(-Mathf.Cos(angle) * 6, 0.25f, Mathf.Sin(angle) * 6);
iterator++;
}
//call setup
ParseDoors(fileName);
}
private void Update()
{
Debug.Log(sound.GetComponent<AudioSource>().isPlaying);
Debug.Log(sound.GetComponent<AudioSource>().isActiveAndEnabled);
}
//Parses probabilities from text file and sends to statistics
private void ParseDoors(string fileName) {
Dictionary<DoorType, double> prob = new Dictionary<DoorType, double>();
StreamReader reader = new StreamReader(fileName);
//skip first line
string line = reader.ReadLine();
line = reader.ReadLine();
while (line != null) {
// strip slashes, spaces and tabs before parsing
line = line.Replace("/", string.Empty)
.Replace(" ", string.Empty)
.Replace("\t", string.Empty);
string type = line.Substring(0, 3);
double value = 0;
if (!double.TryParse(line.Substring(3), out value))
{
Debug.LogError("Invalid Parsing: Unexpected Probability value format");
}
//fill dictionary based on doortype
switch (type) {
case "YYY":
prob.Add(DoorType.HotNoisySafe, value);
break;
case "YYN":
prob.Add(DoorType.HotNoisy, value);
break;
case "YNY":
prob.Add(DoorType.HotSafe, value);
break;
case "YNN":
prob.Add(DoorType.Hot, value);
break;
case "NYY":
prob.Add(DoorType.NoisySafe, value);
break;
case "NYN":
prob.Add(DoorType.Noisy, value);
break;
case "NNY":
prob.Add(DoorType.Safe, value);
break;
case "NNN":
prob.Add(DoorType.None, value);
break;
default:
Debug.LogError("Invalid Parsing: Unexpected Probability Type format");
break;
}
line = reader.ReadLine();
}
reader.Close();
allDoors = Statistics(prob);
allDoors = allDoors.OrderBy(x => Random.value).ToList();
SetupDoors(allDoors);
}
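/* Example probabilities.txt layout the parser above expects. The first line
   is skipped as a header, slashes and whitespace are stripped, the first
   three characters select the door type and the rest is the probability.
   These particular values are assumptions, not from the original project:

   Heat/Noise/Safe probability
   YYY 0.05
   YYN 0.10
   YNY 0.10
   YNN 0.15
   NYY 0.10
   NYN 0.15
   NNY 0.15
   NNN 0.20
*/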
//sets up scene using input door list
private void SetupDoors(List<DoorType> allDoors) {
int iterator = 0;
foreach (DoorType door in allDoors) {
Transform doorTransform = allrooms[iterator];
//check for heat
if (door == DoorType.HotNoisySafe ||
door == DoorType.HotNoisy ||
door == DoorType.HotSafe ||
door == DoorType.Hot)
{
Instantiate(firePrefab, doorTransform.Find("door_wall_A3 (1)").position, doorTransform.rotation);
}
//check for noise
if (door == DoorType.HotNoisySafe ||
door == DoorType.HotNoisy ||
door == DoorType.NoisySafe ||
door == DoorType.Noisy)
{
sound = Instantiate(soundPrefab, doorTransform.position, Quaternion.identity);
}
//check for safety
if (door == DoorType.HotNoisySafe ||
door == DoorType.NoisySafe ||
door == DoorType.HotSafe ||
door == DoorType.Safe)
{
Instantiate(treasurePrefab, doorTransform.Find("floor_A (23)").position, doorTransform.rotation);
}
//not safe
else {
Instantiate(stonePrefab, doorTransform.Find("floor_A (23)").position, doorTransform.rotation);
}
iterator++;
}
}
//Returns list of door types calculated from probability dictionary
private List<DoorType> Statistics(Dictionary<DoorType, double> prob, double probSum = 1.0)
{
List<DoorType> doors = new List<DoorType>();
//temp prob ensures the original prob dictionary stays intact
Dictionary<DoorType, double> tempProb = new Dictionary<DoorType, double>(prob);
double doorPercentRatio = probSum / (double)allrooms.Count;
//loop through all probabilities above door fraction threshold
foreach (KeyValuePair<DoorType, double> entry in prob)
{
double probfract = entry.Value;
while (probfract >= doorPercentRatio)
{
probfract -= doorPercentRatio;
doors.Add(entry.Key);
}
//round to two decimal places: repeated floating-point subtraction accumulates error
probfract = Mathf.Round(((float)probfract) * 100f) / 100f;
//update tempProb for use in flushing
tempProb[entry.Key] = (double)probfract;
}
//flush remainder door probabilities until door count is full
while (doors.Count < allrooms.Count)
{
//update probabilities
DoorType toAdd = RandWeightedItem(tempProb);
doors.Add(toAdd);
//zero for the future
tempProb[toAdd] = 0.0;
}
//handle probability not summing to expected 1
if (doors.Count > allrooms.Count) {
Debug.LogWarning("Probability did not sum to 1");
double actualSum = 0.0;
//determine actual sum
foreach (KeyValuePair<DoorType, double> entry in prob)
{
actualSum += entry.Value;
}
return Statistics(prob, actualSum);
}
return doors;
}
//Returns weighted random from a dict list
private DoorType RandWeightedItem(Dictionary<DoorType, double> prob)
{
double accumulatedWeight = 0.0;
//accumulate weight
foreach (KeyValuePair<DoorType, double> entry in prob)
{
if (entry.Value > 0.0)
{
accumulatedWeight += entry.Value;
}
}
//pick
double selectedValue = Random.Range(0f, (float)accumulatedWeight);
//find
foreach (KeyValuePair<DoorType, double> entry in prob)
{
if (entry.Value > 0.0)
{
selectedValue -= entry.Value;
if (selectedValue <= 0.0)
{
return entry.Key;
}
}
}
Debug.LogError("Accumulated Range was unexpected.");
return DoorType.None;
}
}
|
STACK_EDU
|
Complete the HDMI Timer we built in Part 1, getting the software running and installing the display.
Now that we have the PCB hardware sorted in Part 1, we’re going to tackle the rest of the setup, including the code.
You’ll need to load up the hdmi_gaming_timer.ino sketch into the Arduino IDE, and compile onto your ATmega328P. If you don’t have an ATmega programmer, you can simply use an Arduino UNO itself, then remove the chip and install it into your PCB. Take care with the orientation, as it is easily reversed.
Once this is done, you can install the screen. It’s a simple shield, which is accommodated for in our PCB.
A touch screen panel was a logical choice for the interface, given the need for user input in a variety of different ways, and of course the visual output.
With everything firmly installed, and the tests performed on the PCB in Part 1, you should now be able to power up the unit.
With power applied, you should see the screen illuminate and boot. If you don’t, immediately disconnect power and revisit the testing phase of the instructions in Part 1.
The sketch (Arduino code) for this project is relatively long at 1100 lines, but it is written to be easily understood and modified if desired. If you look into the sketch, you'll see that a lot of it is based around manipulating the GUI. It still fits easily within the challengingly small storage limits of an Arduino, or in this case the 328P microcontroller, which is at the heart of our beloved Arduino.
While some people may want to use this timer as a simple reminder/time restriction device, it’s expected that a key use is parents limiting a child’s screen time. For this reason, we have implemented a simple PIN to lock out adjustments to the settings. Kids are very savvy and would quickly work out how to suspend or reset the timer. We’ve also catered for a restart and other attempted “workarounds”.
Here are a few of our concepts from within the code, which you can easily review within the sketch.
Running the Screen
We designed this project with the Jaycar XC4630 LCD panel. Other LCD panels may work; the equivalent Altronics screen does work, however minor code alterations are required to get it working due to differences in the driver chip.
This LCD panel has a good resolution of 320 x 240 and some level of hardware graphics acceleration, which means it's relatively easy to draw filled or unfilled shapes like circles, rectangles, etc. One font is included, and text can be drawn in a number of sizes.
Displaying Text On Screen
Due to the nature of how text is drawn on the LCD, code similar to the following is used:
tft.fillRect(FromX, FromY, ToX, ToY, FILL_COLOUR);
tft.print("Text to print");
Writing say ‘123’ over ‘456’ does not erase the previous 456. It just makes a mess. To show new text where text has previously been written it is necessary to draw a filled rectangle over it.
You then choose the font size. Size 2 is good for general text. Size 3 is double the size of size 2. Size 26 will just about fill the screen with 2 characters.
The next step is to choose a text colour. Several have been defined. It is important to set the colour with each text print as the colour may have been set to something unexpected in other parts of your sketch.
Next, position your cursor to the x and y co-ordinates of the position where you’d like your text to appear. Finally, you can print the text.
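Putting those steps together, the pattern looks something like this (the co-ordinates and colours here are placeholders, and the parameter order follows the common Adafruit_GFX-style convention of x, y, width, height, colour):

tft.fillRect(20, 40, 120, 30, BLACK); // blank out whatever was drawn here before
tft.setTextSize(2);                   // size 2 suits general text
tft.setTextColor(WHITE);              // set the colour on every print
tft.setCursor(20, 40);                // position the cursor
tft.print("123");                     // finally, print the new text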
Responding to the Touchscreen
The display is 320 pixels wide and 240 high, but the touch is detected as 1024 by 1024 and has some dead area at the edges. We do a little magic to map the active area to the screen size.
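That "magic" is essentially the Arduino map() function, along these lines (the calibration constants here are illustrative, not the values from our sketch):

int x = map(rawX, 120, 900, 0, 319); // raw touch range mapped to screen pixels
int y = map(rawY, 120, 900, 0, 239); // trims the dead area at the edges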
We found it easy enough to test for touching an area by using absolute co-ordinates. For example, while reading the initial PIN code it is easy to determine the button being touched (if any), by determining if a touch is in a particular column. Column 1 being 1, 4 and 7, column 2 being 2, 5 and 8 etc, then doing similar to determine the row.
Continuous touching of the screen will generate a stream of touch events. A simple ‘debounce’ delay eliminates the problem of multiple touch detections for a brief touch, but if the user continues to touch the same area, multiple touches may be registered.
Storing a PIN
The Personal Identification Number (PIN) is stored in EEPROM, which is a small amount of non-volatile (contents remain after power off) storage. Consider that a brand new Arduino, or 328P as we have used, will have nothing valid in its EEPROM.
We have to be able to determine this so if a new device is detected, we initialise the EEPROM by writing a “magic number” into it. If we see the magic number in future we know that the stored PIN is valid. The EEPROM has a large but finite number of times its contents can be changed. In our application, it won’t be changed nearly often enough to damage it.
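In code, the idea is roughly this (the address and magic value are illustrative only):

#include <EEPROM.h>

const byte MAGIC = 0x42; // any fixed marker value will do

void initPinStorage() {
  if (EEPROM.read(0) != MAGIC) { // fresh chip: nothing valid stored yet
    EEPROM.write(0, MAGIC);      // stamp it so we recognise it next boot
    // ...then write a default PIN into the following bytes...
  }
}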
The Timer Function
This project is designed around allowing x number of minutes of access to a gaming console etc. While it would be possible to count whole minutes, and indeed, the display gives the impression that that is what is happening, we have to consider monitoring the touch ‘controls’ too.
To do this we actually count seconds with a check of the controls being touched at each count. The final minute is handled a little differently. We have made provision for a piezo buzzer to be added to ‘sound’ the last 10 seconds. As these are not looked upon favourably in the office, we omitted it in the final build.
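In outline, the counting loop looks something like this (the variable and helper names here are ours, not from the sketch):

while (secondsRemaining > 0) {
  unsigned long t0 = millis();
  while (millis() - t0 < 1000UL) {
    checkTouchControls();          // stay responsive during each one-second tick
  }
  secondsRemaining--;
  if (secondsRemaining % 60 == 0) {
    updateMinuteDisplay(secondsRemaining / 60); // the display shows whole minutes
  }
}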
Enclosing this device took a little consideration and planning. HDMI can be susceptible to EMF and other interference fairly easily, due to its low voltage and very high frequencies.
Our first choice would be to enclose this build in a full die-cast aluminium case to help shield everything. However, our touchscreen makes this near impossible to create a clean cut-out for the screen.
Therefore, we’ve developed a 3D printable custom case, as usual, with the addition of aluminium foil to assist in creating some EMF shielding in the case. It’s arguably not as effective as a metal case, but the ease of construction provides some huge benefits.
You can find the case files in the digital resources. The case itself is fairly straightforward, providing mounting for everything.
In order to mount the PCB, you must first remove the screen so you can get to the mounting holes. Support posts are built into the case itself, to screw in directly with 4 x M3 screws.
Before you mount the PCB, optionally line the case with aluminium foil. Using craft glue and a paintbrush isn’t a bad way to achieve this (trying to use one single large piece is virtually impossible).
It takes a little jiggling to get everything lined up perfectly, as we’ve kept it fairly snug. This was done intentionally to avoid issues with HDMI cables with bulky plugs. While the connector is very well standardised, some manufacturers of cables use larger-than-standard grips.
USING THE TIMER
While most of the user interface is fairly straightforward, we thought it would be prudent to go through a few of the features quickly.
Most functions also require the entering of a PIN in order to disallow unauthorised “extension” of gameplay.
You’ll be prompted to enter a PIN. If this is your first boot, you’ll be asked to set your preferred PIN. If you have previously set it, enter your PIN. You’ll then see the Home Screen.
The menu provides you with options for 1 hour and 2 hour timers, no timer (i.e. - permanently on), setting your own custom timer duration, and change PIN. Basically, all the functions we need.
The 1hr and 2hr timers are one-touch options to set the timer. You can easily amend their duration within the code if you have a different preference. To activate, simply touch the preferred timer.
This permanent-on function is great for when you want standard operation of your HDMI device, whether it’s your computer or TV, for an extended period.
This custom timer function provides controls to set whatever time limit is desired, in 15 minute increments. It's entirely possible to modify the 15 minute increment to 5 or even 1 minute increments, however, we decided 15 minutes was a good general place to start.
To set the time, simply use the up / down controls for the hours and minutes. There is no need for seconds here, so the provision has not been made.
Entirely self-explanatory, allowing you to update the PIN any time a timer is not already active.
WHERE TO FROM HERE?
Our PCB includes a full set of headers for unrestricted access to the GPIO (however you can’t use GPIO that’s already in use unless you modify the code, of course). This allows you to expand the timer however you’d like.
Perhaps, you can automatically power-down all of your devices when the timer expires (using safe methods such as interfacing with remote control powerboards, of course - playing with mains power is not something to be taken lightly). Maybe you’d like to expand it with a WiFi shield and send push notifications so you know when the timer has run out, even if you’re not near the device. The opportunities are endless!
|
OPCFW_CODE
|
hey, can you help me how to get the transfer function from this equations
@mohamed This article and the equations provided do not give the transfer functions / IK directly.
Thanks for this tutorial, it's very helpful.
I just want to ask if we could get in contact, because the robot arm is my graduation project
and I need to know more about it.
@Adobe: If you have questions related to project design and challenges, we recommend that you post about it on the RobotShop forum or on our project-oriented website, Let’s Make Robots.
How do you calculate the waist rotation moment about the base? Does the motor torque need to overcome friction (MU * Fn)? And the inertia about the axis perpendicular to rotation?
@Vikram If the arm is balanced and frictionless (which it’s not unfortunately), the torque needed in the base would only have to overcome inertia and be able to stop the arm (decelerate). You can see if you can come up with equations, but friction and inertia at various configurations (full extension and max payload with maximum rotational acceleration) would likely need to be considered. We’ve found based on experience that the torque is not too high and in the AL5D, only a Hitec 422 is needed.
Hi Mr Coleman Benson, the above article about calculating forces and torques is good. It is for a 6-axis SCARA. Can you please explain the same for parallel kinematic robots? That would be great. Thank you.
@shrishail bannigidad Although this article does not go into inverse kinematics, you can find a bit towards the end of this guide: https://www.robotshop.com/media/files/pdf/robotshop-multi-purpose-robotic-arm-guide.pdf
Hi Coleman, I am working on a design for an arm that will have support already in place for the weight of the sample and arm, and will only require the actuator to rotate the sample to the desired position; i.e. the only degree of freedom is rotation about the Z axis. We will use a low-friction/rolling item to support the weight of the sample and arm. When calculating the torque required by the actuator, is it correct to assume that we only need to take the inertia of the arm and sample into account and disregard the weight, since it is supported? Thank you
@Thomas Fjeld So long as the mechanism is mechanically sound at the base, the torque required will need to counteract / overcome the inertia (acceleration / deceleration).
Hello Mr Coleman. First of all i want to thank you for this great tutorial.
My concern is about the “safety factor”, I’m working on the design of an arm and the first approach doesn’t require to go through the dynamics (inertia and acceleration) so the use of that “safety factor” works just fine. My question is if there’s some bibliography or references you could recommend to support the use of one or other factor (I mean numerically).
@Ronnie Good question; unfortunately we do not currently have any links to suggest. You will likely need to obtain a university physics or robotics book covering kinematics and dynamics.
May I know the website where the parts are available?
Thanks for your helpful tutorial.
The last relation says: T(holding) + T(acceleration) = Iα
I think that it should be: T(acceleration) = Iα
@Fadi Masalmah You are indeed correct.
I found it really helpful. I want a little bit more to complete my calculation. If the robotic arm stops suddenly (say, from an angular velocity of x to zero), how can I calculate the jerk on the joint of the link? Please give me a solution.
In the case of a cylindrical coordinate robot: if I place the motor for linear up-and-down motion with the base motor (i.e. the motor for rotation), should the torque of the two motors be the same or not?
@hasnid The calculations / equations would be quite different.
@Mayur Jadhav The “jerk” is a result of inertia. The inertia depends on the position of the arm, the weight of each component etc., as well as the deceleration curve as the arm stops. There’s a lot involved.
|
OPCFW_CODE
|
import string
main=string.ascii_lowercase
def conversion(plain_text,key):
index=0
cipher_text=""
# convert into lower case
plain_text=plain_text.lower()
key=key.lower()
# For generating key, the given keyword is repeated
# in a circular manner until it matches the length of
# the plain text.
for c in plain_text:
if c in main:
# to get the number corresponding to the alphabet
off=ord(key[index])-ord('a')
# implementing algo logic here
encrypt_num=(ord(c)-ord('a')+off)%26
encrypt=chr(encrypt_num+ord('a'))
# adding into cipher text to get the encrypted message
cipher_text+=encrypt
# for cyclic rotation in generating key from keyword
index=(index+1)%len(key)
# leave spaces and any other special
# characters unchanged in their positions
else:
cipher_text+=c
print("plain text: ",plain_text)
print("cipher text: ",cipher_text)
plain_text=input("Enter the message: ")
key=input("Enter the key: ")
# calling function
conversion(plain_text,key)
'''
----------OUTPUT----------
Enter the message: hi there my name is abhiram
Enter the key: awesome world
plain text: hi there my name is abhiram
cipher text: he xzsdi zu brxh io etvuvni
>>>
'''
'''
Took help from:
1. https://www.youtube.com/watch?v=FAbkLSktxWQ
2. https://www.youtube.com/watch?v=zLbZM_MA3qE&t=575s
'''
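# A possible decryption counterpart (a sketch, not part of the original
# submission): subtract the key offset instead of adding it.
def decryption(cipher_text, key):
    index = 0
    plain_text = ""
    key = key.lower()
    for c in cipher_text.lower():
        if c in main:
            off = ord(key[index]) - ord('a')
            # reverse the shift applied during encryption
            decrypt_num = (ord(c) - ord('a') - off) % 26
            plain_text += chr(decrypt_num + ord('a'))
            # advance the key only for encrypted characters, as above
            index = (index + 1) % len(key)
        else:
            plain_text += c
    return plain_text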
|
STACK_EDU
|
I am using a DataSet which contains some records. I want to query the DataSet to get the next 20 records each time. I don't want to do it through a loop; is there any way I can get the next 20 records every time?
I have a table for customer forecast with the following fields: customer, year, week, quantity. Now I need to write the SP to retrieve the records based on the 'from' week/year and the 'to' week/year, like this: from the 45th week of 2009 to the 25th week of 2010. How can I write the query?
I have a basic CMS which I'm adding to a site to display some downloads organised into groups. I have two tables, one for the groups and one for the downloads. I'm using a repeater to display the groups, with another repeater inside to display the children. This works perfectly.
However, some of my download groups may not have any downloads related to them, and I'd like to handle this by filtering the groups so that only those with related download record(s) are shown.
I'm trying to do this with the query which populates the top repeater based on some ideas I read but I must be going wrong with the syntax.
Here is what I'm using to try and only select downloads groups which have downloads linked to them by the download group ID.
Can anyone offer any thoughts on how I should construct the query to perform this?
I know I'm missing something here but I can't figure out what it is. I've got a query joining three tables: accounts, payments, and a table linking the two (there is an M:M relationship). I'm trying to pull a list of all accounts in the account table that have a payment that needs to be resequenced, and also the maximum payment priority if there are any payments that haven't been fully paid. If all payments HAVE been fully paid, I want to return a 0.
It's that last bit of logic that I can't get right. If I include it in the where clause, I get only the accounts that have a payment that hasn't been fully paid. If I take it out, I get all the accounts, but I get the highest payment priority whether or not the payment has been fully met.
Here is the query... how do I include the where clause criteria but still include all accounts?
I have a question regarding GridView. I have done almost all the work I needed, but there is one thing left: I have set the grid's page size to 4, and when I show records on the web page I want to place an iframe after the first two records, then two more records, like I have shown in the example image. I have used LINQ with C#. How can I do it? If you want, I can show you my code.
I have a SQL Data Source that displays records after going through the Query Builder and selecting "Test Query", yet when I save it, go back to design mode, and select View in Browser, I get nothing but a blank screen.
1. I have a GridView on my page and it uses sqldatasource with parameterized query. What I want to do is, on page load (where nothing has been selected so no parameter supplied), I want it to query everything (something like SELECT * FROM [this_table]) but since my SelectCommand is something like
SELECT * FROM [this_table] WHERE [this_column] = @someParameters AND [that_column] = @someParameters.
Can I play around with default values to achieve something like that, and if so, how? Right now, when the page loads, it doesn't show anything (no GridView).
2. On my page, I have several search fields (username, gender, address, and more) and one single search button. That means no single control enables auto postback. What I am trying to accomplish is building a dynamic query
(if username specified -> SELECT * FROM [this_table] WHERE [username] LIKE @username).
If both username and gender are specified (SELECT * FROM [this_table] WHERE [username] LIKE @username AND [gender] = @gender), and you know the rest. How can I do this using GridView and SqlDataSource? To my knowledge, I can only specify one SELECT statement in a SqlDataSource.
By right-clicking on my database in Server Explorer I created a query. But where is this query stored? I can't find it. I would expect there to be a "Queries" folder like the "Tables" folder, but this isn't the case.
I am facing a big problem with a simple LINQ query. I am using EF 4.0.
I am trying to take all the records from a table using a LINQ query:
var result = context.tablename.Select(x => x);
This results in fewer rows than the normal SQL query, which is SELECT * FROM tablename;
This table has more than 5 tables as child objects (foreign key relations: one-to-one, one-to-many, etc.).
The result variable after executing that LINQ statement returns records with all child object values without an Include statement. I don't know if this is the default behavior of EF 4.0. I tried this statement in LINQPad too, but no luck. The interesting thing is that if I do a join of the same table with another table, it works the same as a SQL inner join and the count is the same, but I don't know why it acts differently with that table only. Is it doing inner joins with all child tables before returning all the records of that parent table?
Dim MediaQuery = From m In dB.DOWNLOADS _
                 Where m.ID = id _
                 Select
which returns a record from the database. One of the fields in the record is an FK to another table. I need to read this value to be able to set a dropdown based on the ID, but I can't work out how to access it. For a standard field I can just do the following: txtTitle.Text = MediaQuery.FirstOrDefault().TITLE
However, with the foreign key it doesn't work like that. I've tried drpGroup.SelectedIndex = MediaQuery.FirstOrDefault().DOWNLOAD_GROUPS.ID, where DOWNLOAD_GROUPS is the FK field, but it returns "Object reference not set to an instance of an object". If you simply want to read some values from a single DB record in the Entity Framework and one is a foreign key, how should you go about getting the value?
SELECT TOP 5 * FROM MyTableName WHERE ID=@ID ORDER BY NEWID()
As you may know, it gets 5 records at random. I use it with a DataSet and bind the ListView to this DataSet, and in ItemDataBound I want to do some programming (like making some controls visible or invisible), for which I need the item index.
But the question is: how can I get the ItemIndex of a ListView when the SQL query selects records randomly?
|
OPCFW_CODE
|
Birmingham City v Middlesbrough H2H
Below you will find all of the head to head matches and results played between Birmingham City and Middlesbrough
| 19/12/20 | Birmingham City v Middlesbrough | Championship | 1-4 |
| 21/01/20 | Middlesbrough v Birmingham City | Championship | 1-1 |
| 04/10/19 | Birmingham City v Middlesbrough | Championship | 2-1 |
| 12/01/19 | Birmingham City v Middlesbrough | Championship | 1-2 |
| 11/08/18 | Middlesbrough v Birmingham City | Championship | 1-0 |
| 06/03/18 | Birmingham City v Middlesbrough | Championship | 0-1 |
| 22/11/17 | Middlesbrough v Birmingham City | Championship | 2-0 |
| 29/04/16 | Birmingham City v Middlesbrough | Championship | 2-2 |
| 12/12/15 | Middlesbrough v Birmingham City | Championship | 0-0 |
| 18/02/15 | Birmingham City v Middlesbrough | Championship | 1-1 |
| 09/08/14 | Middlesbrough v Birmingham City | Championship | 2-0 |
| 08/04/14 | Middlesbrough v Birmingham City | Championship | 3-1 |
| 07/12/13 | Birmingham City v Middlesbrough | Championship | 2-2 |
| 16/03/13 | Middlesbrough v Birmingham City | Championship | 0-1 |
| 30/11/12 | Birmingham City v Middlesbrough | Championship | 3-2 |
| 17/03/12 | Birmingham City v Middlesbrough | Championship | 3-0 |
| 21/08/11 | Middlesbrough v Birmingham City | Championship | 3-1 |
| 26/12/07 | Birmingham City v Middlesbrough | Premier League | 3-0 |
| 01/09/07 | Middlesbrough v Birmingham City | Premier League | 2-0 |
| 04/03/06 | Middlesbrough v Birmingham City | Premier League | 1-0 |
| 23/08/05 | Birmingham City v Middlesbrough | Premier League | 0-3 |
Birmingham City vs Middlesbrough H2H Results
In the last 20 head to head games Birmingham City has won 5 times, Middlesbrough has won 10 times and on 5 occasions it has ended in a draw.
Birmingham City vs Middlesbrough H2H Goals
The last 20 times Birmingham City have played Middlesbrough H2H there have been on average 2.6 goals scored per game. The highest scoring match had 5 goals and the lowest scoring match 0 goals. Birmingham City have scored an average of 1.1 goals per game and Middlesbrough has scored 1.5 goals per game.
Birmingham City vs Middlesbrough Over 2.5 goals
In the last 20 games between Birmingham City vs Middlesbrough, there has been over 2.5 goals in 50% of matches and under 2.5 goals 50% of the time.
Birmingham City vs Middlesbrough Over 3.5 goals
In the last 20 games between Birmingham City vs Middlesbrough, there has been over 3.5 goals in 30% of matches and under 3.5 goals 70% of the time.
Birmingham City vs Middlesbrough Over 1.5 goals
In the last 20 games between Birmingham City vs Middlesbrough, there has been over 1.5 goals in 75% of matches and under 1.5 goals 25% of the time.
Birmingham City vs Middlesbrough Over 0.5 goals
In the last 20 games between Birmingham City vs Middlesbrough, there has been over 0.5 goals in 95% of matches and under 0.5 goals 5% of the time.
|
OPCFW_CODE
|
Amiga Virus Encyclopedia
Name : SCA
Aliases : No Aliases
Type : Bootblock
Size : 1024 bytes
Clones : Too many clones to list them all here.
Symptoms : No Symptoms
Discovered : 15 November 1987, Germany
Way to infect: Boot infection
Rating : Harmless
Kickstarts : 1.2
Damage : Overwrites boot
Removal : Kickstart 1.2 & 1.3 : VT-Schutz v3.17
Kickstart all others: VirusZ III v1.04ß or higher, and also Xvs.library v33.47 or higher
Comments : The SCA-Virus is a very simple one. It copies itself
to $7EC00 and patches the Cool-Vector to stay
resident in memory. After a reset the virus uses the
DoIO()-Vector to infect other disks.
BORING, isn't it? So what is so special about this
really boring virus??? Yes, you have guessed it!
The SCA-Virus was the FIRST real AMIGA-virus. It was
created by a group of SWISS Crackers called the:
(S)wiss (C)racking (A)ssociation.
Some old cool dudes from the scene will recognize them
and their productions. After this virus was created,
more and more good coders misused their skills to write
more and more viruses; Byte Bandit and Byte Warrior
are also little "legends" besides the SCA.
As you can imagine, the SCA-Virus was spread very
widely by swappers, especially at the so-called
"Copy-Parties" of the sceners. At that time nobody
knew much about the topic "Amiga-Viruses", so
nearly every Amiga-User was infected.
The same dudes who coded the SCA virus later gave
out a special SCA-Killer. Yes, a virus killer meant
to kill a virus made by the very same people who
created the SCA virus in the first place.
The SCA-Virus and its virus killer were probably
coded by a man called 'CHRIS'.
I don't know exactly whether that is a codename or
his real name. In a very old TRISTAR-Demo this virus
was discussed by the just-formed TRISTAR crew. In the
greetings list you can read:
"SCA (Hey CHRIS! we are thinking that your SCA-Virus
is great. But we also think that we are the only one."
Or something like that. Make your own picture of the situation.
The virus displays a GFX message on every 15th infection:
"Something wonderful has happened"
"Your AMIGA is alive !!!"
", and even better"
"Some of you disks are infected"
"by a VIRUS !!!"
"Another masterpiece of"
"The Mega-Mighty SCA !!"
(Animation of the SCA Original Bootblock Virus)
Info: The 'SCA virus' is the first computer virus created for the Commodore
Amiga and one of the first to gain public notoriety. It first appeared in November 1987.
Test made by : Safe Hex International
|
OPCFW_CODE
|
|A day in the life of a CyberMasochist||- by Jonathan Reason|
I still fondly remember my first "real" computer and how proud of it I was. A real, True Blue, IBM AT with 256k RAM and a 10 Meg hard drive. I also remember how scared I was of delving into the secret world of DOS 3. Still, with a little help from my friends and a subsequent purchase of an all singing, all dancing 386, resplendent with MS Windows 3.0, I thought I had really arrived. I soon started playing and rapidly formed an opinion, which I still hold as a truism:
You have to do something pretty bloody stupid to do any lasting harm to a computer.
Hmmm. If only I'd stopped there.
Soon I was doing things at the dear old command prompt that, only months before, I'd watched others perform in awe; such miracles they had seemed.
Later--via Windows 3.1 and WfWg, and a Pentium 100 with more RAM than I used to have hard disk--I got bored with Windows and so I started looking around for something else to take to pieces and put back together (and then count the number of screws left over). And so I stumbled across Warp, which I now love dearly.
But soon (and I won't pretend I didn't know the moment would arrive some day) I found I could delve into the depths of OS/2's config.sys and twiddle and tweak without trepidation. And whilst I still had a lot to learn about Warp, I was generally unafraid of it.
Then, and I rue the day, I saw a posting about FixPaks, a phrase which still sends shivers down my spine. But being basically brave (read: foolhardy) and remembering my trusty truism, I downloaded FixPak 16 and was ready to set about the install when I read about the problems with it and decided to wait another week until FixPak 17 was available. In retrospect I think this was an omen. One which I duly ignored.
Cut to: One week later. Machine in front of me. Cup of coffee. Cigarette. Logged into Hobbes FTP site. I start the download.
Having spied the self-extracting versions in .exe format I decided these were the ones to get, rather than the zipped ones. Mistake number one. The first three downloaded well, but slowly. Numbers four onwards steadfastly refused to come when called. Another cup of coffee. Another cigarette. OK, so download the others in zipped format. No problem. Mistake number two.
Ten floppy disks worth of FixPak17 in one hand and a fresh mug of coffee in the other, I did a chkdsk/f on all drives before I started anything. All went well (another omen?). And so to the install--or so I thought. I read the read.me file (you see, some people do) and decided not to print the 72 pages. Mistake number three.
I also decided, for some unknown reason, to install from the command line. Perhaps I felt a little safer there. All went well, but rather alarmingly the screen did some very strange things. The message boxes didn't line up with the text that I presume was meant to go in them. I started thinking, "Here comes another reinstall," but that holds no fear for me now. After the first three disks the computer came to a grinding halt. An internal processor error.
But all was not lost. At this stage I could still boot to the old setup and retrieve the first three disks again, this time in the same zipped format as the others.
More coffee. Re-read the read.me. Start again. This time, I decide to install from the PM install application that comes with the FixPak. Perhaps this is safer after all. In go the kicker disks. In go the rest of the disks, one after the other. The text that was all over the screen before is now lined up in neat little boxes. I even managed to drink a cup of coffee, make a phone call and watch the install progress at the same time, proving I can multitask as well as Warp. I felt as if another milestone had been reached. But my feeling of well being was short-lived. For some reason disk 7 was not called for and so not used.
At last a little box popped up informing me that the install was successful and I should now reboot. I did. It lied.
Yes the boot time was quick, much quicker than it had been before. Unfortunately it only booted to a blank desktop and an error message (SYS 2070 WPPRINT -> PMSPL.616. Type help SYS 182 for more information). The problem was, I couldn't easily get to a command line to type "help SYS 182".
Ah. Well there's always the Maintenance desktop, isn't there? No, not this time there wasn't. Just the same error message.
Never mind, simply reboot to a command line, yes? No. My computer seems to dislike Alt-F1 boots. (I subsequently found out why--I was trying to press Alt and F1 at exactly the same time. You have to hold down Alt, then press F1, of course. I told you I still had a lot to learn. So much so, that during software upgrades the first thing I do is go to the Desktop Settings notebook and check the "Display recovery choices at each system start" checkbox.)
Eventually I managed to get to a command line using rescue disks, but typing HELP SYS182 shed little, if any light on my problem. Time for more coffee. Oh well, never mind, just reinstall and start again I thought. Wrong again. Out came the trusty Warp CD and the install went smoothly (I'm getting almost as well practiced at installing Warp as I had become at reinstalling Windows). Reboot. Same blank desktop. Same error message. Time to buy shares in a coffee company.
My OS/2 system is installed on drive D: which is an HPFS drive in order to keep it separate from DOS (Drive C:--FAT) and all the data and programs, etc. on drives E: and F:. So all I needed to do was to reformat drive D: and re-re-install Warp. Simple right? How can one averagely intelligent man be so consistently wrong?
By now it was about 5 o'clock in the morning, but I would not be beaten. Not by a mere machine. After reformatting Drive D: and another install, I sat staring again at the same old familiar error message, lonely on a blank desktop, sipping coffee nervously. Had my truism been proved a falsehood?
I discovered some time later that FixPak17 writes a file to the root directory of drive C: which tells subsequent installs the status of the current system level (even if you don't boot Warp from C:). Of course whatever I did to drive D: had no effect on this file.
It was at this time that I was glad I didn't completely wipe DOS and Windows. After a brief bit of culture shock trying to get back into the swing of the antiquated ways Windows tries to do things (I was right clicking on "objects"--Doh!), the members of the OS/2 Support forum on CompuServe suggested a fix. It appears that I was not alone with my problems. Apparently there is a DLL on disk 4 of the FixPak which the install routine does not update. By now I was willing to try anything, so I did. I unpacked the relevant library and, holding my breath, rebooted. And, wonder of wonders, into life sprang Warp, well and truly FixPak'd!
Do I like it? Yes. It's quicker. The SIQ fix doesn't seem to have solved the problem entirely, but it is an improvement. Was it worth the hassle? Well...
By the way, last week I had reason to reinstall Warp yet again because my PMMERGE.DLL seemed to be causing odd and erratic crashes, so I had to reinstall FixPak17. No problem. Straight in, no hangs, no copying odd files manually, it even used disk 7 this time. Hmmmm. Who ever said these things were just machines? This one definitely has a mind of its own. Perhaps it wanted its own cup of coffee. Java, anyone?
Copyright © 1996 - Falcon Networking
|
OPCFW_CODE
|
Workshop: Branching and Interacting Particle Systems
Johannes Gutenberg-Universität Mainz
27th February - 2nd March 2023
Branching random walks are a natural model for various systems of population dynamics and genetics. A key question is understanding the influence of interactions, caused for example by selection, competition, or random environments, on spatially-dependent branching mechanisms. It is also important to have mathematical tools to model one or more coexisting spatial populations competing for resources. This workshop aims at bringing together younger colleagues and experts in the field in order to provide a stimulating discussion environment and share the most recent developments on these challenging research topics.
This workshop is part of the DFG Priority Programme SPP 2265: Random Geometric Systems funded by the Deutsche Forschungsgemeinschaft.
Viktor Bezborodov (University of Göttingen)
Elisabetta Candellero (Università degli Studi Roma Tre)
Jiří Černý (University of Basel)
Piotr Dyszewski (Wrocław University)
Alison Etheridge (University of Oxford)
Matthias Hammer (TU Berlin)
Pascal Maillard (Université Toulouse III)
Bastien Mallein (Université Sorbonne Paris Nord)
Pascal Oswald (University of Basel/JGU Mainz)
Sarah Penington (University of Bath)
Matthew Roberts (University of Bath)
Emmanuel Schertzer (University of Vienna)
Alexandre Stauffer (University of Bath)
Zsófia Talyigás (University of Vienna)
Terence Tsui (University of Oxford)
The workshop is open to all members of the SPP Priority Programme 2265 as well as external participants.
Registration is mandatory and is open until January 31st 2023. Please send an email to Ms. Sabine Muth (stochastik(at)uni-mainz.de) stating whether you are a member of the SPP 2265 or an external participant.
For members of the SPP Priority Programme 2265 all costs are covered. We have limited funds to cover costs for external participants. In case you require financial support, please indicate so in your registration email.
Invited speakers do not need to register.
Institut für Mathematik
The workshop will take place in Room 04-432, located on the 4th floor.
Two other rooms in the same corridor have been booked for discussion.
A map of the campus and directions can be found here.
Matthias Birkner (JGU Mainz), birkner(at)mathematik.uni-mainz.de
Alice Callegaro (JGU Mainz), alice.callegaro(at)tum.de
Nina Gantert (TU Munich), nina.gantert(at)tum.de
|
OPCFW_CODE
|
Use wp_enqueue_style based on user option in widget
I'm currently developing a WordPress widget for my website. This widget will pull a user's latest posts from my website and show them on their blog.
In the widget there is an option for the user to use their own CSS or my CSS for the widget.
I use this code in my widget and it works perfectly, but this code will always load the CSS.
add_action( 'widgets_init', 'load_my_widgets' );
function load_my_widgets() {
register_widget( 'My_Widget' );
wp_register_style( 'my_widget_css', 'http://mydomain.com/css/my-widget.css' );
wp_enqueue_style( 'my_widget_css' );
}
The problem is: how can I enable the CSS based on the user option? I tried something like this, but it's not working:
function widget( $args, $instance ) {
$own_css = isset( $instance['own_css'] ) ? true : false;
if ( ! $own_css ) {
wp_enqueue_style( 'my_widget_css' );
}
}
This is sort of sloppy as the style will still be enqueued regardless.
In the code to display your widget, change the CSS selectors based on whether or not the user selected own style:
<?php
$prefix = $instance['own_style'] ? 'ownstyle_' : 'pluginstyle_';
//then....
?>
<div id="<?php echo $prefix; ?>selector"> ...</div> etc
A user would be able to use both a widget with custom style and one with default style this way. Enqueue your style separately in its own function.
add_action( 'wp_print_scripts', 'wpse26241_enqueue' );
function wpse26241_enqueue()
{
if( is_admin() ) return;
if( is_active_widget( 'My_Widget' ) )
wp_enqueue_style( 'my_style' );
}
This code surely works, but the CSS will be loaded regardless of the user option.
Correct. As I said: sloppy. It's an interesting question, however. Makes you wonder why it's not easier to do.
Anyway, thanks for the idea :) I need to do additional research, or maybe log it in Trac?
Finally I managed to load the CSS based on the user option.
How To:
Extend my widget class from WP_Widget.
In my widget constructor I just call the code below and I can get the user options.
$settings = $this->get_settings();
Based on the user option, I can use wp_enqueue_style.
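For completeness, here is a minimal sketch of that approach (the class, ID and option names are illustrative, and it assumes the stylesheet was registered earlier as in the original code): the constructor inspects every saved instance via get_settings() and only hooks the enqueue when at least one instance has not opted into its own CSS.

class My_Widget extends WP_Widget {
    public function __construct() {
        parent::__construct( 'my_widget', 'My Widget' );
        // get_settings() returns the saved options of all widget instances
        foreach ( (array) $this->get_settings() as $instance ) {
            if ( is_array( $instance ) && empty( $instance['own_css'] ) ) {
                // at least one instance wants the plugin stylesheet
                add_action( 'wp_enqueue_scripts', function() {
                    wp_enqueue_style( 'my_widget_css' );
                } );
                break;
            }
        }
    }
}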
You need to pull the option out of the Widget and make it a Plugin option.
For example, if your register_setting() option array name is plugin_myplugin_options (where "myplugin" is your Plugin slug), you could add a Boolean option, use_plugin_css, that if true will use the Plugin's CSS, and if false will use the user's CSS.
Thus, change your widgets_init callback to something like this:
function load_my_widgets() {
register_widget( 'My_Widget' );
$myplugin_options = get_option( 'plugin_myplugin_options' );
if ( $myplugin_options['use_plugin_css'] ) {
wp_register_style( 'my_widget_css', 'http://mydomain.com/css/my-widget.css' );
wp_enqueue_style( 'my_widget_css' );
}
}
p.s. if this is a publicly released Plugin, you need to namespace all of your function calls. e.g. change function load_my_widgets() {} to function myplugin_load_widgets() {}, where "myplugin" is your Plugin slug. The same goes for registering a custom style or script; change my_widget_css to myplugin_css, where "myplugin" is your Plugin slug.
Widget options are stored in the wp_options table, correct? I'm guessing we can't really grab widget options because WordPress likely names them differently for each install. Is there some function related to grabbing widget options that can be used outside the widget class? Seems like that would be a handy thing for situations like this: a quick function that found all the active widgets of a given class and returned an array of their options.
As far as I can tell, the option needs to be global. What happens if multiple instances of the same Widget are called on the same page, one of which specifies using the Plugin CSS, and the other of which specifies using custom CSS?
You would just need to make sure to change the css selectors of each, which gives me an idea (to be posted in an answer below).
You can certainly go that route for options that are local to each Widget instance; however, you're going to run into problems when attempting to call wp_enqueue_style() (which enqueues a stylesheet in the document head rather than in the local Widget instance), based on a Widget-instance setting.
Is it possible to add an option in the wp_options table and store the checkbox value, so I can grab the option like @Chip Bennett's code above?
|
STACK_EXCHANGE
|
Summary: This blog explains how to perform SharePoint to SharePoint Online migration. So, if you want to perform this type of migration, keep reading the blog.
Alex: Dear support team, you just did it. Your software worked really well. Congratulations. I will definitely recommend this SharePoint to SharePoint Online Migration tool to everyone. My issue was that I was searching for a reliable tool to migrate SharePoint files to a SharePoint Online account. I also wanted to transfer a document library from SharePoint to another account. I used many tools to accomplish this task, but none of them worked well for me.
With your suggested tool, I have converted unlimited SharePoint files to online SharePoint. Thank you so much for your valuable support.
What could be more precious than getting good reviews from clients? The answer is "Nothing". We know that many users are looking for a tool to perform SharePoint to Online SharePoint migration, so we would like to explain the step-by-step process to migrate SharePoint files to SharePoint Online without any hassle.
1. Download the software on your machine.
2. Install and run the SharePoint migration tool and click on the Start button.
3. Enter the source login details such as Site URL, User Name, and Password, and then click on the Login button.
4. Now, check the categories and click on the Next button.
5. The tool provides multiple filters. Apply the filters as per your requirements and click on the Next button.
6. Now, enter the credentials of the destination login and click on the Login button.
7. Lastly, click on the Export button to begin the SharePoint to Online SharePoint migration.
The tool also provides a free demo version. It is suggested to download the demo edition first; with this freeware, you can easily check the complete working of the software. It allows you to export 100 list records and 500 MB of documents from SharePoint to SharePoint, so users can easily try it without investing in the tool.
It is an advanced solution to migrate site contents from SharePoint to SharePoint. With this tool, one can easily get a safe SharePoint to SharePoint Online migration with meta properties. Also, the tool allows you to convert SharePoint 2013 to SharePoint Online in just a few moments.
1. The tool supports SharePoint to Online SharePoint migration without any hassle.
2. The SharePoint Migration tool allows you to migrate the contents of a SharePoint Team Site.
3. Move a SharePoint document library to another SharePoint Online account.
4. Supports migrating Microsoft SharePoint document sets and document folders without any trouble.
5. Also, it offers multiple filters, such as a date filter. One can apply filters for selective migration.
6. The SharePoint to SharePoint tool is compatible with every Windows platform.
7. Allows you to migrate selective SharePoint files to another SharePoint account.
8. Also, it provides an incremental content migration facility to migrate only new data from the team site.
9. The tool provides a complete progress report of the SharePoint to SharePoint Online migration.
Ques 1. Does the tool allow you to migrate On-Premise SharePoint data?
Ans 1. No, it does not support migrating On-Premise SharePoint to another account.
Ques 2. How to migrate SharePoint 2010 to SharePoint 2016? Does this software support that?
Ans 2. With this tool, you can easily migrate SharePoint 2010 to a SharePoint 2016 account.
Ques 3. Can I use this software to move SharePoint list content to SharePoint Online?
Ans 3. Yes, the tool supports migrating SharePoint list content to another SharePoint Online account.
Ques 4. Is this application compatible with the Windows 10 edition?
Ans 4. Yes, the SharePoint Online Migration tool is compatible with every Windows platform.
Now, users can easily migrate SharePoint to SharePoint Online without losing a bit of information. We have explained the complete process to accomplish this task. The SharePoint migration tool is free of any risk and provides a safe, secure, and accurate migration.
The suggested tool can easily resolve the following queries:
How to migrate SharePoint to SharePoint online ?
Convert SharePoint 2013 to SharePoint online
How to migrate SharePoint 2013 to SharePoint online
How to migrate SharePoint 2010 to SharePoint online
|
OPCFW_CODE
|
Can't display table with a secret property
Seems like the new ext::auth added support for secret fields, which do not work in the UI?
A secret modifier is used in the extension's DDL.
It got me wondering if it's usable in classical SDL, but no luck:
type User {
required password: str {
secret := true;
readonly := true;
}
}
The readonly property is valid, but secret throws an error:
error: 'secret' is not a valid field
┌─ C:\fakepath\default.esdl:10:9
│
10 │ secret := true;
│ ^^^^^^^^^^^^^^^ error
edgedb error: cannot proceed until .esdl files are fixed
I speculated that I needed to log into the UI as some root user, but apparently the edgedb ui command authenticates with the credentials from the config linked via edgedb info.
As a sidenote, i was wondering why there are the 2 configs since the auth extensions acts like a singleton. This is not only in the UI, but also in REPL. When I attempt to update the 'table' I get back error, that the view cannot be updated. It got me wondering whether one of the objects is table and the other some view of it, like a clone/mirror. But no, neither of those are updatable.
The values in the auth config can be updated via the configure command like
configure current database set ext::auth::AuthConfig::allowed_redirect_urls := {"http://localhost:4000/", "http://localhost:3000/"}
I was wondering why there are two configs since the auth extension acts like a singleton
I'm not entirely sure, but I think it's some quirk of how the config system works. Other configs can have different levels like 'instance' and 'database', that get layered on each other to produce a 'final' config, which I think shows up in introspection as another config object (eg. try running select cfg::AbstractConfig {*}. I get three objects returned with types of cfg::Config, cfg::DatabaseConfig and cfg::InstanceConfig). So for auth config, where you can only define it at the database level, I think the two objects represent database and 'final' config (which end up being the same).
When I attempt to update the 'table' I get back error, that the view cannot be updated. It got me wondering whether one of the objects is table and the other some view of it, like a clone/mirror. But no, neither of those are updatable.
As far as I know all config is readonly in normal edgeql queries, and can only be updated with the configure ... commands. I think because config is special in that it's not just data in the database, but represents state that needs special handling to sync with the EdgeDB server and postgres backend (eg. listen addresses, query working memory, etc.)
I was wondering why there are two configs since the auth extension acts like a singleton
I'm not entirely sure, but I think it's some quirk of how the config system works. Other configs can have different levels like 'instance' and 'database', that get layered on each other to produce a 'final' config, which I think shows up in introspection as another config object (eg. try running select cfg::AbstractConfig {*}. I get three objects returned with types of cfg::Config, cfg::DatabaseConfig and cfg::InstanceConfig). So for auth config, where you can only define it at the database level, I think the two objects represent database and 'final' config (which end up being the same).
Yeah, this is all correct. One AuthConfig object represents just the database level config, and one represents the final config (which is database+session). Currently extension configs can't be configured at the session level, and so this is redundant, but that will change soon, and we didn't want people to write code that expected only one object for extension configuration and have it break. (An example of something that we really want to have be session configurable for extensions is the probes config for pgvector. Possibly it won't actually make sense for any of auth to be session configurable, so maybe we should make it so there is only one AuthConfig object, though that would require some adjusting of the schema because ExtensionConfig has a single link to the owning AbstractConfig.)
The best way to fetch an extension config object is something like
select cfg::Config.extensions[is ext::auth::AuthConfig]
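For instance, adding a shape lets you read specific settings back (a small sketch using the allowed_redirect_urls property configured earlier in this thread):
select cfg::Config.extensions[is ext::auth::AuthConfig] {
  allowed_redirect_urls
};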
We're adding the secret field to introspection, which will allow this to be fixed in the right way, but I think we should hack around it since I don't think that fix can be out until 5.0
|
GITHUB_ARCHIVE
|
AID 201406 Agenda
This is the agenda of the AID out-of-cycle meeting to be held in Amsterdam, the Netherlands.
The scope of the meeting is the use of HL7/IHE/DICOM/OpenEHR/terminology and other healthIT standards, and the exchange of best practices and architectures related to the implementation of these standards in software applications.
- Date: June 3rd, 2014. 09:00-17:30
- Language: All Presentations will be in English
- Location: Furore offices, Amsterdam, @ Bos en Lommerplein 280, 1055 RW Amsterdam (Directions: in Dutch, and/or Google Maps)
- Registration: There is no registration fee. The size of the meeting room is limited however, so we urge you to please sign up by adding your name to the tail end of this page.
- Administrative agenda items
- Approval of the agenda
- User Groups
- Rene: at some point AID may be transformed into a full-blown HL7 User Group (HUG). At this point in time the idea is that all HL7 members (of HL7 International, or of any affiliate) will be able to participate for free. Non-members pay a fee of $100 a year, and get to attend all meetings. The $100 (EUR 70) was defined with a UG with just one annual face-to-face meeting in mind. On the assumption that we (as AID) can nearly always use a sponsored meeting room (e.g. provided by a software vendor), the meeting costs are limited to catering costs (EUR 20-45, depending on the quality thereof).
- Discussion: to define a reasonable fee structure; what to do with any surpluses/losses.
- Implementation aspects of my own PHR (Dirk Jan van der Pol, Quli, NL)
- Dirk Jan, akin to e-Patient Dave, once suffered from cancer - and decided to develop his own PHR to collect and manage his own health data. Dirk Jan, unlike Dave, has a background in software development and therefore has the perfect background and motivation to develop a PHR.
- Dirk Jan will share his background, and will subsequently focus on the challenges of trying to incorporate data from all sorts of sources in his own PHR - which meant he had to create clinical models, and resorted to a tagging mechanism in order to retrieve and associate relevant bits of data.
- An OpenEHR RESTful Archetyped API (Jan-Marc Verlinden, Ralph van Etten, Medvision360, NL)
- Jan-Marc and Ralph are working on a RESTful API (see http://www.medrecord.nl/medrecord-json-api/) for MEDRecord, an open source OpenEHR based application. They will show how the API is related to the back end EHR, as well as how one could create a FHIR frontend on top of this API.
- Using an OpenEHR platform in a HL7/IHE interoperability environment (Tomaž Gornik or Borut Fabjan, Marand)
- The Think!EHR Platform is a big-data, high-performance solution designed to store, manage, query, retrieve and exchange structured electronic health record data based on the latest release of openEHR specifications.
- Tomaž and/or Borut will discuss some of the architectural aspects of Think!EHR and the impact of embracing OpenEHR, as well as the support for IHE/HL7 (v2,CDA,FHIR) as an interoperability interface.
- A world of models - model transformations (Michael van der Zel, UMCG, NL)
- Michael has created a number of tools that help in the transformation of models between OpenEHR, CIMI, DCM, ART-DECOR, EA UML, and others. This includes the use of the MAX metadata format (kind of an XMI format) for expressing the models.
- Other presentations
Registration is free, but the size of the room is limited. Please sign up by adding your name to the list below:
- Rene Spronk, Ringholm, NL
- Ewout Kramer, Furore, NL
- Dirk Jan van der Pol, Quli, NL
- Adri Burggraaff, HL7 NL, (afternoon)
- Michael van der Zel, UMCG, NL
- Jan-Marc Verlinden, Medvision360, NL
- Ralph van Etten, Medvision360, NL
- Tomaž Gornik or Borut Fabjan, Marand, Slovenia
- Henk Enting, MGRID, NL
|
OPCFW_CODE
|
This week at VMworld 2020, VMware announced new innovations to help customers build, run, manage, connect, and protect any app on any cloud.
According to the company, over 15 million enterprise workloads run on VMware in the cloud.
"VMware has reached a major milestone in its plan to unlock the power of every cloud for every business. We now support customers' application strategies by delivering VMware-based services on every major public cloud provider and hundreds of VMware Cloud Verified partners worldwide," VMware chief operating officer of products and cloud services Raghu Raghuram said.
"As we drive our strategy forward, we are expanding our portfolio of cloud infrastructure, operations, and security services to enable faster application migration and modernisation, and better business agility, and resiliency."
In addition to building on its 2019 Tanzu Kubernetes play, distributed workforce security updates that further integrate its Carbon Black acquisition, and Project Monterey, its redefined hybrid cloud architecture for the data centre, VMware has also made a slew of multi-cloud focused announcements.
VMware Cloud Disaster Recovery is a new on-demand disaster recovery-as-a-service that protects on-premises vSphere workloads by recovering them onto VMware Cloud on AWS; while VMware Cloud on Dell EMC, the data centre-as-a-service offer of Dell Technologies Cloud, has added new VMware HCX workload migration capabilities, improved performance, new host types, and support for multiple clusters within a single rack.
VMware Horizon 8 is available for deployment on VMware Cloud on AWS, VMware Cloud on Dell EMC, Google Cloud VMware Engine, and Azure VMware Solution.
New cloud management services also unveiled include the Cloud Management Hybrid Subscription Solution, with the VMware vRealize Cloud Universal combining SaaS and on-premises management software into a single subscription licence; as well as the VMware vRealize AI Cloud -- formerly Project Magna -- which has been touted as an intelligent, self-tuning cloud service for application performance optimisation.
VMware said it is also adding more cloud automation and scale, uptime and resiliency, predictive analytics, and intelligence to the virtual cloud network. New capabilities in VMware NSX-T 3.1 deliver better support for large-scale global deployments and disaster recovery use cases, it said.
VMware also announced that CloudHealth by VMware now supports Oracle Cloud Infrastructure (OCI), and that its CloudHealth Secure State adds real-time monitoring for Google Cloud, as well as 20 new AWS and Azure services, including managed Kubernetes and serverless configurations.
Formerly Project Path, VMware Cloud Partner Navigator will also enable partners to expand business opportunities beyond their own clouds and VMware Cloud Director 10.2 adds a set of capabilities to let partners expand their service offerings.
With the new consolidated VMware Marketplace, customers also have access to thousands of validated third-party, open-source, and first-party solutions. These can be deployed across vSphere, VMware Cloud on AWS, VMware Cloud on Dell EMC, and VMware Tanzu environments.
Azure VMware Solution, CloudHealth support for OCI, VMware Cloud on AWS support for VMware Tanzu, VMware Cloud on Dell EMC updates, VMware vRealize Cloud Universal, and VMware Marketplace are all available now.
VMware Cloud Partner Navigator is in preview and all other products and services are expected to be available by the end of October.
|
OPCFW_CODE
|
let req = require('request');
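// Configuration: fill these in before deploying. The endpoint is the Face API
// person-groups base URL; the region/path below is an assumption, so check
// your own resource (e.g. https://<region>.api.cognitive.microsoft.com/face/v1.0/persongroups).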
const FACE_API_KEY = '';
const PERSON_GROUP_ENDPOINT = '';
const PERSON_GROUP_ID = '';
const PERSON_GROUP_NAME = '';
module.exports = function (context, person) {
    context.bindings.personTable = []; // rows to write back to the table storage output binding
let pgId, pId;
// check if the person group has been created.
ensurePersonGroup(context)
.then((personGroupId)=>{
pgId = personGroupId;
return ensurePerson(context, personGroupId, person);
})
.then((personId) => {
pId = personId;
let addingFaces = [];
person.faceImages.forEach(faceImage => {
addingFaces.push(addingPersonFace(context, pgId, pId, faceImage));
});
return Promise.all(addingFaces);
})
.then(() => {
context.log('[Final] Training person group...');
req({
url: `${PERSON_GROUP_ENDPOINT}/${pgId}/train`,
method: 'POST',
headers: {
'Ocp-Apim-Subscription-Key': FACE_API_KEY
}
}, (e, r, b) => {
context.log('Done.')
context.done();
});
});
};
/**
* Ensure the person group is created.
*
* @param object context
*/
function ensurePersonGroup(context) {
context.log('[PersonGroup] Check if the person group is created...');
return new Promise((resolve, reject) => {
req({
url: `${PERSON_GROUP_ENDPOINT}/${PERSON_GROUP_ID}`,
method: 'GET',
headers: {
'Ocp-Apim-Subscription-Key': FACE_API_KEY
},
json: true
}, (err, response, body) => {
if (body.error) {
context.log('[PersonGroup] Person group does not exist.');
req({
url: `${PERSON_GROUP_ENDPOINT}/${PERSON_GROUP_ID}`,
method: 'PUT',
body: {
"name": PERSON_GROUP_NAME
},
headers: {
'Content-Type': 'application/json',
'Ocp-Apim-Subscription-Key': FACE_API_KEY
},
json: true
}, () => {
resolve(PERSON_GROUP_ID);
})
} else {
context.log('[PersonGroup] Person group existed.');
resolve(PERSON_GROUP_ID);
}
});
});
}
/**
 * Ensure the person exists on Face API, creating it if necessary.
 *
 * @param object context
 * @param string personGroupId The id of the person group.
 * @param object person The person, with name, data and faceImages.
 * @return string The Person Id (resolved by the returned Promise).
 */
function ensurePerson(context, personGroupId, person) {
return new Promise((resolve, reject) => {
context.log('[Person] Check if the person existed...');
// get
let result = context.bindings.personEntity.find((element) => element.RowKey == person.name);
if (result !== undefined) {
context.log(`[Person] Person ${result.PersonId} existed`);
resolve(result.PersonId);
} else {
context.log(`[Person] Creating new person...`);
req({
url: `${PERSON_GROUP_ENDPOINT}/${personGroupId}/persons`,
method: 'POST',
json: true,
headers: {
'Content-Type': 'application/json',
'Ocp-Apim-Subscription-Key': FACE_API_KEY
},
body: {
'name': person.name,
'userData': JSON.stringify(person.data)
}
}, (e, r, b) => {
// write back to table storage
context.log(`[Person] New person ${b.personId} has been created...`);
context.bindings.personTable.push({
'PartitionKey': 'LearnedFace',
'RowKey': person.name,
'PersonId': b.personId
});
resolve(b.personId);
});
}
});
}
/**
 * Add a face image to a specified person.
 *
 * @param object context
 * @param string personGroupId
 * @param string personId
 * @param string faceImageUrl
 * @return string The persisted face id (resolved by the returned Promise).
 */
function addingPersonFace(context, personGroupId, personId, faceImageUrl) {
return new Promise((resolve, reject) => {
context.log('[AddingPersonFace] Adding person face...');
req({
url: `${PERSON_GROUP_ENDPOINT}/${personGroupId}/persons/${personId}/persistedFaces`,
method: 'POST',
json: true,
headers: {
'Content-Type': 'application/json',
'Ocp-Apim-Subscription-Key': FACE_API_KEY
},
body: {
'url': faceImageUrl
}
}, (e, r, b) => {
context.log(`[AddingPersonFace] Added face ${b.persistedFaceId}...`);
resolve(b.persistedFaceId);
});
});
}
|
STACK_EDU
|
<?php
chdir( dirname(__FILE__) . '/../../plugin' );
$plugin_dir = getcwd();
require_once $plugin_dir . '/WP2Static/WP2Static.php';
require_once $plugin_dir . '/WP2Static/HTMLProcessor.php';
require_once $plugin_dir . '/URL2/URL2.php';
use PHPUnit\Framework\TestCase;
final class HTMLProcessorIsInternalLinkTest extends TestCase {
/**
* @dataProvider internalLinkProvider
*/
public function testDetectsInternalLink( $link, $domain, $expectation ) {
/*
$link should match $domain
$domain defaults to placeholder_url
we've rewritten all URLs before here to use the
placeholder one, so internal link usually(always?)
means it matches our placeholder domain
TODO: rename function to reflect what it's now doing
*/
$processor = $this->getMockBuilder( 'HTMLProcessor' )
->setMethods(
array(
'loadSettings',
)
)
->getMock();
$processor->method( 'loadSettings' )->willReturn( null );
$processor->settings = array();
$processor->placeholder_url = 'https://PLACEHOLDER.wpsho/';
$result = $processor->isInternalLink( $link, $domain );
$this->assertEquals(
$expectation,
$result
);
}
public function internalLinkProvider() {
return [
'site root' => [
'https://PLACEHOLDER.wpsho/',
null,
true
],
'internal FQU with file in nested subdirs' => [
'https://PLACEHOLDER.wpsho//category/travel/photos/001.jpg',
null,
true
],
'external FQU with matching domain as 2nd arg' => [
'http://someotherdomain.com/category/travel/photos/001.jpg',
'http://someotherdomain.com',
true
],
'not external FQU' => [
'http://someothersite.com/category/travel/photos/001.jpg',
null,
false
],
'not internal FQU with different domain as 2nd arg' => [
'https://PLACEHOLDER.wpsho//category/travel/photos/001.jpg',
'http://someotherdomain.com',
false
],
'not subdomain' => [
'https://sub.PLACEHOLDER.wpsho/',
null,
false
],
'not internal partial URL' => [
'/category/travel/photos/001.jpg',
null,
false
],
];
}
}
|
STACK_EDU
|
IP Multicast and Firewalls
Network Working Group R. Finlayson
Request for Comments: 2588 LIVE.COM
Category: Informational May 1999
IP Multicast and Firewalls
Status of this Memo
This memo provides information for the Internet community. It does
not specify an Internet standard of any kind. Distribution of this
memo is unlimited.
Copyright (C) The Internet Society (1999). All Rights Reserved.
Many organizations use a firewall computer that acts as a security
gateway between the public Internet and their private, internal
'intranet'. In this document, we discuss the issues surrounding the
traversal of IP multicast traffic across a firewall, and describe
possible ways in which a firewall can implement and control this
traversal. We also explain why some firewall mechanisms - such as
SOCKS - that were designed specifically for unicast traffic, are less
appropriate for multicast.
A firewall is a security gateway that controls access between a
private administrative domain (an 'intranet') and the public Internet.
This document discusses how a firewall handles IP multicast traffic.
We assume that the external side of the firewall (on the Internet)
has access to IP multicast - i.e., is on the public "Multicast
Internet" (aka. "MBone"), or perhaps some other multicast network.
We also assume that the *internal* network (i.e., intranet) supports
IP multicast routing. This is practical, because intranets tend to
be centrally administered. (Also, many corporate intranets already
use multicast internally - for training, meetings, or corporate
announcements.) In contrast, some previously proposed firewall
mechanisms for multicast (e.g., ) have worked by sending *unicast*
packets within the intranet. Such mechanisms are usually
inappropriate, because they scale poorly and can cause excessive
network traffic within the intranet. Instead, it is better to rely
upon the existing IP multicast routing/delivery mechanism, rather
than trying to replace it with unicast.
This document addresses scenarios where a multicast session is
carried - via multicast - on both sides of the firewall. For
instance, (i) a particular public MBone session may be relayed onto
the intranet (e.g., for the benefit of employees), or (ii) a special
internal communication (e.g., announcing a new product) may be
relayed onto the public MBone. In contrast, we do not address the
case of a roaming user - outside the firewall - who wishes to access
a private internal multicast session, using a virtual private
network. (Such "road warrior" scenarios are outside the scope of
this document.)
As noted by Freed and Carosso, a firewall can act in two different ways:
1/ As a "protocol end point". In this case, no internal node
(other than the firewall) is directly accessible from the
external Internet, and no external node (other than the
firewall) is directly accessible from within the intranet.
Such firewalls are also known as "application-level gateways".
2/ As a "packet filter". In this case, internal and external
nodes are visible to each other at the IP level, but the
firewall filters out (i.e., blocks passage of) certain packets,
based on their header or contents.
In the remainder of this document, we assume the first type of
firewall, as it is the most restrictive, and generally provides the
most security. For multicast, this means that:
(i) A multicast packet that's sent over the Internet will never
be seen on the intranet (and vice versa), unless such packets
are explicitly relayed by the firewall, and
(ii) The IP source address of a relayed multicast packet will be
that of the firewall, not that of the packet's original
sender. To work correctly, the applications and protocols
being used must take this into account. (Fortunately, most
modern multicast-based protocols - for instance, RTP -
are designed with such relaying in mind.)
3. Why Multicast is Different
When considering the security implications of IP multicast, it is
important to note the fundamental way in which multicast
communication differs from unicast.
|
OPCFW_CODE
|
Create rules_release
ref #451
The flow
write file: https://github.com/changesets/changesets/tree/main/packages/write
changelog content: https://github.com/changesets/action/blob/main/src/utils.ts#L37
[x] Figure out calling dependencies with bzlmod: getting "No repository visible as '@aspect_rules_js' from main repository" (https://bazelbuild.slack.com/archives/C014RARENH0/p1698918118051159, https://github.com/aspect-build/rules_js/pull/1342)
[x] Setup bazel-diff script which runs against latest master
[x] Restructure files like rules_python or rules_py
[x] Setup release rule which stores information about the release
[x] Parse command line arguments in the runner (https://github.com/tj/commander.js#installation)
[x] Setup generator command which generates changesets file for each release
[x] Support for external repositories, like rules_task and rules_release itself?
[ ] Refactor using repositories and action classes
[ ] Setup tests using rules_jest
[ ] Setup version command which versions the release version files based on changesets data
[ ] Get new version and read changelog entry using version
[ ] Publish artifacts for each changed release with version (like docker image)
[ ] Create GitHub release for each changed release with version and changelog
[ ] Run git tag command to update tags inside repository
[ ] Commit files and push to master
[ ] Setup test which fails if not all changeset entries have been created
[ ] Ability to dry run
[ ] Remove semantic release for rules_task
[ ] Setup spinner when figuring out changes with Ink and https://github.com/vadimdemedes/ink-spinner
API example
load("@rules_release//:defs.bzl", "release", "release_manager")
release(
name = "bunq2ynab_release",
version_file = "version.txt", # version file gets bumped with a new version
target = ":bunq2ynab_image", # the Bazel target that if changed will trigger a new release (up to contributor to decide if it's "patch", "minor" or "major"
publish = [":bunq2ynab_image_push", ":publish_github_release"], # commands to run when publishing a new release
)
# The release manager will collect all the release information and act accordingly
release_manager(
name = "release_manager",
deps = [
":bunq2ynab_release",
]
)
As a strategy to get the version bumps from changesets:
Create tmp/changesets directory
For each release target create a directory like
tmp/changesets/rules_task
tmp/changesets/bunq2ynab
Within those directories generate a package.json which is marked as private to prevent publishing (a sketch is shown after this list)
Set the version attribute to the value of the version file of the release rule
Using Bazel figure out dependencies and add them to the package file like
"dependencies": {
"rules_task": "workspace:^"
}
Setup appropriate pnpm-workspace.yaml to target tmp/* (can we live without this file?)
Run changeset version
Copy the values from the updated package version back into the release rule version file
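For illustration, a generated tmp/changesets/bunq2ynab/package.json might look like this (a sketch; the name, version and dependency follow the targets above, but the exact values are placeholders):
{
  "name": "bunq2ynab",
  "version": "1.2.3",
  "private": true,
  "dependencies": {
    "rules_task": "workspace:^"
  }
}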
Bazel query commands:
Find all the recursive source files (including external) of the rules_task workspace
bazel query --noimplicit_deps --notool_deps 'kind("source file", deps(@rules_task//...))'
|
GITHUB_ARCHIVE
|
[QUESTION] Support duration in the noteon function
Documentation Confirmation
Please confirm that you have read the documentation and have not found an answer to your question there.
Yes
Question's Topic
spessasynth_lib
The Question
Hi, first thanks for the awesome lib. I have a small question. It seems that backend synthesisers like FluidSynth can support a duration in the "note on" function.
It has a loop mode, so it can play the soundfont's sustain time indefinitely. (The time between the green line and red line)
Is this supported in the library? Would it be useful for synthesising a long note in the MIDI? (It seems that for now, the library just triggers the note once and lets it decay over time no matter how long the note is. Is this accurate enough for playing a long note? I am a newbie in music and I only checked the code. Please correct me if I am wrong.)
Thanks for the help!!
Yes, it is supported. Both looping and decay. If your soundfont sets decay to a really high value (like 144 seconds) the note will keep playing even after note-off. It depends on your soundfont.
For example, if you start a note with patch 80 (square) in the bundled soundfont it will keep playing until you send a note-off.
Hi, thanks for the reply. May I know if there is an example that I can refer to? I checked the doc here: https://github.com/spessasus/SpessaSynth/wiki/Synthetizer-Class#noteon but I see noteOn has only very limited params. May I know where I can get the info for loop and decay?
You can't see that info. It's internal. I'm not sure why you would even want to see it. To keep playing the sample, simply send a note-on and don't send a note-off on a patch that loops (flute for example) and it will keep playing (be looped).
If you really want to see each sample's loop points though, you must use the SoundFont2 class. It has a property called samples and each sample has a defined loop start and loop end. It's documented in the wiki.
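To emulate a fixed note duration in application code, the usual trick is to schedule the note-off yourself — a minimal sketch, assuming the noteOn(channel, midiNote, velocity) and noteOff(channel, midiNote) methods from the linked wiki page; the synth variable and the 2-second duration are hypothetical:
synth.noteOn(0, 60, 127);    // channel 0, middle C, full velocity
setTimeout(() => {
    synth.noteOff(0, 60);    // triggers the release phase
}, 2000);                    // hypothetical 2-second "duration"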
For my understanding, there are four phases of sound: 1) Attack, 2) Decay, 3) Sustain, 4) Release.
If note-on is triggered, Attack, Decay and Sustain will happen. For some MIDI programs, Sustain is a constant phase with no decay. But this is not realistic. For a real piano, even without a note-off, the sound will decay little by little. Some advanced programs can work like this.
If note-off is triggered, a strong decay is applied to the sound. This is the release phase. The sound will usually decay to zero very fast.
I tested your example program; like a real piano, the sound will finally disappear even if I do not send note-off. This is nice, I think, but I just do not know where I can set this "loop time" and "slow decay" factor, to change 1) how much loop sustain time we have and 2) how long sustain takes to go to zero (even if I do not send note-off).
Please correct me if I am wrong, since I am totally a rookie in music... Thanks a lot for your great lib and generous help!
To set the loops points, you have to edit the soundfont itself. I recommend Polyphone for that.
Oh, I see. Sorry, I misunderstood. I thought it needed to be set in the library, but in fact those are params in the soundfont. Thanks a lot!
That's okay, this is what questions are for.
|
GITHUB_ARCHIVE
|
For: 1st International Workshop on Algorithmic affordances in recommender interfaces held in conjunction with INTERACT 2023 19th IFIP TC13 Conference on Human- Computer Interaction.
AIMS AND SCOPE
Algorithms play a significant role in our daily lives, making decisions for users on a regular basis. This widespread adoption necessitates a thorough examination of how users interact with algorithms via interfaces, particularly in the context of recommender systems. The design of a recommender’s interface, and specifically its algorithmic affordances, has a serious impact on the user experience. Algorithmic affordances are mechanisms in the interface of recommender systems that allow users tangible control over the algorithm. A straightforward example of an algorithmic affordance is ‘feeding the algorithm’, where the user specifically provides data to the algorithm to influence subsequent recommendations. Examples of implementations of ‘feeding the algorithm’ are rating and blacklisting. Other algorithmic affordances are, for instance, explanations, or allowing a user to manipulate parameters in a way that shifts the recommendations’ original prominence.
For recommender interface design, it is crucial to understand how algorithmic affordances impact interaction qualities such as transparency, trust, and serendipity, and as a result, the user experience. Currently, the precise nature of the relation between algorithmic affordances, their implementation in recommender interfaces, interaction qualities, and user experience remains unclear. Consequently, much is still to be explored in this domain; furthermore, designers are largely without guidance when making design choices on algorithmic affordances in their algorithm-driven design projects. In response, this one-day workshop aims to bring together designers and researchers, providing a platform to exchange insights, research findings, design experiences, and knowledge on these complex interrelationships. The concluding segment of the workshop will focus on exploring the feasibility of a prospective tool designed to facilitate collaboration between designers and researchers in this field to aid both research and design practice.
- the user’s experience of increased control provided by algorithmic affordances
- mental model construction signalled by algorithmic affordances
- design patterns of algorithmic affordances
- balancing algorithmic affordances & cognitive overload
- how interface elements signal the presence of algorithmic affordances
- the relationship between algorithmic affordances and various interaction qualities
- general principles of the relationship between algorithmic affordances, interaction qualities and user experience
- a practitioner’s hands-on experience with designing algorithmic affordances in a recommender’s interface
- means to have fundamental research results on algorithmic affordances in recommender interface design land in the design practice
We invite both researchers and designers to submit papers and/or relevant examples. We will reserve a limited amount of spots for participants who did not submit a paper but for whom the topic is relevant. Look under “how to submit” to see how you can make your interest known.
|09:00 – 09:30 |Introduction + Keynote
|09:30 – 10:45 |Panel session 1: short presentations by authors of accepted papers/accepted cases
|10:45 – 11:00 |
|11:00 – 12:15 |Panel session 2: short presentations by authors of accepted papers/accepted cases
|12:15 – 13:00 |
|13:00 – 14:15 |Panel session 3: short presentations by authors of accepted papers/accepted cases
|14:15 – 14:45 |Presentation concept of an algorithmic affordances pattern library
|14:45 – 15:00 |
|15:00 – 17:00 |Case studies in groups (including breaks and plenary presentations): on translation of academic results to the design practice
SUBMISSIONS AND PUBLICATIONS
We invite ux/ui-designers, researchers into HCI and HAII, and AI engineers working on topics or designs related to recommender interfaces to submit original papers of the following kinds:
- research papers describing recent results, user studies, literature reviews
- statements of interests or position papers describing novel ideas or perspectives
- case studies of interfaces including algorithmic affordances designed by the submitter, including a rationale of the decisions regarding algorithmic affordances and interaction qualities
- case studies of recommender interfaces encountered by the submitter, that sparked ideas or considerations on algorithmic affordances and interaction qualities in recommenders
All submissions should represent original and previously unpublished work currently not under review in any conference or journal. Papers, submitted design examples, and statements of interest will be peer-reviewed and selected for relevance and likelihood of stimulating and contributing to a discussion related to the workshop theme. Should an NDA for participants be needed for the discussion of examples, that can be taken under consideration.
– The maximum length for papers is 6 pages in the Springer format, including references and a 300-word abstract. Shorter submissions are also welcome.
– When submitting an example of a design, please use the abstract to explain the context of the design and the prime reason for its relevance for this workshop, and use the file to provide clear visual material of the interfaces as far as that is possible without breaching confidentiality agreements. Maximum length, here, too is 6 pages.
– When submitting a statement of interest, please use the abstract to make your interest known and elaborate on it a bit. You may use the file to submit e.g. a list of previous research, a recent relevant paper that you have written but that has been published already or is under review, or an updated CV, so we can ensure an engaged and active group of participants.
Submitters of accepted contributions (paper or case study) or requests for participation must guarantee their presence at the workshop (or send a representative). Accepted work, both papers and design examples, will be published in a volume dedicated to the proceedings of this workshop.
IMPORTANT DATES AND DEADLINES
- August 4, 2023: deadline submission of position papers to workshops
- August 7, 2023: Notification to submitters of papers and examples (earlier submitters will get earlier feedback, so you can make travel plans)
- June 6, 2023: Early bird registration opens (https://store.york.ac.uk/product-catalogue/computer-science/interact-2023)
- August 28 – September 1st: Interact 2023 (the workshop is on 29 August)
Deadlines are AOE
- Aletta Smits – HU University of Applied Sciences, Utrecht, The Netherlands
- Ester Bartels, MA – HU University of Applied Sciences, Utrecht, The Netherlands
- Chris Detweiler – The Hague University of Applied Sciences, The Hague, The Netherlands
- Koen van Turnhout – HU University of Applied Sciences, Utrecht, The Netherlands
For further information and questions, please contact: email@example.com at HU University of Applied Sciences
|
OPCFW_CODE
|
I have a DB2 table that stores lat/long data in plain decimal fields, rather than in a field with a spatial data type (i.e., a point representation of these coordinates). I have written a query that uses the DB2 function db2gse.ST_Point to convert these fields into a point. Within a Jupyter Notebook this seems to work fine, and I create a field that carries the spatial reference for the point based on the lat/long. However, if I try to include this function as part of my query within an ESRI Query Layer, it will not show the field to select as the spatial reference. Any advice on this?
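For reference, the kind of query-layer SQL involved looks roughly like this — a sketch with hypothetical table/column names, and the spatial reference id is an assumption (use the WGS84 SRS id defined in your database):
SELECT id,
       latitude,
       longitude,
       db2gse.ST_Point(longitude, latitude, 1) AS shape  -- srs_id 1 is a placeholder
FROM   tracking.positions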
I am trying to map live data. Our database is updating about every 5 minutes and I am setting the query to pull the most recent lat/long entry for each object. I think I would need it to be a query layer for this, as ultimately I would like to host this layer on our portal if possible, after getting it to work properly within ArcGIS Desktop.
You will be able to achieve all of this with an XY Event Layer.
It appears the OP eventually wants to publish this data/layer as a service so it can be accessed within a Portal. If so, the following warning message will likely be relevant, at least if there are many point locations involved:
10043: Layer's data source is an XY Event Table
A layer in your document is using an XY event table as a data source, which can perform slowly in many situations.
XY event data sources are commonly used to draw point data originating from a data source that is not spatially enabled. In this respect, XY event data sources are a powerful way to integrate simple point data into your map. However, the simplicity of this integration comes with a cost in that XY event sources cannot take advantage of spatial indexing that makes spatial data sources perform well. Consider converting your data to a spatial data source such as a file geodatabase or enterprise geodatabase. If the data resides in a relational database that cannot be geodatabase enabled, consider using the database's native spatial storage type and drawing it with a query layer as an alternative to XY events.
In this case, I think a query layer will run into the exact same issue since the spatial points are being generated dynamically, and hence won't be indexed.
DB2 data types supported in ArcGIS—Help | ArcGIS Desktop
ArcGIS works with specific data types. When you access a database table through a Database Connection or a query layer, ArcGIS filters out any unsupported data types. ArcGIS will not display unsupported data types and you cannot edit them through ArcGIS.
So make sure that the Data Type is supported.
I don't work with DB2 enough to know whether this will work. Regardless of whether you can get it technically working, you aren't going to want to take this approach. The problem with generating the spatial points on-the-fly or using an XY event layer is that the spatial points are never indexed, since they are generated dynamically. Without a spatial index on the spatial data, the performance of the layer will tank. If you are working with a few tens or hundreds of points, you might be able to get away with it. If you are working with tens of thousands of points or more, be prepared to wait.
Typically, text representations of spatial coordinates are used just for storing or data-interchange, not doing any type of visualization or analysis/processing. In this case, can your workflow be changed so the points are natively stored as points, and when you need a text representation, you dump it at that time?
We were trying to get around having the database developers actually intervene and create new fields in the database and natively store the data as points in DB2 (not even sure if they can). I am on a separate team that does some ad-hoc data analysis, network redesign, testing new solutions, etc. I believe what I could do is query this data into a Postgres table and geo-enable it with the PostGIS extension, then set up ESRI to query from that table, possibly having a job run every so often so my Postgres table queries and updates from the DB2 table. This seems inefficient, but it may be a possibility that could avoid having others do any work... should this be able to work?
Also, this project is mapping a smaller set of objects. Less than 100 objects will be mapped based on their most recent lat/long entry only.
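If you do go the PostGIS route, the geo-enabling step is roughly the following — a sketch with hypothetical table/column names:
-- add a geometry column and populate it from the lat/long decimals
ALTER TABLE positions ADD COLUMN geom geometry(Point, 4326);
UPDATE positions
SET    geom = ST_SetSRID(ST_MakePoint(longitude, latitude), 4326);
-- index it so the layer draws and filters quickly
CREATE INDEX positions_geom_idx ON positions USING GIST (geom);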
Given the small number of objects/points involved, not having the points spatially indexed might not matter. I would invest some time trying to get this to work technically, either with a query layer or XY event layer, before committing to the larger investment of having a PostgreSQL reporting database stood up.
As I said, I don't work with DB2, so I don't have any specific pointers, but I do encourage you to explore XY event layers as Jake suggests to see if they will work for you.
|
OPCFW_CODE
|
import time
import socket
import unittest
import collections
import multiprocessing
from smtp import ResSMTP
from smtpd import SMTPServer
class MockSMTPServer(SMTPServer):
def process_message(self, peer, mailfrom, rcpttos, message_data):
self.log_info("message come from: %s" % str(peer))
self.log_info("message from: %s" % mailfrom)
self.log_info("message rcpt: %s" % str(rcpttos))
self.log_info("message length: %s" % len(message_data))
self.log_info("message data length: %s" % len(message_data))
def start_mock_mail_server():
import asyncore
    # note: binding port 25 typically requires elevated privileges
    MockSMTPServer(('0.0.0.0', 25), None)
asyncore.loop()
class TestResSMTP(unittest.TestCase):
def setUp(self):
self.smtpd_host = '127.0.0.1'
self.smtpd_port = 25
self.sender = "sender@test.com"
self.rcpt = "rcpt@test.com"
self.rcpts = "rcpt01@test.com;rcpt02@test.com"
self.email = "test email"
self.mock_smtpd = multiprocessing.Process(target=start_mock_mail_server)
self.mock_smtpd.start()
time.sleep(1) # wait for mail server to start
def tearDown(self):
if self.mock_smtpd:
self.mock_smtpd.terminate()
self.mock_smtpd.join()
def test_send_email_ok(self):
s = ResSMTP()
s.connect(self.smtpd_host, self.smtpd_port)
s.sendmail(self.sender, self.rcpt, self.email)
s.quit()
self.assertIsInstance(s.results, collections.OrderedDict)
self.assertDictContainsSubset({'connect_args': '%s:%s' % (self.smtpd_host, self.smtpd_port)}, s.results)
self.assertEqual(220, s.results['connect_response'][0])
self.assertIn(socket.getfqdn(), s.results['connect_response'][1])
self.assertDictContainsSubset({'helo_args': '',
'helo_response': (250, socket.getfqdn())}, s.results)
self.assertDictContainsSubset({'mail_args': self.sender,
'mail_response': (250, 'Ok')}, s.results)
self.assertDictContainsSubset({'rcpt_args': [self.rcpt, ],
'rcpt_response': [(250, 'Ok'), ]}, s.results)
self.assertDictContainsSubset({'data_args': self.email,
'data_response': (250, 'Ok')}, s.results)
self.assertDictContainsSubset({'quit_response': (221, 'Bye')}, s.results)
def test_send_email_to_rcpts(self):
rcpts = self.rcpts.split(';')
s = ResSMTP()
s.connect(self.smtpd_host, self.smtpd_port)
s.sendmail(self.sender, rcpts, self.email)
s.quit()
self.assertDictContainsSubset({'rcpt_args': list(rcpts),
'rcpt_response': [(250, 'Ok'), ]*len(rcpts)}, s.results)
if __name__ == '__main__':
unittest.main()
|
STACK_EDU
|
Please note: the addresses on this page have been masked such that they do not appear as addresses to automated harvesting programmes. Please accept our apologies for any inconvenience caused by this measure, but we hope that it will enable us to make sure that there is as little anti-spam protection (and hence possible collateral) on the automated addresses as possible. When you paste them into your email client, they should be used without the spaces.
These are the instructions for using the Colondot/Nodnol OpenSRS Domain Registration system.
This document assumes that you have a PGP key all set up, and that you understand how to send PGP-signed mails. For more help with this, see: The GnuPG project, Enigmail for Mozilla and Netscape, The Mutt MUA
The domain robot will accept PGP clearsigned or PGP/MIME mail. For any operation which results in a change to the database, the mail MUST be signed or the message will bounce. These are the messages sent to: domain @ domain . colondot . net
Under some circumstances, you will get more information back from the system (eg. when generating a template for an object which already exists in our database), if the request for information is signed by an appropriate key.
The key database
In order to start using the system, the first procedure is to send your key to our key-database robot. To do this, export your key as ASCII armoured, and include it in the body of a message to: pgpkeys @ domain . colondot . net. We do not currently support the automated retrieval of keys from keyservers, though this may be a feature in the future. You will get a message confirming that your key has been added, or that it was already in the database. The supernotify contact will also get notified.
Once you've registered your PGP key with the system, you can submit templates.
The templates take different options depending on what kind of object you are referring to. There are, however, three basic types:
The Domain template (example)
This is the one that is likely to be used most often. In its simplest form, all the fields for contacts are there, plus such things as nameservers. The contact fields are required to fit particular formatting, and you may find verification errors. If you leave the first line of a contact blank, it is assumed that you want to inherit it from the previous contact. You may specify between 2 and 6 nameservers. Notifies that are added to a domain template will be sent every time this domain record is updated.
If you want a blank form, you can send a message to domain - template @ domain . colondot . net. If you include a line
Domain: your domain name
in your mail, then you will get back a template with the appropriate domain and the publicly available information that is known filled into the template. If the domain is already on our system, and the request mail is signed with an appropriate PGP key, then you will get all the information our system holds.
The User template (example)
This allows you to set notifications associated with a given PGP key. This means that the people who are on the list will be notified with the result mail whenever that key is used to sign a message. As with the domain template above, you can request an appropriate template by sending a message to user - template @ domain . colondot . net. If you include a line
PGPKey: your keyid
in your mail, then you will get back a template with the appropriate pgp key filled in. If you sign this request, and notifies have already been set, then they will also be filled in, otherwise they will be left blank.
The Host template (example)
This allows for a glue record to be set up for a nameserver in this domain. This doesn't have to be used as a nameserver for the domain in question, but the domain does have to be on our system, and the signing key should have nameserver modification privileges on the domain. You can get a mocked up host template by sending a message to host - template @ domain . colondot . net. If you include a line
Host: the dns name of your host
in your mail, then you will get back a template with the appropriate values filled in. If the host is already in the DNS, then the IP address will be looked up and filled in. If you sign the original message with an appropriate key, then the notifies for changes to that host will get shown too.
If you want a description of what each of the fields in the template does and what values it may take, then you should email: help @ domain . colondot . net.
This page last modified on
Tuesday, 05-Jul-2016 10:21:17 UTC
Contact <email@example.com> for more information about this site, or <firstname.lastname@example.org> if you want not to be able to send any more mail to this machine.
|
OPCFW_CODE
|
“The Snake Is Not Mine” Kwekwe Python Man Speaks Out Following Arrest
The Kwekwe man who created a buzz in the Central Business district on Tuesday after he was found in possession of a huge python has denied having any connection to the reptile.
Tatenda Mutema (31), a soldier based at 5 Infantry Brigade just outside Kwekwe claims he is not the owner of the snake.
Speaking after his arrest, Mutema told The Chronicle that when the owner of the vehicle gave him the car to drive he wasn’t aware that there was a huge reptile stashed in the car.
He claims that he doesn’t know the owner of the vehicle well, all he knows is that he owns a plot near the army barracks.
“I do not even know the name of the owner of the car to be honest. I met him one day when I was on my way to Kwekwe from the barracks.
He asked me if I had a driver’s licence which I produced and he gave me the vehicle to drive,” narrated Mutema.
He said on the following day, the owner of the vehicle asked him to drive him to Bulawayo.
“I drove him to Bulawayo together with two other women. Upon arrival, he and the ladies hopped into a brand-new Honda Fit Hybrid. He then asked me to drive back to Kwekwe as he said he was on his way to South Africa,” said Mutema.
He said the vehicle owner instructed him not to use the money he collected from passengers but to take it to his family at the farm.
Mutema claims that he did as he was instructed and left some cash at the owner’s farm before driving away.
He revealed that on his way from the farm, he gave a lift to another man, whom he asked what exactly the owner of the vehicle did for a living, because he had seen brand new cars at his farm.
The man allegedly told him that all he knew was that the man was a sangoma.
Mutema said he never took it seriously and went about his business.
However, disaster struck when Mutema picked up a female friend and decided to use some money he had collected to buy the friend lunch.
“I took a US$10 note to buy lunch. Upon returning I opened the door and was shocked to see part of the dashboard broken. When I checked at the back seat I noticed a huge snake and quickly grabbed the girl and locked the car,” he said.
He refuted claims that he fled from the scene after the snake was spotted inside the car.
“I did not run away, I was there when the rangers removed the snake. I was still in a state of shock and I feared the people’s response,” he said.
Mutema only handed himself to the police after the snake was successfully retrieved from his Toyota Wish by rangers from the Zimbabwe National Parks and Wildlife Authority (Zimparks).
He was charged with possession of a python, which is a protected species.
|
OPCFW_CODE
|
I have two Threat Management Gateway Enterprise servers set up in a standalone array for authenticated web proxying. I can verify NLB is working, but when I restart tmgA, which is the array manager, tmgB does not maintain the authenticated web proxy as I understand it should: my understanding is that it should keep serving the web proxy using the last cached copy of the configuration it synced with the array manager. Both servers are set up using the single network adapter topology. I'm sure I'm forgetting some helpful configuration information here. Any help to get the failover working is much appreciated. Thanks!
You can get better availability from TMG in one or more of the following ways:
The TMG boxes in an array maintain an updated wpad.dat, even if autodiscovery is turned off. This file contains a list of all the nodes in the array, local name or IP exclusions, and so on, according to the settings on the relevant Network properties dialog, on the Web Browser tab (and related tabs). The algorithm used is the client-side implementation of CARP, and this includes a failover mechanism if the proxy used doesn't respond.
To use this, you need to configure clients to use (and your network to support) WPAD autodetection, or if that's hard, point them at the autoconfig URL explicitly: http://proxy:8080/wpad.dat. An NLB IP is fine; a dedicated IP or name is fine.
The default file format includes IP addresses of each node, and each node may be a backup for each other node, so if the connection to .1 fails, .2 may be tried for the same URL. You get "loose" availability in this way, just by using the script, without NLB being involved, depending on client behaviour.
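To illustrate the failover idea only — a simplified, hypothetical PAC function; the real wpad.dat that TMG generates is far more involved and orders the nodes via CARP:
function FindProxyForURL(url, host) {
    // try node .1 first, fall back to node .2, then go direct
    return "PROXY 192.0.2.1:8080; PROXY 192.0.2.2:8080; DIRECT";
}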
Network Load Balancing
Every node in an array has a dedicated IP address (and the dedicated i.e. unique IP should be the one specified for intra-array communication, and the first IP listed in NIC properties->IPV4), but every node also shares any virtual IP addresses with all other nodes.
Pointing a client at NLBIP:8080 means that when one node fails, the client will connect to the other node, after NLB reconverges.
NLB provides box-dead failover only, and using Integrated NLB means that when TMG stops the firewall service on a node, it also helpfully turns off NLB at the same time, so that the node stops trying to accept incoming traffic.
DNS Round Robin
The worst solution for availability, but won't hurt unless the name overlaps with something important.
How This Affects The Question
Your clients should be configured either for:
- the autoconfiguration script (WPAD autodetection or the explicit wpad.dat URL), or
- the NLB virtual IP (NLBIP:8080).
Then, when one node is turned off, the others should still be client-accessible, and should still work.
If that's not your problem, you need to troubleshoot it.
|
OPCFW_CODE
|
I’m in the process of altering a travel application for my current client. The application serves as a way of requesting permission to travel for the company and registering travel expenses.
In such a process, I always try to make some design changes to enhance the user experience. Or so I hope :-).
One particular part I’d like to share is the design of some dialogboxes. As an example, I want to show the Flight dialogbox, where flight information is filled out.
The current dialogbox
Take a look at the design (this opens a new window so you can look at it and read the remarks side-by-side):
The good thing about this layout is the overview you have, the feeling that everything you need to fill out is there in one screen. No need to search behind corners for additional fields. Information at-a-glance.
What I want to see changed:
- The airplane: I don’t like being distracted by a prominent background image, even if it’s a cool Airbus. I want to see the fields.
- The grey part: the right part of the dialogbox brutally changes into a grey zone.
- The top buttons: the application uses a principle called a table walker to enter multiple rows of data (different flights). Therefore the navigational buttons on top are needed, but they confuse me next to the OK/Cancel buttons. Buttons are all over the place.
- The colors: a bit of purple, some blue and a little grey. Only the grey seems functional to me (these are computed fields, no need to enter anything).
- The size of the dialog: there’s a lot of open space here. I don’t mind some whitespace to lighten up things a bit, but we don’t need enough space to land the airplane, thanks.
- The fields: at first sight they seem randomly thrown on the form. If I look closer, I do notice a logic order. I guess the eyes don’t really know whether they should go horizontally and follow the airbus or have to go down to the ground.
- The field sizes: the font is small, so why are the fields so tall? It’s because of the default height of native OS-style fields in Notes.
The new dialog
“Great, Martin! Very easy to break down other people’s hard work on the web!” If I do that, I have to present a better alternative. And by no means do I want to ridicule other people’s work: let’s not forget it’s sometimes easier to enhance what already exists than to create it for the first time.
This is what I came up with:
- No airplane: I removed the background image.
- System grey everywhere: no sudden color changes when system is used as the background color for the dialogbox.
- No top buttons: the navigational buttons for the table walker were removed – now we have Add – Change – Remove buttons in the main form.
- Less colors: I got rid of blue and purple and the bold for the field labels. I kept the grey, because those fields are only informative.
- A smaller dialog with still enough breathing space for the fields, but considerably smaller, so that we can still see the main form below it (I believe a dialogbox should cover only part of the main form). Of course the tabbed tables were helpful here, I’ll explain about that in another section of this article.
- Ordered fields: the tabs make the first distinction, the vertical ordering takes you by the hand after that.
- Smaller fields: I reduced the field height to 0,5cm.
I looked at the way Microsoft and other software vendors design their dialogboxes, and a common feature is tab pages. This effect can be simulated with tabbed tables in Notes.
Some design considerations:
- Use small fonts for tabs and data: I used a Tahoma 8pt.
- Give every table column in the tabs an equal width: otherwise if the columns for the labels have a different width for each tab, the fields seem to jump around the screen while switching tabs.
- Give every row of the tabbed table the same height: you can do this by assigning a minimum height (>=height of tallest row) to the table cells in the table properties. I find it less disturbing to see a tab page with some free space at the bottom, than to see the bottom line jump up and down while switching tabs.
|
OPCFW_CODE
|
// this file contains stuff that used to be parts of the code and
// aren't anymore, but might be needed in the future
static struct Point2D intersect_lines(
struct Point2D astart, struct Point2D aend,
struct Point2D bstart, struct Point2D bend)
{
// https://en.wikipedia.org/wiki/Line%E2%80%93line_intersection#Given_two_points_on_each_line
double
x1 = astart.x, y1 = astart.y,
x2 = aend.x, y2 = aend.y,
x3 = bstart.x, y3 = bstart.y,
x4 = bend.x, y4 = bend.y;
double denom = (x1-x2)*(y3-y4) - (y1-y2)*(x3-x4);
    // TODO: handle the case where denom is very small (parallel or nearly
    // parallel lines); as written, this divides by ~0 and yields inf/NaN
return (struct Point2D){
( (x1*y2-y1*x2)*(x3-x4) - (x1-x2)*(x3*y4-y3*x4) )/denom,
( (x1*y2-y1*x2)*(y3-y4) - (y1-y2)*(x3*y4-y3*x4) )/denom,
};
}
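/*
 * Usage sketch (hypothetical; assumes struct Point2D is declared elsewhere
 * as: struct Point2D { double x, y; }):
 *
 *     struct Point2D p = intersect_lines(
 *         (struct Point2D){0, 0}, (struct Point2D){1, 1},   // line a: y = x
 *         (struct Point2D){0, 1}, (struct Point2D){1, 0});  // line b: y = 1 - x
 *     // p is (0.5, 0.5)
 */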
|
STACK_EDU
|
Here are some more things I can think of to try experimenting with in case you find a combination that works. The main idea is that there is a resource issue that Photoshop is not gracefully handling.
Does it even work if you stitch a couple tiny images, and then what is the number or size of images where it starts to fail consistently, and then can you vary things related to resources where it starts to work with a larger set of images that used to fail, before?
Try changing the amount of memory available to Photoshop in Preferences, where smaller seems like the direction you can go from what you've posted.
Try changing your scratch disk to another drive, perhaps even the program drive and/or the system-boot drive. It's unlikely it's running out of 1TB but maybe something about that particular HD is causing a problem.
Watch the amount of memory used by Photoshop in Task Manager and see if it fails when the amount of memory used passes a certain threshold. Select both Commit Size and Working Set as types of memory to track in the process view, and also watch the total memory used by the system in the Performance tab.
If there is a BIOS setting when you boot your computer for how much memory your video card uses try changing it.
The memory size being a multiple of 3GB seems like an uncommon amount. Maybe take one memory chip out so it is a multiple of 2GB. Only do this if you are comfortable mucking around with your hardware and use proper static-reduction techniques, etc.
Try reducing and expanding the number of history states Photoshop keeps track of using Preferences.
You don't say anything about OpenGL settings, so I assume your card doesn't support it? If it does, try turning it off or try turning it on if it's off but can be used.
Sorry to butt in here,
I don't know the answer, but some other guy was on here a while ago & posted the
same problem with RAW files (only).
He said changing them to other formats worked, like TIFF or JPG.
So it may not be related to your comp putzing out but to the new Photomerge
being unable to handle RAW.
CS4 (sorry) sounds horrid
it is one update I will skip
I have found OpenGL to really suck for gaming applications, and too much of
CS4 seems dependent upon it.
Too bad there is no DirectX choice on CS4 (even ancient 3D vid games have it);
I have never had any issues with programs running DirectX.
Maybe Mac cannot use it?
I cannot afford to upgrade all my hardware this year, am reluctant to change
to Vista for obvious reasons, and PS is too important to my work to risk
messing up with a poorly designed upgrade that messes up functions that
previously worked well in CS3.
I love CS3 and will stick with it until this comp dies and I have to replace it.
Does anyone know if the CS3 ACR works with newer cameras if you input jpgs
not RAW formats?
I prefer to upgrade the camera before replacing the comp and PS version.
Cannot afford all 3 at once.
If ACR doesn't show your camera, you need to convert to DNG first.
Thank you for your replies.
Steve: I lowered the amount of RAM used by Photoshop from 70% to 65% and it worked. My video card is among the latest NVidia cards. Therefore I don't think that the card is the problem.
I even tried to photomerge 4 RAW images which I opened at a smaller resolution (@ 2.8 Mpx). Photomerge would not produce any results even in this case where the resources should not be a problem.
I still cannot understand this peculiar behavior. I have had PS CS4 for five months, and Photomerge used to work fine (with the same preferences, and PS & Bridge the only applications running).
Anyway, I'm glad it's fixed even with this small compromise.
Since the amount of RAM is a percentage, Photoshop's allocation depends on how much is available at the time you start up Photoshop. The fact it works one day doesn't mean it'll work the next, so further tweaks to the Photoshop allocation could be necessary, but at least you know what to try.
I would guess that the PhotoMerge process is aborting due to a memory issue, but it is deep enough that no error message is generated. This is surely a bug. Allocating even less memory to Photoshop may make it swap things out to its scratch file sooner, leaving more memory free during the PhotoMerge.
It could also be something related to OpenGL being used for image rendering and the display-driver having an issue that Photoshop can't really see.
The amount of RAM available is somewhat dependent on what applications have run before, not just what are currently running. So even if you only ever have PS and Br running, the available RAM could be different based on what applications have already run and closed, and maybe some OS component or driver has been updated since it used to work and that component uses slightly more or less memory than it used to. And even if there is an identical amount of RAM available, memory is allocated in chunks that aren't freed up immediately, so, again, it could depend on what applications have been run since a reboot.
It seems to me that CS4 and ACR were updated to 11.0.1 and 5.3 less than 5 months ago, and maybe that is what is different now.
> If ACR doesn't show your camera, you need to convert to DNG first.
jpeg to dng???
I was asking if CS4 ACR will still take jpg from the newer cameras even
though I know it will not do the RAW files
> I was asking if CS4 ACR will still take jpg from the newer cameras
It should, try it.
The data that is needed to interpret a camera-specific raw file should not be needed for a JPEG.
> It should, try it.
> The data that is needed to interpret a camera-specific raw file should not
> be needed for a JPEG.
I don't own the camera (YET)
would have to try it with a rental
You could take a card in and try a shot taken in the shop.
And while you are at it, load some images as RAW and see for yourself.
I never use jpeg.
This IS weird.
Photomerge does not work on my Mac 2x3 GHz dual-core Intel Xeon with 7 GB of RAM.
Photomerge does work on my MacBook Pro with 4 GB of RAM.
Same software, works from Lightroom, works with raw files.
Also, on my desktop machine Photoshop only recognises that I have 3072 MB available. It's the same with the laptop, yet I have 7 GB in the desktop.
Hi, I have just tried to use photomerge in CS4 Mac. Trying to merge 4 photos into a panorama, I have tried both JPEG and TIFF versions. It only seems to open one file, then freezes. I have successfully done it in CS2 on my laptop. Any ideas? Thanks
I also am having problems with photomerge crashing. Even using as few as 2 jpgs it crashes.
This is independent of running photomerge directly from PS, from within Bridge, or from within Lightroom 2.5.
Same behavior as previously reported: I see the selected files added as layers to a new file, and some processing happens, then suddenly everything closes, no evidence left behind.
I have lowered the number of history states, and the amount of memory allocated to photoshop.
I'm running on XP.
OpenGL is not supported for the nVidia GeForce Go 7400 card on my Vaio, so it is (sadly) disabled.
I still have CS2 installed on my (same) system, and interestingly, photomerge works fine in CS2, although the results are less than stellar (in all fairness, these images weren't properly shot for panorama stitching). I'd like to see what photomerge CS4 can do.
What is the fix for CS4?
What I have found to work is:
1) File -> Scripts -> Load Files into Stack, with option "Attempt to Automatically Align Source Images" checked (or unchecked).
2) Select -> All Layers in the file.
3) Edit -> Auto-Align Layers. Choose desired options (ie auto, vignette removal, geometric distortion).
4) If step 1's "Attempt to Automatically Align Source Images" gave better results than step 3, then Ctrl-Z (or Edit -> Undo Auto-Align Layers).
4b) optional. manually Free-Transform/Warp/Move individual layers to improve alignment.
5) Edit -> Auto-Blend Layers. Choose Panorama option. Check [x] Seamless Tones and Colors.
I do not know how closely Photomerge follows the Auto-Align, Auto-Blend, or Stack:Auto-Align routines
..but PS handles these routines (individually) fine.
Photomerge seems to run through its routines fine and crash only at the last moment before revealing the outcome.
I do not know a way to see what process causes the crash, or know why the file/process closes without saving any steps, or issuing an error message.
Thanks! Now I finally found a work-around to an extremely annoying bug in PS CS4! Thank you!
|
OPCFW_CODE
|
In keeping with the effective communication strategies theme, today's post is going to be about analogies and why they can be essential for getting your ideas across.
The underlying concept is very simple. To introduce your audience to a new idea, you approach them with a preexisting idea that they are already well familiar with. Psychologists call these preexisting, known concepts "schemas." Tapping into these schemas allows you to build towards a complicated and foreign concept by using familiar and simpler ideas. For instance, you might not be familiar with how a microphone works, but if an engineer broke down all the individual parts and compared them to objects you were familiar with, you could slowly connect all the smaller and less complex pieces together and start understanding how everything comes together in the end to build a functioning microphone.
This is where analogies come into play. "Analogies derive their power from schemas and make it possible to understand a complex message because they invoke concepts that you already know." Something easy to think about is substituted for something difficult and foreign. When you think about it, this is a "no duh!" approach for explaining something new and potentially complicated. But, if conscious effort and attention isn't paid to how messages are tailored, the final message can come out very convoluted, technical, academic, complicated, confusing, and foreign. Analogies are essential for avoiding this disappointing result.
The best way to drive the point home is through breaking down an example that I believe properly uses schemas and metaphors. I came upon this picture on a friend's profile on Facebook.
Let's consider what this graphic is doing. Issues related to the US federal budget deal with hundreds of millions, billions, and even trillions of dollars. The vast majority of individuals will NEVER personally deal with such amounts of money or work in positions that will give them direct experience with handling such vast levels of resources. We simply do not understand the scope of such numbers and we really have no idea where to even start.
This example attacks this problem by comparing the federal budget to concepts that we are much more likely to be familiar with. For instance, most of us can understand numbers that aren't in the millions, billions, and trillions. Most of us are also probably familiar with balancing a budget, paying off debt, and using credit cards. By tapping into these preexisting schemas, the picture makes a poignant analogy by comparing the national budget, borrowing, and debt, to a household budget, personal credit cards, and personal debt. With the help of this analogy, the audience is hopefully less confused about the scale of the problem and the relative importance of the solutions being outlined.
However, we have to be careful when using analogies because we can "dumb down" an issue and make false comparisons. There are NUMEROUS differences between a national budget and a personal budget. The issue is obviously much more complicated than this simple graphic depicts. But, by starting with simpler terms and concepts we are familiar with, we can hopefully trek towards greater complexity. Analogies are essential for starting this journey towards deeper understanding. When utilizing analogies, the ultimate goal is steadily increasing the level of complexity, not dumbing concepts down and leaving them in such a state.
The next time you are having difficulties communicating a novel idea or concept, try searching for schemas your audience will already be familiar with.
|
OPCFW_CODE
|
Introduction to Kenbak-1 Indexed Addressing
Kenbak-1 Indexed Addressing allows you to have a variable address. Typically, we use this to move through a series of memory locations to get or store data. In this example, we’ll create some crossing lights. In other words, the lights will start at both ends of the 8 bit display, and move toward the opposite end of the display. Effectively, they will “cross” in the middle. You can enter any bit pattern to display though. We are effectively creating a sequencer.
There are several ways to make this happen, but in this post, we’ll concentrate on how to use indexed addressing.
In this case, we’ll use addresses 107 to 100 to store our bit pattern. Our program will start in memory cell #4. Just keep in mind that to keep things as simple as possible, we will enter our bit pattern backwards. In other words, cell 100 will be the last bit pattern, and memory cell 107 will be the first.
Before we begin, please print out some Kenbak-1 worksheets. This will help you to understand how we develop the instructions that we will enter.
Let’s start off with some simple housekeeping. Keep in mind that addresses 000 to 002 are the registers. Also, remember that memory cell 003 is the program counter. We don’t really need to do much with the registers because our program will initialize those. However, we do need to start the program counter at 004. That way, when we run our program, the Kenbak-1 will start executing code from that memory cell.
Memory cells 004 to 011 simply initialize our registers. Look at the programming worksheet, and compare this to memory cell #4. For the first octal digit, we specify 0 for the “A” Register. The second octal digit is 2, which is a load instruction. Next, we want to load an immediate value, so the last octal digit is 3. Therefore, in memory cell 004, we have “023”. Memory cell 005 contains the immediate value that we want to load to the “A” Register. We simply repeat this for the B register, “123”. Additionally, we need to start the X register at a value of 7. I’m doing it this way because with the Kenbak-1’s instruction set, it’s easier to count down toward 0 than to count up to an arbitrary value.
Send Data to the Display
Now that our registers have initial values, we can start to send our bit pattern to the display. As I said before, we will later store our bit patterns in cells 107 to 100. Recall that we have our X register set to 007. This is our index register.
When we use indexed addressing to load the A register, we have a variable address. We specify a base memory location in the instruction, and the Kenbak-1 adds the value of the X register to that location. In other words, if we specify address 100 for the instruction, and the X register has the value of 7, then we will get data from memory cell 107.
Let’s give this a try. In memory cell 012, load the accumulator using indexed addressing, starting at cell 100. If you look at our worksheet, the instruction will be 026. 0 for the A register, 2 for a LOAD instruction, and 6 for indexed addressing. Memory cell 013 will have our reference value of 100.
At this point, we need to store the value of the A register to the display. Once again, look at your worksheet. To store the A register to a memory location, we specify 034 for our instruction in cell 014. Our worksheet tells us that the display is at address 200. Therefore, we will enter 200 into memory cell 015.
Decrement the Index Register for Kenbak-1 Indexed Addressing
At this point, we need to decrement the X register. We do this because the cell containing the data we want to send to the display next is in cell 106.
When we look at the worksheet, we can build this command easily. The command is 213, and this goes into cell 016: 2 for the X register, 1 for a subtract, and 3 for an immediate value. In cell 017, we specify how much to subtract from the X register, which is 1.
We are not finished just yet. We need to continue to decrement the X register, and update the display each time. We’ll continue this until X reaches 0. Therefore, we will need a conditional jump. This needs to jump back to cell 012 where we reload the accumulator to display the next byte.
Let’s look a little further down in our worksheet to develop this jump.
We want to base the jump on the value of the X register, so our first digit will be 2. Since this will be a direct jump, our second digit is 4. We want to continue this process while X is above or equal to 0. Therefore, our last digit will be 6. Our command is 246. This will be in cell 020, and the cell we want to jump back to is 012.
Note: Currently, I’m having a little trouble with the simulator, and trying to get that resolved, but a temporary fix is to change the instruction at 020 to “247”. You will just be missing the data at address 100 for now. On the Kenbakuino, though, it seems to work as listed here.
Restart the program
At this point, we are ready to restart our bit pattern. We’ll do an unconditional jump back to cell 004.
We need an unconditional jump, so the first digit will be 3. Since this is a direct jump, our second digit is 4. We don’t really care much about the conditions since this jump is unconditional. We’ll just set the last digit at 4. Therefore our Jump command will be 344, and we jump back to cell #004.
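If you would like to see the whole control flow in one place before keying it in, here is a rough Python model of the program (just an illustration, not Kenbak-1 code; the bit patterns are placeholders for whatever you store in cells 100 to 107):

# Model of the sequencer: load A indexed (026/100), store A to the display
# (034/200), subtract 1 from X (213/001), loop while X >= 0 (246), restart (344).
patterns = [0b10000001, 0b01000010, 0b00100100, 0b00011000,
            0b00011000, 0b00100100, 0b01000010, 0b10000001]
mem = {0o100 + i: p for i, p in enumerate(patterns)}  # octal cells 100-107

for _ in range(3):               # 344: unconditional jump back to cell 004
    x = 7                        # load X with an initial value of 7
    while x >= 0:                # 246: jump back to cell 012 while X >= 0
        a = mem[0o100 + x]       # 026/100: load A indexed -> mem[100 + X]
        print(format(a, "08b"))  # 034/200: store A to the display at 200
        x -= 1                   # 213/001: subtract 1 from X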
Entering Data for Kenbak-1 Indexed Addressing
To set up the data you wish to display, go to address 100. You can start entering display patterns up to address 107. Keep in mind this data will display backward for this example. Therefore 107 will show on your display first, then down to 100.
If you are using the emulator online, you can load this code: Just be sure to set your bit pattern in address 100 to 107.
If you are using a Kenbakuino, you can load this code: Again, don’t forget to set addresses 100 to 107.
For more information on the Kenbak-1, visit the Kenbak-1 Category page!
— Ricky Bryce
|
OPCFW_CODE
|
I am due to begin a course very soon, and part of the content I’ll be teaching is critical listening. One of the ways that I would like to teach this is by having students listen to film clips and spot the errors. I have a couple of examples already, but I was wondering if you could suggest any more.
They might be stereo imaging errors, incorrect sound effects errors or anything else.
Thanks for any help.
I remember watching Public Enemies once and thinking that there was some weird stuff happening in some of the dialog edits. I’ve heard that working on Michael Mann films can be stressful and chaotic, and that certainly can have an effect on an editor’s ability to smooth those things out…especially if the director wants lines cobbled together in a certain way.
“Errors” are difficult to come across, as they are sometimes aesthetic choices, and certainly in modern big-budget productions problems normally get caught before release. Having said that, in the UK in the last couple of years there have been viewer complaints about the sound on a couple of TV programs. Off the top of my head, I think one was the nature programme Planet Earth and the other was Downton Abbey. I’ve not seen either and can’t remember what the issues were, but I’m sure Google News will unearth them.
Having said that, I use some examples from early Sergio Leone westerns (the Dollars Trilogy). At the time it was common practice for all the actors to deliver their lines in their native language. Then lines were overdubbed in post by voice actors for the various different language releases. As a result, the lip-sync often does not match the visuals. I also use some examples from A Fistful of Dynamite (1971), as some of the sound effects stand out.
I also use some examples from the film The Inglorious Bastards (1978) and I’m sure you could find some examples from early Bruce Lee martial arts movies.
I’m not sure what kind of course you’ll be teaching or/and within which context, but as a student I wasn’t interested in what went wrong in other peoples workflows/processes. I learned most what I did ‘wrong’ myself and what was the result. It could either be bad or good on story telling level.. I discovered that there is no one right way to do it.
Critical listening is a bit of an abstract term, but it boils down to three listening modes (according to a lot of people, but Schaeffer and Chion in particular):
- Causal: understanding what you hear (a clock ticking),
- Semantic: conveying its meaning in the context (time is passing/running out),
- Reduced: how it was created/recorded (directional microphone or contact mic?).
Other than that, anything can go wrong when you listen in one of those three listening modes, but that can be the result of a lot of other factors (budget, distribution, format, etc.).
|
OPCFW_CODE
|
My LinkedIn Page
My PhD thesis (1994)
Einstein Index - 1527 (in August 2013) - 1612 (in February 2014) - 1689 (in October 2014) - 1791 (May 2015) - 1895 (Feb 2016) - 2118 (Feb 2017)
Prior to that I worked within the Government of Canada for 17 years, firstly at the National Research Council, Institute for IT, in various groups, as a Research Officer, and then at the Communications Research Centre, as a Research Scientist, in Network Systems and Technologies, Information Security.
My research is focused on bringing human notions to the digital world, primarily information security-related. Specifically, I work with trust, regret, forgiveness, comfort, and more recently, wisdom.
The research links to the right will help you find more information.
You could describe my research interests in a list like this: (I can and do talk about each of these, often in the same talks, and will be posting copies of invited talks as I go)
- Computational Trust
- Computational Distrust
- Computational Mistrust
- Computational Wisdom
- Mature Technologies
- Trust Management
- Information Security
- Device Comfort
- Critical Infrastructure Interdependencies
- Computational Regret
- Computational Forgiveness
- Mobile devices
I like to start things, get people excited about them, and get people to work on them. I like to be in on the early stages of things and enthuse people; I am not that much of a finisher, preferring instead to let others go there while I get excited about other things (we all have our strengths!)
I am currently supervising several students at the Masters and PhD level, but if you are seriously interested in working with me, browse my research pages and see if there's something that excites you, and we'll talk!
The best way to contact me is via email: ac.tiou|hsram.nehpets#ac.tiou|hsram.nehpets
How do I work? With a friend, usually…
That's Jessie, and she's training to be a therapy dog (goes along with the therapy horses we have!)
Something to remember, from Per Olov Enquist: “One day we shall die. But all the other days we shall be alive.”
Building start: July 29th, 2013 - home, navbars
Pages added August include Thoughts, Teaching, Publications…
I don't know, let's see: Humanism, extra thoughts, the iPad adventure, things like that…
Below this line is not under my control
|
OPCFW_CODE
|
There are N students in a class. Some of them are friends, while some are not. Their friendship is transitive in nature. For example, if A is a direct friend of B, and B is a direct friend of C, then A is an indirect friend of C. And we defined a friend circle is a group of students who are direct or indirect friends.
Given a N*N matrix M representing the friend relationship between students in the class. If M[i][j] = 1, then the ith and jth students are direct friends with each other, otherwise not. And you have to output the total number of friend circles among all the students.
You will notice that the input matrix represents a truth table (really, an adjacency matrix): it shows the relationship between different nodes. If we draw each person as a node in a graph, we can use the table to draw a connection between the nodes.
So we need to explore this graph and find all the isolated nodes. Let’s recall the standard DFS algorithm for exploring a graph
def dfs(list_of_vertices):
    visited = set()
    for v in list_of_vertices:
        if v not in visited:
            explore(v, visited)

def explore(node, visited):
    visited.add(node)
    for v in node.neighbors:
        if v not in visited:
            explore(v, visited)
One special thing to note about this algorithm is that the first for loop will account for unconnected items in the list of vertices. So if we had a graph with no nodes connected, it would still explore each node.
So to solve this problem we need to make the following modifications to our base DFS code:
- Add a counter to track all the unconnected networks
- Write a get_neighbors function that returns all the connected nodes, since the input doesn't give us neighbors directly
To get a particular node's neighbors, we just have to examine the row of values from our truth table.
So for example if we have a table
[
  [1, 1, 0, 0, 1],
  [1, 1, 0, 0, 0],
  [0, 0, 1, 1, 0],
  [0, 0, 1, 1, 0],
  [1, 0, 0, 0, 1],
]
We want to look up the neighbors for the first friend, which is at index 0, so we would just iterate through the array at index 0 ->
[1, 1, 0, 0, 1]
The first value in this case is the relationship to itself, but since we are adding the element to the seen list, it will be ignored and not visited again. We can just compare each value and add it to our neighbor list if the value is 1.
Here’s an example of what the recursion tree would look like
So adding our modifications we will have -
def findCircleNum(self, M: List[List[int]]) -> int:
    def get_neighbors(vertex_index):
        return (index for index, val in enumerate(M[vertex_index]) if val == 1)

    def explore(vertex_index):
        seen.add(vertex_index)
        for new_vertex_index in get_neighbors(vertex_index):
            if new_vertex_index not in seen:
                explore(new_vertex_index)

    seen = set()
    cluster_count = 0
    for vertex_index, vertex in enumerate(M):
        if vertex_index not in seen:
            cluster_count += 1
            explore(vertex_index)
    return cluster_count
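As a quick sanity check (assuming the method above lives on the usual LeetCode-style Solution class, with from typing import List in scope):

M = [
    [1, 1, 0],
    [1, 1, 0],
    [0, 0, 1],
]
print(Solution().findCircleNum(M))  # 2: friends {0, 1} form one circle, {2} another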
Our runtime will be O(N²): even with the seen set, we scan a full row of the N*N matrix for every vertex we explore.
|
OPCFW_CODE
|
/**
* This class represents the selection table by wrapping the dataTable
* plugin.
*/
function selectionTable() {
var table;
var data = [];
var samples = [];
var forms = [];
var components = [];
var percentages = [];
/**
* Removes the table from the viewport.
*/
this.destroy = function() {
if (table) {
table.fnClearTable(true);
table.fnDestroy(true);
}
}
/**
* Gets the user's inputs. Do not call if update hasn't been called previously.
*
*/
this.get_inputs = function() {
// Make sure percentages add up to 100
var sum = 0;
for (var i = 0; i < percentages.length; i++)
if (percentages[i]) sum += percentages[i];
if (sum != 100) {
new modal('<h2>Whoops!</h2><br><p>Your percentages do not add up to 100%.</p>', ['Ok'], [], [true]);
return;
}
// Package all of the input into a data structure
var inputs = {
comps: components,
pcts: percentages
}
console.log(inputs);
return inputs;
}
/**
* Updates the selection table and its data based on the json received
* from the controllers and models.
*/
this.update = function(json) {
// Update the matlab names and clear data
samples = json.samples;
data = [];
// Clear the selection table
if (table) {
table.fnClearTable(true);
return;
}
// Create the selection table
table = $('#selected-compounds').dataTable({
'ordering': true,
'paging': false,
'searching':false,
'dom': '<"top">rt<"bottom"><"clear">',
'columns': [
{'title': 'Name'},
{'title': 'Boiling Point (K)'},
{'title': 'Molecular Weight (g)'},
{'title': 'Percentage (%)'}
]
});
}
/**
* Adds the data to the selection table based on the user's choices
* in the sample table.
*/
this.add = function(index) {
// Format the new data to be added
var new_data = viewport.get_sample_table().get_data()[index];
new_data[3] = '<form><input type="number" id="pct' + index + '" value="0" min="0" ' +
'max="100" step="5" readonly></form>';
// Add new data component to components list
for (var j = 0; j < samples.length; j++) {
if (new_data[0] == samples[j].name) {
components[index] = samples[j].matname;
break;
}
}
// Add new data to percentages list
percentages[index] = 0;
// Add it to the data array
data[data.length] = new_data;
// Put the data into the table
table.fnClearTable(true);
for (var i = 0; i < data.length; i++) {
table.fnAddData(data[i], true);
var beg = data[i][3].lastIndexOf('pct') + 3;
var end = data[i][3].lastIndexOf('" val');
var num = data[i][3].substring(beg, end);
// Create the form
if (percentages[num] !== null)
forms[num] = new incrementbox(num, percentages[num]);
else forms[num] = new incrementbox(index, 0);
}
}
/**
* Removes the data from the selection table based on the user's choices
* in the sample table.
*/
this.remove = function(index) {
// Store the data to be removed
var new_data = viewport.get_sample_table().get_data()[index];
// Remove the data from the array, component list, and percentages list
for (var i = 0; i < data.length; i++) {
if (data[i][0] == new_data[0]) {
data.splice(i, 1);
break;
}
}
components[index] = null;
percentages[index] = null;
forms[index] = new incrementbox(index, 0);
// Put the array into the table
table.fnClearTable(true);
for (var i = 0; i < data.length; i++) {
table.fnAddData(data[i], true);
var beg = data[i][3].lastIndexOf('pct') + 3;
var end = data[i][3].lastIndexOf('" val');
var num = data[i][3].substring(beg, end);
// Create the form
if (percentages[num] !== null)
forms[num] = new incrementbox(num, percentages[num]);
else forms[num] = new incrementbox(index, 0);
}
}
/**
* Reset the selection arrays.
*/
this.reset_selections = function() {
components = [];
percentages = [];
forms = [];
}
/**
* Update a percentage form.
*/
this.update_form = function(index, value) {
percentages[index] = value;
}
/**
* Access the forms of the selection table.
*/
this.get_forms = function() {
return forms;
}
/**
* Access the data of the selection table.
*/
this.get_data = function() {
return data;
}
}
|
STACK_EDU
|
I'll test what happens when I plug my embedded NTP client directly into my NTP servers, bypassing the Ethernet switch. This ends my long running embedded NTP client series.
The systems involved
NTP server APU2: Part 1
First, I connect the Archmax directly to the secondary port on the APU2. I wanted to see if there was any difference between the primary and secondary ports.
This combination resulted in a 180ns offset and 18.5us RTT. The ideal RTT is 15.0us, so there must be some extra delay in the system. Removing the Ethernet switch removed 3.7us (it was 22.2us RTT).
210ns offset and an RTT of 18.5us. Close enough to be well within the margin of error.
Next, I connect the Archmax directly to the STM32MP1 board.
This resulted in an offset of -479ns and an RTT of 16.7us. Removing the Ethernet switch removed 5.4us (it was 22.1us RTT). But the offset with the switch and without the switch stayed pretty similar (-479ns vs -430ns).
This NTP server having a small offset change compared to the APU2's is probably a result of its Intel NIC driver. Below is a snippet of the NIC driver code.
The Intel driver adjusts for latency in the RX and TX paths. It has different values for the different link speeds. The latency adjust is added to the TX timestamps and subtracted from the RX timestamps. Both modifications result in a lower RTT, but the asymmetry changes the offset. It's interesting the Intel NIC ended up having a higher RTT even with the -3237ns adjustment at 100M.
Lastly, I connected the two NTP servers directly together. If everything is consistent, the two NTP servers should have an offset of 479ns+180ns = 659ns at 100M.
The 693ns result is close enough to 659ns to be consistent with all the other results, and there's an RTT of 19.7us.
Because the offset is so low at 1G, this offset looks like something specific to 100M mode. This could be in the PHY (such as buffering RX samples for filtering).
100M vs 1G
I was hoping I'd be able to compare a 100M direct connection vs a 1G direct connection, but I got weird results.
Ideally the RTT should be 1.5us for this test. The NTP response direction is at -1.4us (the blue line), which would be reasonable for a 3us RTT. But the request is being delayed over 19us (the green line almost off the graph). The total RTT was 21us, which is oddly higher than the 100M RTT! I suspect energy efficient ethernet, but using ethtool to disable it on both sides did not affect the RTT or offset. I also tried disabling pause frames, but that didn't change the results either.
Which one of these clocks is most correct? To properly tell, I would need to measure the actual Ethernet signal itself against the timestamps. Because I don't have the equipment to do that, I'll pick one of the two clocks closest to each other and call it "correct" instead (with a +/- 500ns margin of error). To eliminate any error when using this NTP server, you would want to take an NTP client and have its NIC output a PPS to compare against the NTP server's PPS. That way, you can have confidence the NIC's time doesn't have a hidden offset.
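For reference, all the offsets and RTTs quoted here come from the standard NTP on-wire calculation (RFC 5905) over the four timestamps of a request/response exchange; a minimal sketch:

# t1 = client transmit, t2 = server receive, t3 = server transmit, t4 = client receive
def ntp_offset_and_rtt(t1, t2, t3, t4):
    offset = ((t2 - t1) + (t3 - t4)) / 2  # positive means the client clock is behind
    rtt = (t4 - t1) - (t3 - t2)           # round trip minus server processing time
    return offset, rtt

# A symmetric path with an 18.5us RTT and no clock error gives a zero offset:
print(ntp_offset_and_rtt(0.0, 9.25e-6, 9.25e-6, 18.5e-6))  # (0.0, 1.85e-05)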
|
OPCFW_CODE
|
"""Helper Functions"""
import json
import os
import sys
import re
import time
RATE_ENDPOINT = "https://openexchangerates.org/api/latest.json?show_alternative=1&app_id={}"
CURRENCY_ENDPOINT = "https://openexchangerates.org/api/currencies.json?show_alternative=1"
def byteify(loaded_dict):
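    """Recursively encode unicode strings in decoded JSON as UTF-8 bytes (Python 2 only)."""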
if isinstance(loaded_dict, dict):
return {byteify(key): byteify(value)
for key, value in loaded_dict.iteritems()}
if isinstance(loaded_dict, list):
return [byteify(element) for element in loaded_dict]
if isinstance(loaded_dict, unicode):
return loaded_dict.encode("utf-8")
return loaded_dict
def is_it_currency(query):
"""Check if query is a valid currency"""
currencies = load_currencies()
query = query.upper()
if query in currencies:
return query
return None
def is_it_float(query):
"""Check if query is a valid number"""
try:
return float(query)
except ValueError:
return None
def is_it_something_mixed(query):
"""Check if query is Mixed with value and currency"""
match_result = re.match(r"^(\d*(\.\d+)?)([A-Z_]*)$", query.upper())
if match_result:
value = is_it_float(match_result.groups()[0])
currency = is_it_currency(match_result.groups()[2])
        # value may legitimately be 0, so test against None rather than truthiness
        if value is not None and currency:
return (value, currency)
return None
def load_currencies(path="currencies.json"):
"""Load currencies, create one if not exists"""
if not os.path.exists(path):
return refresh_currencies(path)
with open(path) as file:
if sys.version_info.major == 2:
currencies = byteify(json.load(file, "utf-8"))
elif sys.version_info.major == 3:
currencies = json.load(file)
else:
raise RuntimeError("Unexpected Python Version")
return currencies
def refresh_currencies(path="currencies.json"):
"""Fetch the newest currency list"""
if sys.version_info.major == 2:
import urllib2
response = urllib2.urlopen(CURRENCY_ENDPOINT)
currencies = byteify(json.load(response, "utf-8"))
elif sys.version_info.major == 3:
import urllib.request
response = urllib.request.urlopen(CURRENCY_ENDPOINT)
currencies = json.load(response)
else:
raise RuntimeError("Unexpected Python Version")
with open(path, "w+") as file:
json.dump(currencies, file)
return currencies
def load_rates(path="rates.json"):
"""Load rates, update if not exist or too-old"""
from .config import Config
config = Config()
if not os.path.exists(path):
return refresh_rates(path)
    with open(path) as file:
        if sys.version_info.major == 2:
            rates = byteify(json.load(file, "utf-8"))
        else:
            # json.load takes no encoding argument on Python 3
            rates = json.load(file)
last_update = int(time.time() - os.path.getmtime(path))
if config.expire < last_update:
return refresh_rates(path)
# inject rates file modification datetime
rates["rates"]["last_update"] = "{} seconds ago".format(last_update)
return rates["rates"]
def refresh_rates(path="rates.json"):
"""Update rates with API"""
from .config import Config
config = Config()
if sys.version_info.major == 2:
import urllib2
try:
response = urllib2.urlopen(RATE_ENDPOINT.format(config.app_id))
except urllib2.HTTPError:
raise EnvironmentError(
"Invalid App ID: {}".format(config.app_id),
"Fix this in workflow environment variables sheet in Alfred Preferences")
rates = byteify(json.load(response, "utf-8"))
elif sys.version_info.major == 3:
import urllib.request
from urllib.error import HTTPError
try:
response = urllib.request.urlopen(RATE_ENDPOINT.format(config.app_id))
except HTTPError:
raise EnvironmentError(
"Invalid App ID: {}".format(config.app_id),
"Fix this in workflow environment variables sheet in Alfred Preferences")
rates = json.load(response)
else:
raise RuntimeError("Unexpected Python Version")
with open(path, "w+") as file:
json.dump(rates, file)
rates["rates"]["last_update"] = "Now"
return rates["rates"]
def calculate(value, from_currency, to_currency, rates):
"""The Main Calculation of Conversion"""
from .config import Config
config = Config()
return round(
value * (rates[to_currency] / rates[from_currency]), config.precision
)
def generate_items(query, raw_items, favorite_filter=None, sort=False):
currencies = load_currencies()
items = []
for abbreviation in raw_items:
if currencies_filter(query, abbreviation, currencies[abbreviation], favorite_filter):
items.append({
"title": currencies[abbreviation],
"subtitle": abbreviation,
"icon": "flags/{}.png".format(abbreviation),
"valid": True,
"arg": abbreviation
})
if sort:
items = sorted(items, key=lambda item: item["subtitle"])
return items
def currencies_filter(query, abbreviation, currency, favorite=None):
"""Return true if query satisfy certain criterias"""
favorite = favorite or []
if abbreviation in favorite:
return False
if not query:
return True
if abbreviation.startswith(query.upper()):
return True
for key_word in currency.split():
if key_word.lower().startswith(query.lower()):
return True
return False
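# Example usage (a sketch with assumed values; requires a valid app_id in
# Config, and note that openexchangerates.org quotes all rates against USD):
#
#   rates = load_rates()
#   print(calculate(100, "USD", "EUR", rates))  # converted amount
#   print(is_it_something_mixed("25.5EUR"))     # (25.5, 'EUR')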
|
STACK_EDU
|
What It Does
Into the solidity of stone, a solidity yet capable of suffused light, the fantasies of bodily vigor, of energy in every form, can be projected, set out and made permanent.
—Adrian Stokes, Stones of Rimini
The stone symbolizes physicality, concretization, manifestation, and sometimes completion or perfection: “virtus eius integra est si versa fuerit in terram.” The living stone is a perfect example of animism; not merely juxtaposing inert matter with vibrant and intelligent activity, but transcending their contrast. The golem is dead matter brought to life by breath and word—like the electricity and code that animates robots. The Philosopher’s Stone is the perfection of the alchemical Magnum Opus, being the utmost condensation of the supreme agent of transmutation (viz., AZOTH). &c.
GlowStone is a special kind of lamp that anchors a technomancer’s power and helps her to recall it whenever needed. It involves a selenite crystal tower illumed by an RGB LED controlled by Arduino, and audio instructions that lead the technomancer through a Meta-Magick exercise coordinated with the stone’s illumination.1
How It Works
GlowStone was written for the s2aio extension that adds blocks for interacting with Arduino. The extension should work on Windows, Mac, and Linux including Raspberry Pi’s Raspbian. If you download and open the GlowStone project file in the editor without s2aio running on your computer, the project will not run correctly.
A Little LED Theory
The most basic law of electronics, Ohm’s law, mathematically expresses the relationships between electric current, voltage, and resistance. Imagine that electricity moving through a circuit is like water flowing through a pipe. Current (I) is the flow of electricity through the circuit; it can be a little (trickle) or a lot (gush), and it can go slowly or rapidly. Voltage (V) is the force pushing the current along. Resistance (R) is anything that resists the flow, including the “pipe” (circuit wire) itself. Ohm’s law says that current equals voltage divided by resistance (I = V / R); voltage equals current times resistance (V = IR); and resistance equals voltage divided by current (R = V / I). In other words, current is proportional to voltage and inversely proportional to resistance. You can increase current by increasing voltage or decreasing resistance, and you can decrease current by decreasing voltage or increasing resistance. Sorted? Good! If not, fret not, for SparkFun has a proper tutorial on this subject.
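To make that concrete, here is a quick back-of-envelope check (the forward voltage and target current below are assumed, typical values, not measurements from this project):

# Sizing the current-limiting resistor for one LED channel with Ohm's law.
V_SUPPLY = 5.0    # volts from an Arduino pin
V_FORWARD = 2.0   # assumed forward voltage drop across the LED
I_TARGET = 0.011  # assumed target current of about 11 mA

# The resistor sees the supply voltage minus the LED's drop, so R = V / I:
R = (V_SUPPLY - V_FORWARD) / I_TARGET
print(round(R), "ohms")  # ~273 ohms, close to the 270-ohm parts listed below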
A diode is an electronic component that passes current in only one direction. A light-emitting diode (LED) generates light in response to the current it passes. Regular LEDs have two leads (the wires protruding from the housing): an anode that connects to a positive voltage source and moves electricity into the LED, and a cathode that connects to ground and moves electricity out of the LED. The housing typically has a flat edge on the cathode side, and the anode lead is typically longer or bent, to help you identify which side is which.
A resistor (so called because it resists the current flow) is usually needed in series with the LED to limit the amount of current the LED passes. Unlike LEDs, which are polarized and need to be oriented a certain way within a circuit, resistors work the same whichever direction they are connected in.
Now, we can mix red, green, and blue light to produce a large range of visible colors. An RGB LED is a red, a blue, and a green LED in a single housing with four leads: one for each color and a common cathode or common anode (the longest lead identifies which is common).
GlowStone uses a common cathode LED that is connected to a ground header on the Arduino board while the red, green, and blue leads are connected to PWM headers via resistors.
Pulse width modulation allows us to vary the amount of voltage supplied to each of the three colored LEDs, letting us control the brightness of each color. The range of 0 to 5 volts supplied to each colored LED is represented by a PWM range of 0 to 255, so we can use the well-known web color space to find and specify which color we want the LED to glow. E.g., the RGB decimal triplet for the color deep pink is (255, 20, 147), so by setting the Arduino pins for the red, green, and blue LEDs to values 255, 20, and 147, respectively, the LED will glow a deep pink color (more or less; variations in LED brightness and resistor values can cause an LED connected to Arduino to glow not quite the same color you see on your computer screen).
Dais Construction & Wiring
I made a small dais to conceal the Arduino and LED beneath the crystal tower, prepared much like the accumulator I made for SIBOR. Here is a parts list:
- Arduino Uno R3 w/ USB cable
- 5mm diffused common cathode RGB LED
- three 270Ω resistors (band colors: red, violet, brown; higher values may be used but will dim the light)2
- one 4″ × 4″ × 1-1/2″ wood panel
- four decorative feet w/ screws
I cut some cardboard strips and hot-glued them together to make a shim, and hot-glued the Arduino to that (I like hot glue for this kind of thing because it easy to remove with some rubbing alcohol or solvent such as Bestine).
A glob of hot glue secures the LED inside the 5mm hole drilled into the panel’s center.
Using round-nose pliers I made small loops in one end of each resistor, to place around the LED’s leads. I did the same with a bit of 22-gauge wire for the cathode-to-ground connection. A dab of Bare conductive paint on each connection secures the wires and helps make an electrical contact (typically I would use solder for this).
After you have followed the instructions on the s2aio website to install and configure the extension and your Arduino, and made something to hold the electronics beneath your crystal tower, connect Arduino to a USB port on your computer and then run s2aio in no-client mode (which prevents s2aio from opening a new instance of the Scratch editor) by inputting
s2aio -c no-client at a command prompt.
With s2aio running, open the GlowStone.sb2 file in the Scratch offline editor, and start the project by clicking the green flag. A voice will talk you through the exercise, which includes a test to verify the RGB LED is connected and working correctly.
You can specify your own color of magic (or whichever color you associate with technomancy in particular) for the stone to glow during the exercise. The variables magicRed, magicGreen, and magicBlue record the decimal triplet for your desired color. E.g., if your color of magic is cyan, which has an RGB triplet of (0, 255, 255), you would set magicRed = 0, magicGreen = 255, and magicBlue = 255 before running the project.
The variables redPin, greenPin, and bluePin identify which Arduino pins are connected to the red, green, and blue leads of the RGB LED. If you need to change a pin, e.g., swap pin 6 for 9 as your blue pin, you can just set bluePin to 6 at the beginning of the initial script, instead of having to change it everywhere that pin is written to throughout the project.
To fade the LED color in and out requires successively increasing or decreasing the PWM values written to the R, G, and B pins. Recall the PWM value has a range of 0 to 255, with 0 being the dimmest (essentially off), and 255 being the brightest. The only color that has all three pins set to 255 is white; all other colors require one or more of the LEDs to be dimmer than 255. For the color deep pink, which has an RGB triplet of (255, 20, 147), the range for the red pin to fade in is 0 to 255, but the range for the green pin is only 0 to 20, and for the blue pin it is 0 to 147. To proportionately adjust all three pins at the same time, we can map all of them to the maximum possible range, 0 to 255, so that it takes the same length of time for the red pin to go from 0 to 255 as it does for the green pin to go from 0 to 20, and the blue pin from 0 to 147.
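Here is the same proportional mapping as a small Python sketch (just an illustration; the actual project does this with Scratch blocks):

# Fade toward a target RGB triplet, e.g. deep pink (255, 20, 147).
TARGET = (255, 20, 147)

def fade_level(step, steps=255):
    # Map one shared 0..steps counter onto each channel's own 0..target range,
    # so all three channels reach their targets at the same moment.
    return tuple(round(c * step / steps) for c in TARGET)

for step in (0, 64, 128, 192, 255):
    print(step, fade_level(step))  # hue stays constant while brightness rises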
To do the same thing for a common anode LED, invert each value you write (255 minus the value), since a common anode LED grows brighter as the pin voltage drops.
Make It Better
- Paint arcane glyphs onto the dais.
- Add a switch to turn the lamp on and off.
- Reprogram GlowStone to automatically turn on or change colors at particular times as reminders of tasks you need to perform.
- Reprogram GlowStone for divination by assigning meanings to colors or blinks. You could add multiple LEDs and stones to the project and make them glow different colors or in different patterns.
- Replace the spoken exercise with one of your own recording. If you have a microphone embedded in or attached to your computer, you can record your voice right in Scratch.
- Experiment with different interactions. E.g., project your anger into the stone as the color red, have the stone morph to a cool blue, and then absorb that blue color into your body—how does that make you feel?
- Replace the Arduino Uno with an Arduino Pro Mini, Arduino Nano, or other compatible board to decrease the hardware footprint.
- Go bigger! Replace the single RGB LED with three LEDs (one red, one green, and one blue) to illume a larger stone, or try this project with a light bulb or string of lamps using an Arduino-controlled dimmer made with a ZeroCross Tail.
- The GlowStone Meta-Magick exercise combines parts of “Exercise 2.3 Energy Flow Modeling” and “Exercise 3.1 Simple Evocations” from Philip H. Farber, Brain Magick: Exercises in Meta-Magick and Invocation (Llewellyn, 2011).
- A little more LED theory follows. The precise value of a resistor needed in series with an LED depends on the circuit’s current (measured in amperes), the voltage (in volts) supplied to the LED, and the LED’s “forward voltage” or “voltage drop,” which is usually specified by the LED’s manufacturer. The individual red, green, and blue LEDs within a single RGB LED may have different forward voltages and so optimally require different resistor values. However, using an approximate value is usually alright, at least in the short term. If you want to know more, SparkFun also has an LED tutorial.
|
OPCFW_CODE
|
[OpenStack Beginner’s Guide for Ubuntu 11.04] Storage Management
Nova-volume provides persistent block storage compatible with Amazon’s Elastic Block Store. The storage on the instances is non persistent in nature and hence any data that you generate and store on the file system on the first disk of the instance gets lost when the instance is terminated. You will need to use persistent volumes provided by nova-volume if you want any data generated during the life of the instance to persist after the instance is terminated.
Commands from euca2ools package can be used to manage these volumes.
Here are a few examples:
Interacting with Storage Controller
Make sure that you have sourced novarc before running any of the following commands. The following commands refer to a zone called ‘nova’, which we created in the chapter on “Installation and Configuration”. The project is ‘proj’ as referred to in the other chapters.
Create a 10 GB volume
euca-create-volume -s 10 -z nova
You should see an output like:
VOLUME vol-00000002 1 creating (proj, None, None, None) 2011-04-21T07:19:52Z
List the volumes
euca-describe-volumes
You should see an output like this:
VOLUME vol-00000001 1 nova available (proj, server1, None, None) 2011-04-21T05:11:22Z
VOLUME vol-00000002 1 nova available (proj, server1, None, None) 2011-04-21T07:19:52Z
Attach a volume to a running instance
euca-attach-volume -i i-00000009 -d /dev/vdb vol-00000002
A volume can only be attached to one instance at a time. When euca-describe-volumes shows the status of a volume as ‘available’, it means it is not attached to any instance and ready to be used. If you run euca-describe-volumes, you can see that the status changes from “available” to “in-use” if it is attached to an instance successfully.
When a volume is attached to an instance, it shows up as an additional SCSI disk on the instance. You can login to the instance and mount the disk, format it and use it.
Detach a volume from an instance.
euca-detach-volume vol-00000002
The data on the volume persists even after the volume is detached from an instance. You can see the data on reattaching the volume to another instance.
Even though you have indicated /dev/vdb as the device on the instance, the actual device name created by the OS running inside the instance may differ. You can find the name of the device by looking at the device nodes in /dev or by watching the syslog when the volume is being attached.
|
OPCFW_CODE
|
LocalCluster only works with processes=True
According to the manual dask distributed should start and run a localcluster by simply issuing
c=Client()
or c = Client(LocalCluster())
This results in a crash with the following error for each process. When adding the argument processes=False it runs fine, but then it seems to only be using 1 CPU core even when setting n_workers, maybe because of Python's limited thread support.
Exception in thread AsyncProcess ForkServerProcess-8 watch message queue:
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/distributed/process.py", line 36, in _call_and_set_future
res = func(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/distributed/process.py", line 185, in _start
process.start()
File "/usr/lib/python3.5/multiprocessing/process.py", line 105, in start
self._popen = self._Popen(self)
File "/usr/lib/python3.5/multiprocessing/context.py", line 281, in _Popen
return Popen(process_obj)
File "/usr/lib/python3.5/multiprocessing/popen_forkserver.py", line 36, in __init__
super().__init__(process_obj)
File "/usr/lib/python3.5/multiprocessing/popen_fork.py", line 20, in __init__
self._launch(process_obj)
File "/usr/lib/python3.5/multiprocessing/popen_forkserver.py", line 43, in _launch
prep_data = spawn.get_preparation_data(process_obj._name)
File "/usr/lib/python3.5/multiprocessing/spawn.py", line 144, in get_preparation_data
_check_not_importing_main()
File "/usr/lib/python3.5/multiprocessing/spawn.py", line 137, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
Thanks for the issue. I am unable to reproduce:
In [1]: from dask.distributed import Client
In [2]: Client()
Out[2]: <Client: scheduler='tcp://<IP_ADDRESS>:53413' processes=8 cores=8>
It would be helpful to know:
What version of dask?
What version of distributed?
What operating system?
Are you creating a Client in a script? In IPython?
The issue title says
LocalCluster only works with processes=True
did you mean processes=False?
when adding the argument processes=False it runs fine, but then it seems to only be using 1 cpu core even when setting n_workers, because of python no thread support maybe.
With processes=False workers are threads instead of processes. This is fine for some workloads which release the interpreter lock (much of numpy, some of pandas), but not everything.
I'm seeing the same thing. If I run python console and enter:
>>> from dask.distributed import Client
>>> Client()
It works fine and I can proceed to use Dask. However if I put those lines and the start of a py file and run it with python myprogram.py, I get the OP's errors.
Ah you need this:
if __name__ == '__main__':
from dask.distributed import Client
Client()
Thanks for posting the issue and the solution @sirpy and @neverfox . Hopefully this helps others who come here in the future.
@neverfox This solved my problem and I never would've thought to do this. So, thank you! Can you provide an explanation as to why this is needed as it isn't obvious?
Ah you need this:
if __name__ == '__main__':
from dask.distributed import Client
Client()
See https://stackoverflow.com/a/20222706/1667287, same thing applies here. When processes are started in the absence of fork, they'll try to import the script that launched them, which will cause failures if the creation of Client isn't shielded in the if block.
Is it also documented somewhere on dask's side, or is the snippet by @neverfox considered common knowledge?
We still get questions / reports about it, so it's not common enough knowledge. @jcrist looked into giving a better error message a while back, but IIRC there isn't really a good place for us to actually handle it.
If you're creating a LocalCluster (or just calling Client) in a script, then yes.
Or any time you create a process in Python from a script.
@mrocklin so when is it not needed? Isn't it that every processing start (at some point) with creation?
If your Python script creates no additional processes then you don't need it.
@mrocklin I'm sorry, but I don't understand. Can you give an example where "no additional processes" are used?
I think Matt was noting that this is a general Python problem, not just Dask. Any python program that creates processes at the top-level will have this issue.
Specifically for Dask, the processes are probably created with a LocalCluster (which is implicitly created by Client() when processes=True (the default).
Since dask is such a heavy abstraction on top of so many things, I guess it is understandable that a dask user won't be familiar with this aspect. I'd vote for mentioning this in the docs. In the meanwhile, can you point to some related documentation?
Please forgive me for going somewhat off-topic. If I understand it correctly, trying to use
#job.py
from multiprocessing import Process
def foo():
print('hello')
p = Process(target=foo)
p.start()
would yield an error because the same code will be executed independently by all "instances" on the different processes. This will lead to a conflict (whose details I don't fully understand). However, by protecting the process creation step:
# job_protected.py
from multiprocessing import Process, freeze_support, set_start_method
def foo():
print('hello')
if __name__ == '__main__':
freeze_support()
set_start_method('spawn')
p = Process(target=foo)
p.start()
Only the initial invocation of the job creates the process, and the copies are not doing that anymore because they are not run as __main__ anymore. Is that the right understanding/intuition?
I'm seeing the same thing. If I run python console and enter:
>>> from dask.distributed import Client
>>> Client()
it works fine and I can proceed to use Dask. However if I put those lines at the start of a .py file and run it with python myprogram.py, I get the OP's errors.
dask[complete] 0.19.2, Python 3.6.6, OS X 10.13.6
I have the same issue. When I first installed Dask, it worked just fine. But a few days later, it started showing the issue. Strangely, it works fine in the Python console, as mentioned.
My environment: macOS 10.15.1, Dask 2.9.0, Python 3.7.2
@haje01 have you read through https://docs.python.org/3/library/multiprocessing.html#multiprocessing-programming? I don't think there's much Dask (or any other Python program) can do about this, beyond perhaps giving a better warning.
@TomAugspurger I'm sorry. I did not understand what Client() is actually doing. Without processes=False, it instantly spawns new processes. That makes sense.
Thank you.
|
GITHUB_ARCHIVE
|
In this article
The destructuring assignment syntax is a JavaScript expression that makes it possible to unpack values from arrays, or properties from objects, into distinct variables.
The source for this interactive example is stored in a GitHub repository. If you'd like to contribute to the interactive examples project, please clone the repository and send us a pull request.
The object and array literal expressions provide an easy way to create ad hoc packages of data.
The destructuring assignment uses similar syntax, but on the left-hand side of the assignment, to define which values to unpack from the sourced variable.
This capability is similar to features present in other languages such as Perl and Python.
Basic variable assignment
Assignment separate from declaration
A variable can be assigned its value via destructuring, separate from the variable's declaration.
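For example (a minimal illustration; the variable names are arbitrary):

const foo = ['one', 'two', 'three'];
const [red, yellow, green] = foo;
console.log(red); // "one"

let a, b;
[a, b] = [1, 2]; // assignment separate from declaration
console.log(a); // 1
console.log(b); // 2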
Default values
A variable can be assigned a default, for the case that the value unpacked from the array is undefined.
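For instance:

let a, b;
[a = 5, b = 7] = [1];
console.log(a); // 1
console.log(b); // 7 (the default kicks in because nothing was unpacked)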
Two variables' values can be swapped in one destructuring expression.
Without destructuring assignment, swapping two values requires a temporary variable (or, in some low-level languages, the XOR-swap trick).
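For example:

let a = 1;
let b = 3;
[a, b] = [b, a];
console.log(a); // 3
console.log(b); // 1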
Parsing an array returned from a function
It has always been possible to return an array from a function. Destructuring can make working with an array return value more concise.
In this example, f() returns the values [1, 2] as its output, which can be parsed in a single line with destructuring.
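A minimal sketch:

function f() {
  return [1, 2];
}
const [a, b] = f();
console.log(a); // 1
console.log(b); // 2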
Ignoring some returned values
You can ignore return values that you're not interested in:
You can also ignore all returned values:
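Both cases in one short sketch:

function f() {
  return [1, 2, 3];
}
const [a, , b] = f(); // the second value is skipped
console.log(a); // 1
console.log(b); // 3

[, ,] = f(); // all returned values are ignored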
Assigning the rest of an array to a variable
When destructuring an array, you can unpack and assign the remaining part of it to a variable using the rest pattern:
Be aware that a SyntaxError will be thrown if a trailing comma is used on the left-hand side with a rest element:
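For example:

const [a, ...b] = [1, 2, 3];
console.log(a); // 1
console.log(b); // [2, 3]

// const [c, ...d,] = [1, 2, 3];
// SyntaxError: rest element may not have a trailing comma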
Unpacking values from a regular expression match
When the regular expression exec() method finds a match, it returns an array containing first the entire matched portion of the string, and then the portions of the string that matched each parenthesized group in the regular expression. Destructuring assignment allows you to unpack the parts out of this array easily, ignoring the full match if it is not needed.
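A small example (the URL is illustrative):

function parseProtocol(url) {
  const parsedURL = /^(\w+):\/\/([^\/]+)\/(.*)$/.exec(url);
  if (!parsedURL) {
    return false;
  }
  const [, protocol, fullhost, fullpath] = parsedURL; // the full match is ignored
  return protocol;
}
console.log(parseProtocol('https://developer.mozilla.org/en-US/docs/Web/JavaScript'));
// "https"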
Assignment without declaration
A variable can be assigned its value with destructuring separate from its declaration.
Note: the parentheses ( ... ) around the assignment statement are required when using object literal destructuring assignment without a declaration.
Your ( ... ) expression needs to be preceded by a semicolon, or it may instead be used to execute a function on the previous line.
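For example:

let a, b;
({ a, b } = { a: 1, b: 2 }); // the parentheses are required

// Without them, { a, b } on the left would be parsed as a block
// rather than an object literal, and the statement would be a SyntaxError.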
Assigning to new variable names
A property can be unpacked from an object and assigned to a variable with a different name than the object property.
Here, for example, const { p: foo } = o takes from the object o the property named p and assigns it to a local variable named foo.
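For example:

const o = { p: 42, q: true };
const { p: foo, q: bar } = o;
console.log(foo); // 42
console.log(bar); // true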
Default values
A variable can be assigned a default, for the case that the value unpacked from the object is undefined.
Assigning to new variable names and providing default values
A property can be both 1) unpacked from an object and assigned to a variable with a different name, and 2) assigned a default value in case the unpacked value is undefined.
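For example:

const { a: aa = 10, b: bb = 5 } = { a: 3 };
console.log(aa); // 3
console.log(bb); // 5 (unpacking b yields undefined, so the default applies)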
Unpacking fields from objects passed as a function parameter
This unpacks the username, displayName and firstName from the user object and prints them.
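A sketch of the idea (the exact shape of the user object is assumed here):

const user = {
  username: 'jdoe',
  displayName: 'jdoe42',
  firstName: 'John'
};

function printUser({ username, displayName, firstName }) {
  console.log(`${username}: ${displayName}, ${firstName}`);
}

printUser(user); // "jdoe: jdoe42, John"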
Setting the purpose parameter’s fall behind value
From the function unique for drawChart above, the destructured quit-hands facet is owned by jail thing direct on the correct-hand facet:
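For example:

function drawChart({ size = 'big', coords = { x: 0, y: 0 }, radius = 25 } = {}) {
  console.log(size, coords, radius);
  // do some chart drawing
}

drawChart(); // called with no argument: every default applies
drawChart({ coords: { x: 18, y: 30 }, radius: 30 });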
Nested object and array destructuring
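For example, unpacking a nested array inside an object:

const metadata = {
  title: 'Scratchpad',
  translations: [{ locale: 'de', title: 'JavaScript-Umgebung' }]
};
const {
  title: englishTitle,
  translations: [{ title: localeTitle }]
} = metadata;
console.log(englishTitle); // "Scratchpad"
console.log(localeTitle); // "JavaScript-Umgebung"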
For...of iteration and destructuring
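Destructuring can be applied to each element while iterating (the data here is illustrative):

const people = [
  { name: 'Mike Smith', family: { father: 'Harry Smith' } },
  { name: 'Tom Jones', family: { father: 'Richard Jones' } }
];

for (const { name: n, family: { father: f } } of people) {
  console.log(`Name: ${n}, Father: ${f}`);
}
// "Name: Mike Smith, Father: Harry Smith"
// "Name: Tom Jones, Father: Richard Jones"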
Computed object property names and destructuring
Computed property names, like on object literals, can be used with destructuring.
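For example:

let key = 'z';
let { [key]: foo } = { z: 'bar' };
console.log(foo); // "bar"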
Rest in object destructuring
The Rest/Spread Properties for ECMAScript proposal (stage 4) adds the rest syntax to destructuring. Rest properties collect the remaining own enumerable property keys that are not already picked off by the destructuring pattern.
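For example:

const { a, b, ...rest } = { a: 10, b: 20, c: 30, d: 40 };
console.log(a); // 10
console.log(b); // 20
console.log(rest); // { c: 30, d: 40 }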
Combined array and object destructuring
Array and object destructuring can be combined. Say you want the third element in the array below, and then you want the name property in the object; you can do the following:
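For example (the array and property names are illustrative):

const props = [
  { id: 1, name: 'Fizz' },
  { id: 2, name: 'Buzz' },
  { id: 3, name: 'FizzBuzz' }
];
const [, , { name }] = props;
console.log(name); // "FizzBuzz"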
The prototype chain is looked up when the object is deconstructed
When deconstructing an object, if a property is not accessed in itself, it will continue to look up along the prototype chain.
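For example:

const obj = { self: '123' };
Object.getPrototypeOf(obj).prot = '456'; // put a property on the prototype
const { self, prot } = obj;
console.log(self); // "123"
console.log(prot); // "456" (found by walking up the prototype chain)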
|
OPCFW_CODE
|
All newer Dell laptops use the Microsoft-mandated Modern Standby (S0 low-power sleep) only; the old legacy S3 sleep state is no longer supported, and the BIOS is written with this mandate in mind.
At the end of the work day, Dell recommends that you save your work, then choose Shut Down. But you could test Hibernate first. Reminder: do NOT use Hibernate if you are going to place the laptop in a bag or backpack. Turn the laptop OFF before placing it in a bag or backpack!
Disable Hibernation:
* Press the Windows button on the keyboard to open the Start menu or Start screen
* Search for cmd. In the search results list, right-click Command Prompt, and then click Run as Administrator
* When you are prompted by User Account Control, click Continue
* At the command prompt, type powercfg.exe /hibernate off [press Enter]
* Type exit, and then press Enter to close the Command Prompt window
Enable Hibernation:
* Press the Windows button on the keyboard to open the Start menu or Start screen
* Search for cmd. In the search results list, right-click Command Prompt, and then click Run as Administrator
* When you are prompted by User Account Control, click Continue
* At the command prompt, type powercfg.exe /hibernate on [press Enter]
* Type exit, and then press Enter to close the Command Prompt window
With regards to transporting your laptop in a bag or backpack, safety should be your primary concern. You should always turn the laptop OFF:
* Select the Start button
* Click Power
* Click Shut down
In all extended travel and especially airplane travel, safety should be your primary concern. Under no circumstances should you leave a laptop powered on and in any sleep/hibernate/standby mode when placed in a bag, backpack, or in an overhead bin. The laptop will overheat as a result of that action.
Troubleshooting for Modern Standby battery consumption and heat
Disable Intel TurboBoost in the BIOS:
* Plug the external power adapter into the XPS 13 9310
* Restart
* Tap F2
* Open Performance
* Click Intel TurboBoost
* Uncheck Enable Intel TurboBoost
* Click Apply - Exit
* Retest
Set DPM (Dell Power Manager) to Quiet Mode:
* Open the Dell Power Manager software
* Go to Thermal Management
* Select Quiet
* Retest
Disable Windows Search Index and Protocol if seen:
* Click the Windows icon, type Services, click the Services app
* Search for Windows Search
* Right-click Windows Search
* Click Stop
* Right-click Windows Search
* Click Properties
* Change Startup type to Disabled
* Click Apply - OK
* Close the Services app
* Retest
Disable "Allow fingerprint reader to sleep" in the Device Manager:
* Press Windows Key + X and choose Device Manager from the list
* When Device Manager opens, locate your fingerprint reader. It could be located in the Biometric Devices section
* Right-click the fingerprint reader and choose Properties
* Navigate to the Power Management tab and uncheck "Allow the computer to turn off this device to save power"
* Click OK to save changes
* Retest
Turn off Windows Hello (this option may be removed by Microsoft in newer operating system versions):
* Click Start
* Type regedit
* Right-click Registry Editor
* Click Run as administrator
* Navigate to the path: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\PolicyManager\default\Settings\AllowSignInOptions
* In the right panel, double-click the DWORD entry named value and change it to 0
* Click OK
* Close the registry and restart
Set the power button to shut down:
* Right-click the battery icon (bottom right) and click Power Options
* In the left side navigation, click "Choose what the power buttons do"
* Under "When I press the power button", change On battery and Plugged in to "Shut down"
* Click Save changes
Stop the network from waking the machine:
* Go to Windows settings and select Power & sleep. There will be an option for Network connection: "When my PC is asleep and on battery power, disconnect from the network"; change it to "Always"
* Open Device Manager
* Open the Network Adapters section
* Right-click the Wireless adapter
* Go to the Advanced tab
* Go to Properties, select Wake on Magic Packet and set the value to Disabled
* Go back to Properties, select Wake on Pattern and set the value to Disabled
* Click the OK button and close the open windows
Disable Modern Standby in the registry (this option may be removed by Microsoft in newer operating system versions):
* Press Windows R
* Enter regedit
* Click OK
* Find HKLM\System\CurrentControlSet\Control\Power
* On the right, double-click CsEnabled
* Change it to 0
* Click OK
* Close the Registry Editor
Re-enable hibernation from the command line (this option may be removed by Microsoft in newer operating system versions):
* Right-click the Start button and type cmd on the search line
* Right-click Command Prompt
* Click Run as administrator
* Type powercfg /a (this shows all of the available sleep states)
* Type powercfg /h on [press Enter]
Create a shut-off CMD shortcut icon:
* Open Notepad
* Copy this text: Shutdown /s /t 000
* Paste it into Notepad
* Click File - Save as
* Click the Save as type drop-down arrow and click All Files
* Enter Off.cmd in the File Name field
* In the left pane, click Desktop
* Click Save
* Close Notepad
* On the Windows Desktop you should now see the Off.cmd shortcut icon
* Clicking that shortcut icon should fully turn the system off
|
OPCFW_CODE
|
The development of autonomous vehicles is an exciting technology that has really taken off in recent years. As a F1 fan and software developer I was especially inspired by this technology and wanted to see if I could introduce some principles of autosports into a basic self-driving model. The goal of this project was to create an autonomous driving model capable of finding the optimal racing line using neural networks optimized through genetic algorithms.
The basic idea is to create a simple 2D car racing game in an environment that can measure things like lap times, car physics, car control, collisions, and other information that can be used to build a model to drive the car. Some of this data is then fed into a neural net in real time, and the output of the net is used to control the car. When the car collides with the wall, the lap is considered over and that network is given a score based on a fitness function.
The fitness function attempts to best capture what defines a successful lap. Once the network is scored a new one is generated and the car is reset. This process is repeated where a new neural net (genome) is generated each time until the declared population size is reached. Once the last network of the population has run then we're left with a bunch of scored networks which is considered a generation. The best networks from the generation are kept and used to seed the new networks for the next generation.
Building the Environment
Phaser was really simple to get started with and I was able to quickly build a simple track object using physicsEditor and an image I found online. The track boundaries for car collisions are defined as a single complex polygon created with physicsEditor. I also needed to define non-collidable lines that would act as distance markers so that I would have a measure of how far the car had progressed.
Before getting into the details of the car implementation I want to clarify the objective. We want to see the car learn to navigate the track and over time get quicker by following the racing line. The racing line is defined as the path that a driver follows through a corner in order to maximize speed and minimize lap times. This track is simple, so the racing line will often be the largest radius through the corner.
It becomes a little more complicated than this because the path through multiple corners needs to be considered but for the most part this is a good way to conceptualize the idea.
The car is setup with simple dynamics at the moment to decrease the complexity of training the model. The key aspect about the car dynamics is that the car will travel in a straight line at a constant velocity but can only maintain a fraction of that velocity through corners. A better physics model would look at the centripetal forces through the corner and determine slip-angles but the main idea of being able to go faster in a straight line is still captured in the current naive model.
Alright, so now that we have an idea of how our car should travel through the track, we need to provide the car with information about its surroundings. For this we use the idea of proximity sensors, implemented by casting a ray and looking for the nearest line intersection with the track boundaries. By positioning three proximity sensors at the front of the car to measure distances from the sensor to a wall, the car is provided with information regarding upcoming corners. This data is what is passed into the neural net to compute a steering decision.
With this in place all that's needed is to log lap times and the distance travelled and the car can be tested out using the keyboard.
Neural Networks and Genetic Algorithms
So now's where we get into the cool stuff. So far I explained the basics of how the neural nets are used with genetic algorithms but I wanted to get a little more detailed regarding the specifics of the implementation.
The genetic algorithm implemented is rather simple and will adjust the weights between neurons and the bias values but not the actual topology of the network. More complex examples of NeuroEvolution such as the NEAT algorithm will actually change the overall network topology during evolution. This project is built to allow different "drivers" to control the car so at some point I might take a crack at implementing a driver using the NEAT algorithm.
The genetic algorithms used were inspired by this really cool project posted on Hacker News that uses NeuroEvolution to play the Google Chrome dinosaur game.
I've already somewhat covered the basics of how the genetic algorithm is used in combination with neural nets but I glossed over an important aspect of scoring the networks, the fitness function.
The fitness function needs to represent what we consider to be a "good" lap. This means that we want to keep networks that minimize the lap time and avoid networks that go out of bounds. This is done by assigning a score to the network based on the milliseconds it took to either crash or complete a lap, then adding a penalty (1000 ms in the code below) for every distance marker short of the finish line:
var fitness = laptime + (nMarkers - distanceTravelled)*1000;
This way we can sort all networks on their assigned score and be sure that the networks with the smallest scores are the best.
Note that this only works to capture the driving line because the car dynamics were setup to move slower while turning. If the time had been taken to develop proper car physics the driving line would also emerge. However if the environment had been setup where the car travels the same speed regardless of whether it's turning the emergent behaviour would likely be to hug the inside of the track and cover the least distance.
Mutation and Crossover
This is the meat and potatoes of the genetic algorithm. Once all networks have been assigned a score through the fitness function it's time to build the next generation of networks. The idea is to move the best networks forward and use these networks to seed the next generation. Ideally we want to draw from networks that have done well while also introducing random mutations to avoid converging on a local minimum of the error space.
Without introducing mutations, the optimization could get too focused on something that produces good results but does not actually capture the problem we're trying to solve. Additionally, all initial networks are generated with random values, so without random mutations there would be no way to improve the networks.
This is how the idea of evolution is used to optimize neural networks. Over time a network will see a random mutation that causes it to thrive over its peers, this leads to the network advancing and the good mutation propagating forward throughout generations.
Mutation is implemented by introducing a random change in a network's bias values and weights in order to slightly modify the network to create a new one. This is combined with crossover, a method to take two networks and generate a third by combining values from each network. The specific crossover implemented is called single-point crossover, where a single slice point from both parent networks is chosen and all data past this point is swapped creating two new children networks (we're only carrying forward a single child network).
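As a rough sketch of these two operators (not the project's actual code; it assumes each genome has been flattened into a plain array of weight and bias values):

// Single-point crossover: pick one slice point and swap everything past it.
// As noted above, only the first child is carried forward here.
function crossover(parentA, parentB) {
  const point = Math.floor(Math.random() * parentA.length);
  return parentA.slice(0, point).concat(parentB.slice(point));
}

// Mutation: nudge a small fraction of genes by a random amount.
function mutate(genome, rate = 0.1, scale = 0.5) {
  return genome.map(g =>
    Math.random() < rate ? g + (Math.random() * 2 - 1) * scale : g
  );
}

// Seeding a child for the next generation from two surviving networks
// (bestGenomes is assumed to hold the top scorers of the last generation):
const child = mutate(crossover(bestGenomes[0], bestGenomes[1]));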
By combining these two techniques it is possible to completely build a new generation of networks that's based on the best networks from the previous generation, all while introducing mutations that will allow for further improvements to be found.
Putting this all together yields a model that is able to start with no training and, with no prior data, learn how to complete a task. The model is trained simply by letting it run and attempt to navigate the track for a while. It's a lot of fun to watch because at the start the car basically just drives into a wall, but eventually you can see it start to make good decisions and over time learn to navigate the track. A while after that, it begins to learn how to get quicker, and in the end the model is without a doubt better than the human player. Here are examples shown at different generations of progression. Note that the driving model is being loaded with data from the specified generation, so the generation displayed on the screen is wrong.
Generation 5 - unable to complete a lap (or really do anything)
Generation 20 - learned the first corner but still unable to complete a lap
Generation 35 - first generation to complete a full lap and with a time of 8.151s
Finally Generation 100 - able to complete a full lap with respect to the racing line and with a best time of 6.317s
To access the full code for this project head on over to my profile on Github
|
OPCFW_CODE
|
Tutorial - Dell Enterprise SONiC and Dell servers in a spine-leaf configuration
This tutorial walks you through the steps required to register and use a setup that uses the following physical configuration:
3 x Dell Enterprise SONiC 4.0.2
2 x Dell R250 servers
This tutorial also assumes you have completed the configuration part of MetalSoft. Follow the Deploying MetalSoft using OVAs guide for more details on how to configure and deploy MetalSoft.
# management links
ESXi server, port 01 <---> OOB switch, port 15
leaf01, port MGMT <---> OOB switch, port 01
leaf02, port MGMT <---> OOB switch, port 02
spine01, port MGMT <---> OOB switch, port 03
# spine-leaf links
leaf01, port 11 <---> spine01, port 01
leaf02, port 11 <---> spine01, port 02
# server links leaf1
leaf01, port 01 <---> server01, port 01
leaf01, port 02 <---> server01, port 02
leaf01, port 03 <---> server02, port 01
leaf01, port 04 <---> server02, port 02
# server management ports
server01, port iDRAC <---> OOB switch, port 11
server02, port iDRAC <---> OOB switch, port 12
Deploy Dell Enterprise Sonic 4.x on the switches
Configure a management IP reachable from the Agent OVA
RESTCONF is enabled on the switch
The BGP underlay configuration is done between the switches.
Note that MetalSoft can provision servers via ZTP but that process is not covered in this tutorial.
To register the 3 switches, for each switch go to Switches > Add Switch
Switch role: dropdown; select Leaf for leaf switches, Other for spine switches
Management protocol: dropdown select
Management address: textbox; input the IP of the switch management port
Management username: textbox; input the username of the switch
Management password: textbox; input the password of the switch
All other fields can stay the same.
At the end of the process the switches should all show as
Note that it is normal for them not to show any network connections as those will be populated as we add servers.
The servers have an IP on the iDRAC interface reachable from the agent VM.
To register the 2 servers go to Servers > Add Server
Select Vendor Dell
Select Type Server
IPMI Host -> input the ip of the server
IPMI username -> input the IPMI username
IPMI password -> input the IPMI password
Click Add Server
Note that the IPMI protocol does not necessarily need to be enabled, as this mechanism uses Redfish instead.
This process can take up to 30 min depending on the type of drives and NICs.
At the end of this process you should be able to use the servers in the infrastructure editor as usual.
Consult the following for more information:
|
OPCFW_CODE
|
$(document).ready(function() {
  var favChar = [148, 583, 1303]; // 148 (Arya), 583 (Jon), 1303 (Daenerys)
  for (let i = 0; i < favChar.length; i++) {
    $.get(`https://www.anapioficeandfire.com/api/characters/${favChar[i]}`, function(data) {
      let characterDiv = document.createElement("div");
      let charName = document.createElement('h3');
      $(charName).text(data.name);
      $(characterDiv).append(charName);
      let titles = document.createElement('p');
      $(titles).text(data.titles);
      $(characterDiv).append(titles);
      // If there's a spouse (meaning the spouse string is non-empty),
      // do another AJAX call to the spouse link, grab the name only,
      // and populate spouse_container with that name when the call finishes.
      let spouse_container = document.createElement('p');
      if (data.spouse !== "") {
        $.get(data.spouse, function(spouseData) {
          $(spouse_container).text(spouseData.name);
          $(characterDiv).append(spouse_container);
        });
      }
      $('.row').append(characterDiv);
    });
  }
});
|
STACK_EDU
|
Cellular Plasticity And Heterogeneity: Implications In Tumor Cell Invasion And Metastasis
The existence of heterogenous subpopulations of cells in cancer has been shown to arise via natural evolution or through movement between cellular states collectively known as “cellular plasticity.” This heterogeneity and plasticity are critical drivers of phenotypic diversity culminating in many facets of disease progression, such as metastasis. While the existence of heterogeneity and cellular plasticity are well accepted, the molecular underpinnings and functional outcomes, such as metastasis, of these populations remains limited. Here, we first investigated a form of cellular plasticity known as epithelial-to-mesenchymal transition (EMT) and dissect the molecular mechanisms of a recently described partial EMT (P-EMT) state operating in vivo in a mouse model of pancreatic ductal adenocarcinoma (PDAC), whereby tumor cells lose their epithelial state through a post-translational mechanism. This is distinct from complete EMT (C-EMT), which achieves the transition transcriptionally, through regulation of a complex hierarchy of EMT transcription factors (EMT-TFs). We report that prolonged calcium signaling in carcinoma cells induces a P-EMT phenotype characterized by the internalization of membranous E-cadherin (ECAD) and an increase in cellular migration and invasion. These effects can be recapitulated by signaling through Gaq-associated G-protein coupled receptors (GPCRs) and are mediated through the downstream activation of calmodulin. These results implicate calcium signaling as a potent driver of epithelial-mesenchymal plasticity in cancer cells that may be important for the metastatic cascade. We subsequently investigated other potential mechanisms of metastasis that may occur as tumors evolve de novo. Specifically, we analyzed paired primary tumors and metastases using a multi-fluorescent lineage-labeled mouse model of PDAC. Genomic and transcriptomic analysis revealed, for the first time, an association between metastatic burden and amplification of MYC. Mechanistically, we found that MYC promotes metastasis by recruiting tumor associated macrophages (TAMs), leading to greater bloodstream intravasation. Consistent with these findings, metastatic progression in human PDAC was associated with activation of MYC signaling pathways and enrichment for MYC amplifications specifically in metastatic patients. These results implicate MYC activity as a major determinant of metastatic burden in advanced PDAC. Thus, using novel mouse models of PDAC, we identified key pathways, genetic and non-genetic, that regulate cellular plasticity and lead to increased invasion and metastatic spread. The identification of these pathways and regulators represent an avenue for combating the most lethal aspects of tumor progression, metastasis and therapy resistance.
Sandra W. Ryeom
|
OPCFW_CODE
|
The College of Electrical Training is part of the National Electrical and Communications Association (NECA) WA Group and is Western Australia's leading Registered Training Organisation (RTO) for the Electrical and Communications Industry.
The School of Electrical and Electronic Engineering is one of the largest schools of this discipline in the UK, and as such we are home to a full range of activities within the spectrum of electrical and electronic engineering. In the School of Electrical and Electronic Engineering we have specialised research groups in control, nanotechnology, communications, sensors, power systems & power conversion and these groups collaborate in order to find efficient energy delivery solutions for all areas of applications of electrical technology. We are actively working in the areas of power generation & distribution, communications, medical systems, security, hybrid vehicles and electronic systems for agricultural processes.
Hello friends, today you will see how house board wiring is done and how to do the wiring. For more videos, subscribe to YK Electrical.
Lecture 1: Object-Oriented Programming Instructor: Dennis Freeman View the complete course: http://ocw.mit.edu/6-01SCS11 License: Creative Commons BY-NC-SA More information at http://ocw.mit.edu/terms More courses at http://ocw.mit.edu
This is the first video in the Electrical and Electronic Systems Training Series. This series will cover all aspects of Electrical Systems and how to understand and diagnose those systems. In this video, we are working on a basic understanding of exactly what electricity is, and we talk about the structure of Atoms and the Electron's movement during electrical flow. Stay Tuned for more videos in this series. Our Website: www.diyautohomeschool.com Recommended Videos: Relative Compression Test: https://youtu.be/eQjBPcq0XBo How an Ignition Coil Works: https://youtu.be/xXteh1Ch738 How a Starter Works: https://youtu.be/iQ7wo2Qvgbk
My playlists:
Heat Transfer, Refrigeration and Air Conditioning MCQ: https://www.youtube.com/playlist?list=PLY9TLnmM2s-k939nAjVs0ACkdoX2fB2N4
Refrigeration and Air Conditioning: https://www.youtube.com/playlist?list=PLY9TLnmM2s-kpniZCO_IyCq1dFi4JbG8z
TECHNICAL EXAM MCQ: https://www.youtube.com/playlist?list=PLY9TLnmM2s-mi5biwUY7gcYs9bPRJG8Qy
MECHANICAL ENGINEERING VIDEO: https://www.youtube.com/playlist?list=PLY9TLnmM2s-kmeZ-B6u2My8LiQXour4t2
BASIC MECHANICAL ENGINEERING: https://www.youtube.com/playlist?list=PLY9TLnmM2s-knA0UbMkrTIZT_kgv4Bw2D
INDIAN GEOGRAPHY: https://www.youtube.com/playlist?list=PLY9TLnmM2s-maUm3I4kYBkD_ipvI61YB4
SSC CGL PSC: https://www.youtube.com/playlist?list=PLY9TLnmM2s-kvIwy_M3rQyiPJqesABYI1
INDIAN ECONOMY: https://www.youtube.com/playlist?l...
This animation describes the heart's electrical conduction system, which results in the heart muscle contracting.
Learn basic electrical concepts and terms
Brief discussion about a conceptual approach to physics teaching. A novel circuit board is used to teach various basic concepts associated with electrical circuits.
This video provides all the important MCQ questions on the basics of electrical engineering and also provides important points to remember for interviews and exams
|
OPCFW_CODE
|
Pass minimal env to query
As I was building a multisig, I realized I needed current block time or height in the query to determine if voting was open or closed and return the proper status. Otherwise, I just return an old status.
Let us pass a smaller env into query. It contains BlockInfo, but not MessageInfo. Maybe ContractInfo? (Does it need its own address? Why not?)
A relatively small change, but breaking APIs throughout the stack and needed to really demo the multisig contracts. I'd love if this could make 0.11 @webmaster128
I don't think the concept of block height and block time makes sense in the target contract of a query, since it is not executed as part of a block.
Can't you calculate the status in the caller based on the information you query?
The query is executed at a certain time. After a given block (data is the result of executing block H), but definitely in a context. Even the public json apis return height on all queries and the sdk passes this info in.
If we have an end time for a contract, and it was marked as started, the query will always return open even well after it was finished unless the query can somehow see the current time and height
Can't you calculate the status in the caller based on the information you query?
In theory, yes. But you then replicate business logic for the contracts in the js client, which doesn't seem nice. Better to include it in the query result. Not just blindly returning stored data. This is what is done in many sdk queries (which have the same context as the handlers)
Can you show my the query you want to build and the context that is provided by the SDK to the query?
My sample contract:
Here is the query
Which would happily use the block info to see if voting is over.
On SDK land:
We get the entire Context with each query. This is everything we use in the handlers and pass down, specifically these getters
PD This has been requested before months ago. I didn't have a concrete case then. This is one.
MessageInfo clearly has no place in queries, but the others are possible
How do you feel about removing the message info from Env and using this consistently in init/handle/migrate/query:
@@ -7,7 +7,6 @@ use crate::coins::Coin;
#[derive(Serialize, Deserialize, Clone, Default, Debug, PartialEq, JsonSchema)]
pub struct Env {
pub block: BlockInfo,
- pub message: MessageInfo,
pub contract: ContractInfo,
}
We could then find a way to wrap (sender, sent_funds, contract-defined message) into a bundle, since those really belong together. If we want to avoid type overhead, we can pass MessageInfo and msg: HandleMsg as two arguments.
If I understand correctly, we are discussing the following two options (roughly defined in code):
Option 1
pub struct MessageEnv {
pub block: BlockInfo,
pub message: MessageInfo,
pub contract: ContractInfo,
}
pub struct QueryEnv {
pub block: BlockInfo,
pub contract: ContractInfo,
}
pub fn handle(deps: &mut Extern, env: MessageEnv, msg: HandleMsg) -> StdResult<T> { }
pub fn query(deps: &mut Extern, env: QueryEnv, msg: QueryMsg) -> StdResult<T> { }
Option 2
pub struct Env {
pub block: BlockInfo,
pub contract: ContractInfo,
}
pub fn handle(deps: &mut Extern, env: Env, info: MessageInfo, msg: HandleMsg) -> StdResult<T> { }
pub fn query(deps: &mut Extern, env: Env, msg: QueryMsg) -> StdResult<T> { }
And you would prefer Option 2. Is that a correct understanding?
Exactly
I have a small preference for Option 1 as this is less breaking to existing code.
But if we are not worrying about the migration, but rather the usage in 1.0, I can support Option 2. The message info is actually the highly authentication critical info (sender, sent_funds), and env more like background info. And not having 2 different Env types will reduce confusion.
Happy if you can find a cleaner way to express Option 2 than I did (especially better naming).
|
GITHUB_ARCHIVE
|
I want to make my WPF application open the default browser and go to a certain web page. How do I do that?
a row but then I have the exact same error. AppName: iexplore.exe AppVer: 6.2900.2180 ModName: urlmon.dll ModVer: 6.2900.3231 Offset: 0003b5ce (IE 6 error/crash)
Jan 13, 2014. Why I keep getting PageMethods not defined error? <%@ Page Language="VB" MasterPageFile="~/MasterPage.master&qu.
Internet Explorer Kernel32 Error Security updates also include patches for Microsoft Windows operating systems, Starting the application in Windows2000 Server Sp4 throws error popup. These specify to the compiler what minimum OS and IE will be. With the release of Internet Explorer 10 in Windows 8, an improved version of IE's Protected Mode. that are referenced (which can occur
Page 1 of 2 – error: AppName: iexplore.exe AppVer: 6.2800.1106 ModName: – posted in Virus, Trojan, Spyware, and Malware Removal Logs: error:AppName: iexplore.exe.
Htk Error Revised for HTK Version 2.2 January 1999. 2 An Overview of the HTK Toolkit. 14. parameters used to configure HTK and a list of the error messages that it. There was an extra AU command at the end of the tree.hed file This was causing it to try and open another file after the tiedlist.
The iexplore.exe process is part of Microsoft Internet Explorer of Microsoft. Here are further details of iexplore.exe, and whether it might be a virus or spyware.
Since downloading Internet Explorer 8 I continually get the following error message when leaving a site."Internet Explorer has encountered a problem and needs to close.
A configuration file needs the location of another file, but that file is located in "C:\Program Files", and the path with a space in it is not recognized. Is there another.
Original Title: error signature Hi i got this error signature appname iexplorer.exe mod/ver2007.12.18.1 appver8.6001.18702 offset 00067646 mot nate yt.dll. This.
What is ieframe.dll? Ieframe.dll is an Internet Explorer (IE) Browser User Interface (UI) Library. Contrary to the popular belief this file is not a Windows system file,
Event ID: 1001 – Source: ACECLIENT; Type: Error; Description: File not found: C:\Program Files\Microsoft ISA Server\SDCONFIG. 1 Comment for event id 1001 from source ACECLIENT
Sep 23, 2016. To work around this issue, use Internet Explorer to download files. [#593347]; There is an error in XenApp and XenDesktop 7.6 FP3, when. to <app name/ desktop name> failed with status (1030)" error message on the.
WinOpen allows apps to be run or documents to be opened in a CD AUTORUN.INF. WinOpen is a very small Win32 application that gives you.
30045-4 (1) – Find out step by step instructions on how to fix Microsoft Office 30045-4 (1) error code. Error code 30045-4 (1) generally occurs while installing.
|
OPCFW_CODE
|
Many people want to know how to get into cyber security, but how many thought to start with hacking? Hacking is brought up regularly now, due to tons of security breaches at large companies. There’s also a very popular show about vigilante hackers: Mr. Robot. In this way, hacking is seen as only a negative activity. However, there are lots of ways to be an ethical hacker and learn about cyber security.
Udemy is a site where experts in just about any field create courses for the average person to study. Courses are usually around $200, but Udemy often has sales for holidays and other times of the year, and you can get courses for as cheap as $10. I mention this because the courses range from basic to advanced, so even if you’ve never entered the realm of hacking or cyber security, you can find a course that’s suited to your needs. One great part of Udemy is each course comes with a certificate of completion. While Udemy is not an accredited institution, these certificates can be useful to prove that you have learned these skills. Also, some courses on Udemy come with separate certificates from the instructor, so you can have several documents of your achievement.
When it comes to online learning, few sites match up to Coursera. It is an accredited site that hosts college-level lectures from professors at actual colleges around the world. There is currently a Fundamentals of Computer Network Security Specialization that includes four courses on network security. The third course in the series is about hacking and patching, which goes to show that hacking is considered an important exercise to learning about cyber security. You can audit most Coursera courses for free, but if you pay, you can receive a certificate that may be transferable to colleges. If not, you at least have an accredited document to state you have thorough knowledge of the skills presented. The only downside to Coursera is, unlike Udemy, it is not flexible. You do have assignments to complete by a due date, so you will be held accountable if you forget to work on the course for a few weeks.
Cybrary is a site that has flown mostly under the radar, but I think it offers some great courses on cyber security. Penetration Testing and Ethical Hacking is a fantastic course that, once again, uses ethical hacking to learn about security methods. This site is similar to Udemy in that you can learn on your own time. You also earn a certificate, just as you do on the other two sites. One of the largest benefits of Cybrary is it is niche to the cyber security field, so you’ll find content from experts. The goal of the site is to make cyber security information available to anyone, so there are classes from beginner to advanced. I highly suggest checking this site out.
If you’re looking to learn about cyber security or brush up your skills, these three sites are the places to go. You can take hundreds (or thousands) of hours of courses, and at the end of the day, you’ll have many certificates to show your excellence. What are you waiting for? Use hacking to your advantage and learn how to protect your sensitive information.
|
OPCFW_CODE
|
namespace Cloud.GroupHub.Sdk.Sample
{
#region using directives
using Cloud.GroupHub.Sdk.Model;
using Microsoft.IdentityModel.Clients.ActiveDirectory;
using System;
using System.Configuration;
#endregion
class Program
{
/// <summary>
/// This entry point shows how to create a built-in tag, update a built-in tag, apply the built-in tag to a group,
/// remove the tag from the group, and delete the built-in tag. Replace the placeholder values in app.config with your own.
/// </summary>
/// <param name="args"></param>
static void Main(string[] args)
{
try
{
var office365TenantId = ConfigurationManager.AppSettings["Office365TenantId"];
var userName = ConfigurationManager.AppSettings["UserName"];
Initilize(Region.EastUS, office365TenantId, userName);
var tagOperations = new TagOperations();
var groupTagOperations = new GroupTagOperations();
var tagName = ConfigurationManager.AppSettings["TagName"];
var tagDescription = ConfigurationManager.AppSettings["TagDescription"];
var tagModel = tagOperations.SaveBuildInTag(tagName, tagDescription);
var tagNewName = ConfigurationManager.AppSettings["TagNewName"];
var tagNewDescription = ConfigurationManager.AppSettings["TagNewDescription"];
tagOperations.UpdateBuildInTag(tagName, tagNewName, tagNewDescription);
var groupId = ConfigurationManager.AppSettings["GroupId"];
groupTagOperations.ApplyBuildInTagToGroup(tagModel.Id, groupId);
groupTagOperations.UnRegisterBuildInTagFromGroup(tagModel.Id, groupId);
tagOperations.DeleteBuildInTag(tagModel.Id);
}
catch (Exception ex)
{
Console.WriteLine(ex);
}
Console.Read();
}
/// <summary>
/// Initializes the Group Hub API service. Invoking this will prompt a logon window, where the customer needs to input the password.
/// </summary>
/// <param name="region">region of your tenant</param>
/// <param name="office365TenantId">office 365 tenant id of your tenant</param>
/// <param name="userName">your user name</param>
private static void Initilize(Region region, String office365TenantId, String userName)
{
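// The client id below is the well-known public client id registered for Microsoft Office,
// and the redirect URI is the standard out-of-band (OOB) value used for interactive ADAL logons.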
var clientId = "d3590ed6-52b3-4102-aeff-aad2292ab01c";
var redirectUri = "urn:ietf:wg:oauth:2.0:oob";
var resource = "https://graph.microsoft.com";
var authority = $"https://login.windows.net/{office365TenantId}";
var authenticationContext = new AuthenticationContext(authority);
var userId = new UserIdentifier(userName, UserIdentifierType.OptionalDisplayableId);
var result = authenticationContext.AcquireToken(resource, clientId, new Uri(redirectUri), PromptBehavior.Always, userId);
AuthenticationTokenModel model = new AuthenticationTokenModel();
model.AccessToken = result.AccessToken;
model.RefreshToken = result.RefreshToken;
GroupHubApi.Init(region, model);
}
}
}
|
STACK_EDU
|
Candidate Key Identification with Functional Dependencies
I'm having trouble understanding how to identify keys in functional dependencies. I've been looking at examples, for example:
Given a relation ABCD, find all keys (not including superkeys) of the relation, given the functional dependencies:
A -> BC, C -> D, CD -> AB.
This gives keys C and A. The way I thought this problem was approached was that BC and D both depend on A and C, and AB depends on CD, meaning all three of them are keys; but since CD is a superkey whose proper subset C is also a key, CD is not considered a minimal superkey.
However, in another example,
ABCDE
AB → CD
E → A
D → A
The only key here is apparently BE. Why is this true, and can anyone clarify the steps to take in finding keys with these problems?
Thanks.
The second one's a bit simpler, so taking it first . . . you know that B must be in any key, because it's not on any right-hand side. (That is, even if you have the values of ACDE, you couldn't infer the value of B.) Similarly for E; so, any key must include BE. But BE is by itself a sufficient key, because E gives you A (hence BE → ABE) and AB gives you CD (hence BE → ABCDE).
In the first one, we can see that A is a key, because A gives you B and C, and C gives you D. Similarly, C is a key, because C gives you D, and C and D together give you A and B. Conversely, we see that A and/or C must be in any key, because every left-hand-side includes at least one of them.
Wait, so does this mean that any attribute in the relation that's not found on the right side of the given FD's will be a key, no matter what, since you can't infer them from any FD? How can you know that for sure?
@user924199: No, it means that any attribute not found on the right-hand side of any FD must be part of every key.
Sorry, can you explain why this is true? Also, as another example, if you're given AB -> C and C -> D, the only key in this case would be AB, right, since by the transitive property AB -> D and therefore C isn't a key? Assuming this is true, why can't the keys be A, B separately?
Sorry, no, you don't have that quite right. Let's make this a bit less abstract. Suppose you have a database table with the fields A, B, C, and D, where C = abs(A - B) and D = floor(C). For example, if A = 2.1 and B = -3.4, then C = 1.3, so D = 1. Then AB → C and C → D: if two records have the same A and B, then they have the same C, and if they have the same C, then they have the same D. Therefore: if they have the same A and B, then they are completely identical, so AB is a superkey. (continued)
(continued) But two records can have the same A, or the same B, or the same C, while differing in other fields; so neither A nor B nor C is a superkey. If you play around for a bit, you should be able to see why the only superkeys are AB, ABC, ABD, and ABCD. Of these, the only candidate key is AB, because the other superkeys all contain AB (and therefore, they contain more fields than they need to). Does that make sense?
A procedure that is a little more formal.
Take an FD, e.g. (example 2), AB -> CD.
Augment this using trivial FDs until you have ALL the attributes on the RHS.
You lack ABE on the RHS, so you must augment using the trivial FD ABE -> ABE to obtain ABE -> ABCDE.
That tells you that ABE is a superkey, because knowing the values in a certain row for ABE will be sufficient to determine the values for all attributes in that row.
Now inspect the OTHER FDs to see if any of them allow you to reduce the LHS (ABE in this case). E -> A allows you to remove A from ABE, thus keeping only BE -> ABCDE. The rule for reduction is: if the LHS of another FD (E) is a proper subset of the superkey you are trying to reduce (ABE), then you can remove from the superkey all the attributes that are mentioned ONLY in the RHS of that other FD (you cannot remove E if you are looking at an "other" FD such as E -> EA!).
This procedure is not suited for mechanical implementation, because you might also have to look at "combinations" of other FDs. However, most use cases, and even most fabricated class exercises, are typically not complex enough to cause this procedure to fail (i.e. to leave you with a proper superkey instead of an irreducible one).
(PS to find ALL the keys, you will need to apply this to ALL given FDs)
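To make the derivation-based reasoning above concrete, here is a small JavaScript sketch (not from the thread; the FD encoding and names are my own) that computes the closure of an attribute set and checks whether it is a superkey:

// Each FD is [lhs, rhs]; attribute sets are written as strings of letters.
const fds = [['AB', 'CD'], ['E', 'A'], ['D', 'A']]; // the second example above
const allAttrs = 'ABCDE';

// Compute the closure X+ of an attribute set X under the given FDs.
function closure(attrs, fds) {
  const result = new Set(attrs);
  let changed = true;
  while (changed) {
    changed = false;
    for (const [lhs, rhs] of fds) {
      const lhsCovered = [...lhs].every(a => result.has(a));
      const rhsHasNew = [...rhs].some(a => !result.has(a));
      if (lhsCovered && rhsHasNew) {
        [...rhs].forEach(a => result.add(a));
        changed = true;
      }
    }
  }
  return result;
}

const isSuperkey = attrs => closure(attrs, fds).size === allAttrs.length;
console.log(isSuperkey('BE')); // true: BE -> ABCDE
console.log(isSuperkey('AB')); // false: E can never be derived

A candidate key is then any superkey none of whose proper subsets passes this test.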
This answer, if it works, is much better since it is about the algorithm and not just the specific cases.
Reference
http://www.ict.griffith.edu.au/~jw/normalization/assets/Functional%20Dependencies%20and%20Normalization.pdf
Proof of Algorithm (short and straightforward, section 3)
H. Saiedian and T. Spencer, “An Efficient Algorithm to Compute the Candidate Keys of a Relational Database Schema,” The Computer Journal, vol. 39, no. 2, Feb. 1996 [Online]. Available: https://pdfs.semanticscholar.org/ab3f/f65010b50d78d583b1c2b6ce514fa072d23d.pdf. [Accessed: 31-Jul-2019]
Video Explanation
https://www.youtube.com/watch?v=s1DNVWKeQ_w
Please don't answer duplicate questions, flag/vote them as duplicates.
|
STACK_EXCHANGE
|
In Growbots we build applications using Python and the Tornado framework (plus Celery). As the data store we mainly use Elasticsearch (a NoSQL database), with RabbitMQ as a message queue. We have built a modular system, using REST APIs to communicate between applications. Among others, we use HTML5, JS, ReactJS, Flux and Bootstrap for the frontend. We operate in accordance with Clean Code, we do code reviews, we use Git, and we actively use Agile methodologies. We deploy our environments on Amazon's cloud, using Docker and Ansible. We are open to all reasonable new technologies!
Some interesting numbers about our system:
- 15 separate modules and applications being part of our system architecture,
- more than 100 000 000 records in database,
- more than 3 000 000 new records added to the database each day as a result of crawling,
- more than 5 000 000 messages published to queue each day,
- more than 30 servers used,
- more than 150 CPU cores used,
- more than 500 GB of RAM,
- more than 15 TB of SSD storage,
- more than 1 000 000 PLN reserved for the infrastructure for next 1-2 years.
- We have built a cool team of aligned people.
- We cooperate with many customers from the USA, including companies worth millions of dollars - we have achieved better results than their internal sales departments.
- We are participants in the 500 Startups acceleration program (http://500.co) - a leading investor from the United States. Beyond the additional funds, we get access to the support of mentors and investors, as well as the opportunity to work with the best startups in the world.
- We cooperate with a data scientist working at Facebook AI Research, which creates some of the best machine learning solutions in the world.
- cooperation with a young but experienced team of people achieving successes around the world,
- opportunity to work on solutions using current standards in the IT world and cutting-edge technologies,
- teamwork in Agile (Scrum, JIRA, daily stand-ups, planning sessions, sprints, retrospectives),
- possibility of real influence on the shape of the developed solutions,
- opportunity to gain experience in start-up operating in Silicon Valley and across the United States,an informal, friendly atmosphere at work,
- flexible working hours (including part-time ⅘ and part-time remote work),
- attractive salary (junior: 3 500 - 6 000 PLN net, senior: 6 000 - 12 000 PLN net, depending on your experience and skills),
- great location in the center of Warsaw (Hoża street, 5 minutes to Central Railway Station),
- fair contract conditions (no NDA and non-compete clauses).
- at least 2 years of professional experience as a software engineer,
- at least basic knowledge of Python,
- high analytical skills,
- ability, desire and willingness to learn continuously,
- knowledge of English - at least B2 level,
- high self-discipline and responsibility,
- to be an easy-going, self-motivated doer, who likes a challenge.
- being a full-stack software engineer - strong knowledge of frontend and backend technologies as well,
- experience in working with ReactJS,
- experience with DevOps,
- knowledge of Docker, Ansible.
|
OPCFW_CODE
|
The group of cybercriminals who recently broke into Nvidia's systems has leaked two of the company's old code-signing certificates. Researchers warn that the certificates could be used to sign kernel-level malware and load it onto systems that enforce driver signature verification. The certificates were retrieved from an archive of almost 1 TB which also contained the source code and documentation for the GPU driver API. Nvidia confirmed it was the target of a breach, saying the hackers stole "employee passwords and certain Nvidia proprietary information," without confirming the extent of the data theft.
Feedback on the data breach
On February 24, a ransomware group calling themselves LAPSUS$ publicly stated that they had admin access to multiple Nvidia systems for about a week and managed to exfiltrate 1 TB of data, including hardware schematics, driver source code, firmware, documentation, tools and proprietary development kits, as well as "all things Falcon," a hardware security technology built into Nvidia GPUs and intended to prevent misprogramming of those GPUs. While Nvidia confirmed the cyberattack and data breach, the company did not provide any details about the stolen data. But, as proof, LAPSUS$ released 20 GB of information from this alleged cache.
The group also claims to have information on Nvidia's LHR (Lite Hash Rate) technology. Introduced in RTX 30-series GPUs, LHR detects if GPUs are being used for Ethereum cryptocurrency mining and reduces their performance, in order to make graphics cards less attractive to cryptocurrency miners. Indeed, miners have been buying up the entire GPU supply and drying up the market, to the point of making it almost impossible for gamers to buy GPUs due to constant stock shortages and inflated prices.
To prove that they have this information, the LAPSUS$ group even released a tool that hackers claim gives users the means to bypass the LHR throttling without resetting the GPU firmware. After this publication, the group changed its requirements, asking Nvidia to ship its GPU drivers open source for all systems, including Linux. Indeed, the Linux community has been complaining for many years about the lack of an open source Nvidia driver for this environment.
Importance of Code Signing Certificates
Code signing certificates underpin the trust model of Microsoft operating systems, especially Windows. It's still possible to run apps that aren't signed on Windows, but these trigger more visible security alerts than apps signed by a trusted developer. More importantly, by default Windows does not allow the installation of any driver that is not digitally signed with a trusted certificate. Digitally signing drivers is an important security feature because, unlike normal user-mode applications, drivers run with kernel-level privileges. They therefore have access to the most privileged areas of the operating system and can deactivate security products.
Before the introduction of this security feature, rootkits (root-level malware) were commonplace on Windows. Digital signatures on files are also used by application whitelisting systems to restrict which applications can be run on systems and, to some extent, by anti-virus programs, although the existence of a digital signature alone is not enough to determine whether a file is legitimate or malicious. Code signing certificates have been stolen from developers before, and hackers can even buy them through various channels.
The problem is that certificate revocations or expirations aren't checked or enforced by all Windows security mechanisms, including the one that checks whether loaded drivers are signed, as Zoom security researcher Bill Demirkapi explained at a DEF CON talk on Windows rootkits. Since the introduction of the Secure Boot restriction in Windows 10 build 1607 and later, drivers must be signed with EV (extended validation) certificates. EV certificates require extensive verification of the identity of the person or entity requesting the certificate and are therefore more difficult to obtain and more expensive. The Nvidia code signing certificates leaked by LAPSUS$ expired in 2014 and 2018, respectively, and are not EV. But they can still be used to sign malicious code that will be loaded into the kernel of older Windows systems. They can also be used to attempt to evade detection by certain security products.
Researcher Florian Roth has already found two samples of cracking tools signed with one of the certificates on VirusTotal: a copy of the Mimikatz password-dumping tool and a copy of the Kernel Driver Utility (KDU), which can be used for process hijacking. Researcher Mehmet Ergene found even more malicious files signed with the certificate, including a RAT (Remote Access Trojan) for Discord, and more malware abusing the legitimacy of the Nvidia certificates is likely to appear. Florian Roth and Mehmet Ergene published a YARA rule and a query for Microsoft Defender for Endpoint (MDE) that security teams can use to find files signed with these certificates in their environments. Microsoft also offers a Windows Defender Application Control policy to block malicious drivers, which can be customized by adding new controls, and an Attack Surface Reduction (ASR) rule in Microsoft Defender for Endpoint.
|
OPCFW_CODE
|
- You can add a variety of cooling fans to your PC casing, but the CPU fan is essential and irreplaceable. That's why it's hard to ignore the CPU fan error on boot.
- This error usually points to either a malfunction of the CPU fan or underperformance with regard to RPM.
- Check our guide below for all possible solutions and visit our Troubleshooting Boot errors hub for similar issues.
- Windows 10 is a pretty stable OS now, but that doesn't mean it's out of troubles. Check out our Windows 10 hub for brilliant solutions.
You can add a variety of cooling fans to your PC casing but the CPU fan is essential and irreplaceable.
That’s why it’s hard to ignore the CPU fan error on boot that some users keep getting.
That error usually points to either a malfunction of the CPU fan or underperformance with regard to RPM.
We have a few tips on how to resolve the error without too much hassle. Check them out below.
How can I get rid of CPU fan error?
1. Lower the default fan RPM in BIOS
- Boot into BIOS on your PC.
- Navigate to CPU Fan settings. The sections might differ based on your motherboards. Important thing is to look for anything related to the CPU Fan control, usually found in the Advanced settings menu.
- Now, locate the setting that regulates RPM warnings. The default value should be around 600 RPM (rotations per minute).
- Lower the default value to 300 RPM and save changes.
- Exit BIOS and boot into the system.
2. Disable CPU Fan speed monitoring in BIOS
- Boot into BIOS.
- Navigate to the aforementioned Advanced settings.
- Look for the Monitoring (Monitor) section.
- Change the CPU Fan Speed to Ignore and confirm changes.
- Boot into Windows and look for improvements.
3. Check the hardware
- Check the fan connections. Inspect the motherboard connection.
- Remove the fan and clean off the dust. You can use pressurized air to clean it.
- Make sure that the cooling fan spins freely and works as intended.
- To reduce the CPU temperatures, re-apply the thermal paste.
After that, all issues with CPU fan error on boot should be gone. However, if the issue persists, take your PC for diagnostics and repair.
Alternatively, you can also replace the cooler, which isn't exactly a complex task.
FAQ: Read more about CPU fan errors
- What does it mean when it says CPU fan error?
- How do I reset my CPU fan?
- How do I know if my CPU fan is bad?
Editor’s Note: This post was originally published in July 2019 and has been since revamped and updated in April 2020 for freshness, accuracy, and comprehensiveness.
|
OPCFW_CODE
|
In an ever-evolving technological landscape, the lines between technical know-how and managerial expertise blur. For many, like myself, the journey from coding in .NET to making executive decisions is an enlightening experience, illuminating how programming concepts can greatly influence and benefit corporate leadership.
The World of .NET: More than Just Code
.NET is not just a framework; it’s a universe of possibilities. My initial years in .NET taught me the intricacies of software development and the broader impact it has on businesses. The ability to translate a business requirement into executable code was my first lesson in understanding the connection between technical execution and business objectives.
Core Principles of Programming and Their Management Analogues
Programming is built on principles—DRY (Don’t Repeat Yourself), KISS (Keep It Simple, Stupid), and SOLID, to name a few. Similarly, management has its set of principles. For instance, the DRY principle reminds us of the value of efficiency in management, ensuring tasks are not redundant.
Debugging in .NET vs Problem-Solving in Management
Debugging code is all about identifying and rectifying errors. In the boardroom, I’ve come to see problems as “bugs” in the system. Just as I’d diagnose a code malfunction, I’ve learned to troubleshoot business challenges with a logical and systematic approach.
The Value of Continuous Learning in Both Realms
.NET, like all technologies, evolves. Continuous learning is crucial. Similarly, in management, the business world is in constant flux. Staying updated with industry trends, customer preferences, and new methodologies is paramount to effective leadership.
Building a Collaborative Team: From Coders to C-Suite
Programming, especially in large projects, is a team effort. Collaborating with fellow coders taught me the importance of leveraging individual strengths for a common goal. This lesson was invaluable when forming executive teams, ensuring diverse skill sets and perspectives are brought to the table.
Scalability: From Software Design to Business Expansion
A good .NET application is designed for scalability. Similarly, in the corporate world, ensuring that business models, processes, and strategies are scalable is vital for sustainable growth.
How Code Reviews Transformed My Approach to Feedback
Code reviews in .NET development ensure code quality. In the boardroom, feedback serves a similar purpose, ensuring decision quality. Embracing feedback, both positive and critical, became a cornerstone of my leadership style.
Risk Management: From Software Vulnerabilities to Business Threats
In .NET development, understanding potential vulnerabilities is crucial. In management, being aware of business threats, both internal and external, and devising strategies to mitigate them is a significant part of risk management.
Adapting to Change: Lessons from .NET Framework Evolutions
.NET’s evolution over the years, from .NET Core to .NET 5 and beyond, taught me adaptability. In the boardroom, adapting to market changes, industry shifts, and new challenges is vital for survival and growth.
Embracing Innovation: How Coding Fuels Forward-Thinking Leadership
Coding is inherently about creating, innovating, and improving. These values, instilled in me as a .NET developer, became the foundation of my leadership approach, always seeking better, more innovative solutions for business challenges.
- How has .NET programming directly influenced your leadership style?
Programming has taught me systematic thinking, collaboration, adaptability, and continuous learning—all crucial for effective leadership.
- Is a technical background necessary for executive roles?
While not essential, a technical background provides a unique perspective and understanding of the intricacies of the business, especially in tech-driven industries.
- What’s the biggest challenge in transitioning from a coder to an executive?
Shifting from a micro to a macro perspective, where you’re considering the broader impact of decisions on the entire organization.
- How do you keep up with both technical and managerial advancements?
Continuous learning, attending seminars, workshops, and staying connected with both the tech and business communities.
- How has your approach to risk differed between coding and leading?
In coding, risks are technical, like software vulnerabilities. In leadership, risks encompass broader business challenges, but the principles of identification, assessment, and mitigation remain similar.
- Are there any managerial concepts that can benefit coders?
Absolutely. Strategic thinking, effective communication, and big-picture thinking can greatly enhance a coder’s efficacy and career trajectory.
- What is the most valuable lesson from .NET that you’ve applied in the boardroom?
The importance of adaptability. Just as .NET evolves, businesses must adapt to stay relevant and competitive.
- How do you handle feedback as a leader compared to code reviews as a developer?
Both require an open mind, a willingness to improve, and the humility to accept and act on constructive criticism.
- How has collaboration in coding projects influenced team dynamics in executive roles?
It underscored the importance of diversity in skills and thought, fostering an environment where every team member’s contribution is valued.
- Do you believe other coding languages can shape leadership perspectives as .NET did for you?
Definitely. While the language or framework might differ, the principles of problem-solving, innovation, and systematic thinking remain consistent.
The journey from a .NET developer's desk to the boardroom's head seat has been transformative. While these two worlds may seem starkly different, they intertwine in ways that shape, refine, and redefine leadership. The principles I've learned from coding—collaboration, adaptability, innovation, and more—not only made me a better programmer but also a visionary leader. In today's tech-centric world, the synergy between technical expertise and managerial acumen has never been more pertinent. It's a testament to the fact that code, in all its binary simplicity, can influence the highest levels of leadership.
|
OPCFW_CODE
|
Impact of Druids on Medieval Fantasy Society?
There was progress made in the Dark Ages, but it wasn't much compared to later ages (and especially today, when it comes to computers) and I'm sure we can agree this was because people were struggling to survive; when I posted a question about the role of Druids in fantasy society, nosajimiki gave me this insight:
"You would want them to be farmers, first and foremost
I know this sounds boring, but being able to magically grow crops would be a HUGE advantage in a medieval setting. Normally 90% of most pre-industrial civilizations are farmers, but if your druids were farmers, they could produce the same amount of food as many normal farmers. This would in turn free up a massive portion of your population to pursue other endeavors like large scale construction projects, higher education, municipal services, luxury goods and services, and all those other modern comforts you get when you reduce how many people you need to meet your subsistence level needs."
Druids can speed up, enhance, and alter the growth of animals, plants, and fungi. "Altering growth" means altering the genome of an organism, either causing a suppressed trait to surface or causing a mutation of an existing trait. This allows Druids to replicate the results of years of selective breeding and genetic engineering on our crops and livestock, and even exceed those results.
In other words, one Druid, through enchantments, breeding, and agricultural practices, can multiply a regular farmer's yields by 50 if not more. They can even counteract soil nutrient depletion by growing beans and mushrooms! However, Druids can also heal others, "speak" to an animal's mind and heart, shapeshift, and as stated above, they can speed up, enhance, and alter an animal's growth. Humans are animals.
Besides that, while I decided that there would be a population of 81 million and only 810,000 Druids, that's still a lot and these Druid's children will have a 70% chance of inheriting the Druid Class (for more on that and a Druid's abilities, look at Role of Druids in Fantasy Society).
Considering all of this, my question is What Would Be the Impact of Druids on Medieval European Society?
Also Consider:
Druids are balanced; the Druid Class is a magical blessing, and its power increases by Levels as a Druid grows and gains experience. At Level 1, a Druid can only heal minor wounds (paper cuts, blisters, slight burns) and can only speed up or enhance an animal or plant's growth by a negligible amount; think 0.01%. However, a max-level (Level 500) Druid can heal life-threatening injuries (short of beheading, skull crushing, or being cut in half), quadruple an organism's growth rate, and cause, say, a horse to grow a unicorn horn.
I don't know exactly what Druids were IRL; my perception of them is people who are both mystical and attuned to nature. Sort of a nature shaman.
As always, I appreciate your input and will add any other needed details ASAP.
@user24712, I have no idea. In fact, I had no idea Druids had such dark origins.
You are basically asking how an unspecified society would be structured given the presence of people with unspecified abilities. No, "medieval fantasy society" is not a thing; this phrase can mean whatever you want it to mean. No, your Druids are in no way, shape or form similar to the actual druids; and anyway the actual druids have nothing to do with the Middle Ages -- they were extinct by the 2nd century CE. (Ah, and a basic rule of thumb: a society where 90% of the people are not engaged in scratching the dirt to grow food is most certainly not a medieval society.)
@user24712 It was all the "new age" stuff, started in the 1960s.
I think that it was mainly D & D that caused the dumbing down of Druids.
I hope that you're willing to do more research on the dark ages than you have about the Druids. There was quite a bit of progress during the dark ages.
@NomadMaker, I based my Druids off of DnD. I will likely incorporate historical details into them after I get my foundation in, and yes, I am planning to do research on the Dark Ages. There is a fascinating amount of variability there.....
Then you should add your viewpoint on Druids to the question. I will say, Gygax did not understand much about Druids.
@NomadMaker, thank you for the advice. I don't know who Gygax is, but I assume he has something to do with Dnd.
Gary Gygax, the creator of D & D (along with Dave H.)
Wow, so I really should have known who he was.
If you really think there wasn't much progress made in the Dark Ages, you would be very much mistaken! I'd suggest you do a little research first: learn about the Middle Ages, and then do a little research on real druids.
"Altering growth" means altering the genomes of a organism; either causing a suppressed trait to surface, or causing a mutation of an existing trait.
What would stop druids from dominating everything?
Maybe you will need to balance a druid's power so that a single druid wouldn't be able to cure every disease, grow a gigantic 'perfect' farm, and spread diseases everywhere. [OP already addressed this issue]
Why not health/medical assistance?
Some possible scenarios:
Imagine a king who will never be ill because he has a personal druid
taking care of his health?
Preventing future diseases on the elite classes
Eating "perfect" food that druids would be able to create/cultivate.
Suddenly some people would live many more years than they would eating normal food and using the standard medical assistance (if any at all), while others who didn't have the same "druidic" benefit would be dying earlier.
They can also do bad things
Imagine a druid who is subservient to a powerful king, and the king demands that the druid spread disease and kill all the crops of some other kingdom.
All good points; I really need to address these. Wow, I really did not expect my Druids to have such an impact.
Stability and Military Boost
Your druids are 1% of the population. The current agricultural population of the United States is 1.3%. Now this doesn't QUITE port, as the US exports a ton of food but also imports a ton. But I think we can use it as a rough guesstimate that, all else being equal, 1.3% of your population farming can support the other 98.7%. Except not really. Because while your Druids are great at GROWING stuff, they're not any better at HARVESTING things. If you raise 100 acres of real good drought/pest/disease resistant wheat in medieval times, you still need to HARVEST said wheat. By hand. Which is hugely labor intensive. As is planting, for the most part. (Though conceivably you wouldn't need to hand-plant things like rice if you could just mod them to plant well after hand-scattering.) But you have cut down on the amount of land it takes to feed people because you're increasing the yield on each acre. What does this end up doing? Well that depends on just how far down the "gene-mod" rabbit hole you want to go. But I would hazard that, while you'd have fewer full-time farmers, you'd still have huge demand for extra help during the planting/harvesting seasons. Fun!
The biggest advantage to all this though isn't that you've freed up a lot of your population to do other things most of the time. The advantage is you GET RID OF FAMINE. Wars have been fought, Kings overthrown, and civilizations have collapsed because of bad harvests. With Druids your Kingdom/s just won't have that problem. With the pressures of food-insecurity rendered largely obsolete, you might find your countries behaving much more like "modern" societies. Fewer existential reasons to go to war generally leads to fewer (or at least more limited) wars. Think about the small professional armies of post-black-death Europe compared to the huge armies of Rome, or the modern casualty lists for the US Army in Vietnam/Iraq/Afghanistan compared to losses in the World Wars.
The BIG change though would be warfare. Suddenly your army is riding carnivores or bigger, nastier herbivores into combat because your Druids "spoke to the hearts" of the mounts so they'd take commands. With your improved agriculture/animal husbandry you could even support these mounts in the field! Enemy have great cavalry? Your druids breed giant, obedient camels. Camels got your cavalry down? Ta-dah! Your druids "speak" to your horses so they don't instinctively run away. (Real thing: cavalry will NOT engage camelry because the horses can't stand camels.) King wants something with more... prestige? Giant stags. Or rhinos, or some souped-up giant wolf. The sky's the limit! Of course if you have animals like that you'll need to modify your logistics and army supply train to feed them, but if you can make 8-foot-at-the-shoulder wolf cavalry surely you can make large obedient goats to follow the army around and be their rations.
You can also get super-soldiers if they can in fact modify humans. But I think your bigger concern would be "designer babies" as it were. Suddenly if your pregnancy didn't involve a druid your kid comes out distinctly shorter/uglier/dumber/weaker than those moms who went to druids during pregnancy. It could create a permanent underclass. It could be banned for everyone but the Imperial Family by Royal decree because the King wants 99.9% of Druids doing agriculture. It could be that your agriculture isn't any better than any historical medieval agriculture because almost all your druids are busy making sure every baby born is Best Baby. The results vary, but they'll all be A Big Deal!
Welcome to WBSE! I reviewed your first post, and it looks good. I liked your use of statistics.
It's a great post; I definitely need to change some things about my Druids. Perhaps Druids can only enhance kids within a certain age range, say 13-18, and then only give them one or two enhancements?
|
STACK_EXCHANGE
|
using System;
using System.Collections.Generic;
using System.Collections.Immutable;
using System.Globalization;
using System.Linq;
using CodeFactory.DotNet.CSharp;
namespace CodeFactory.Formatting.CSharp
{
/// <summary>
/// Utility class that allows you to load in a collection of namespaces that will be used for code formatting operations for the C# programming language.
/// </summary>
public class NamespaceManager
{
/// <summary>
/// Field that holds all the using statements, ordered from largest to smallest
/// </summary>
private readonly IReadOnlyList<CsUsingStatement> _usingStatements;
/// <summary>
/// Target namespace that code will be managed under.
/// </summary>
private readonly string _targetNamespace;
/// <summary>
/// Creates an instance of the <see cref="NamespaceManager"/>
/// </summary>
/// <param name="usingStatements">Using statements to be used for formatting in code output.</param>
/// <param name="targetNamespace">Additional namespace to check for that will be the target namespace the content will be managed under.</param>
public NamespaceManager(IReadOnlyList<CsUsingStatement> usingStatements, string targetNamespace = null)
{
//Loading the namespace data in order for usage.
_usingStatements = LoadDataInOrder(usingStatements);
_targetNamespace = targetNamespace;
}
/// <summary>
/// Sorts the using statements for easier use with namespace management.
/// </summary>
/// <param name="usingStatements">Using statements to process</param>
/// <returns>The sorted using statements. If no using statements were provided, an empty list is returned.</returns>
private IReadOnlyList<CsUsingStatement> LoadDataInOrder(IReadOnlyList<CsUsingStatement> usingStatements)
{
//Bounds check: return an empty list if no list was provided.
if (usingStatements == null) return ImmutableList<CsUsingStatement>.Empty;
if(!usingStatements.Any()) return ImmutableList<CsUsingStatement>.Empty;
//Sort the using statements in descending order, largest namespace first.
IEnumerable<CsUsingStatement> sortedUsingStatements = from usingStatement in usingStatements
orderby usingStatement.ReferenceNamespace.Length descending
select usingStatement;
//Return the sorted using statements.
return ImmutableList<CsUsingStatement>.Empty.AddRange(sortedUsingStatements);
}
/// <summary>
/// Determines if the provided namespace was found.
/// </summary>
/// <param name="nameSpace">The namespace to search for in the namespace manager.</param>
/// <returns>Returns a tuple that indicates whether the namespace was found and whether the found namespace has an alias.</returns>
public (bool namespaceFound, bool hasAlias, string alias) ValidNameSpace(string nameSpace)
{
bool namespaceFound = false;
bool hasAlias = false;
string alias = null;
var usingStatement = _usingStatements.FirstOrDefault(u => string.Compare(u.ReferenceNamespace,nameSpace, StringComparison.InvariantCulture) == 0);
if (usingStatement != null)
{
namespaceFound = true;
hasAlias = usingStatement.HasAlias;
alias = hasAlias ? usingStatement.Alias : null;
}
else
{
if (string.Compare(_targetNamespace, nameSpace, StringComparison.InvariantCulture) == 0)
namespaceFound = true;
}
return (namespaceFound, hasAlias, alias);
}
/// <summary>
/// Determines the namespace qualifier that should be appended to types or other declarations, based on whether the namespace is already covered by the using statements or the target namespace.
/// </summary>
/// <param name="nameSpace">Namespace to format</param>
/// <returns>Null if no namespace qualifier is needed; otherwise the alias or namespace to use in declarations and other output.</returns>
public string AppendingNamespace(string nameSpace)
{
if (string.IsNullOrEmpty(nameSpace)) return null;
string result = null;
var managedNamespace = ValidNameSpace(nameSpace);
if (managedNamespace.namespaceFound) result = managedNamespace.hasAlias ? managedNamespace.alias : null;
else result = nameSpace;
return result;
}
}
}
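To make the intended usage concrete, here is a brief sketch. How the using statements are obtained ('source.NamespaceReferences') is an assumption about the surrounding CodeFactory model and may differ in practice:

// Hypothetical usage sketch: 'source' is assumed to be a CodeFactory C# source
// model that exposes the file's using statements.
var manager = new NamespaceManager(source.NamespaceReferences, "MyApp.Services");

// Returns null when the namespace is already in scope, the alias when one was
// declared, or the full namespace when it must be written out explicitly.
string qualifier = manager.AppendingNamespace("System.Collections.Generic");
string declaration = qualifier == null ? "List<string>" : $"{qualifier}.List<string>";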
|
STACK_EDU
|
Please VOTE for VincentTechBlog to Win Tech & Gadgets Awards 2017:
Get certified after completion of Certificate Authority (ADCS) Server 2016 Course:
When you go to manage network connections and try to view the properties, Windows shows an error message and you have no access in Windows 10, 8, and 7?
(an unexpected error occurred or Some of the controllers found on the properties tab are not availa...
Disable Access to Control Panel
If your computer is an office computer with many users, or a shared home computer, restricting access to items in Control Panel can be a good security measure to stop people from messing with settings on the computer that you don't...
Active Directory requires DNS in order to operate. This videos looks at how Active Directory uses DNS and thus improves your understanding of how to support Active Directory and ensures your DNS infrastructure will support the requirements for Active Dire...
Watch my complete Networking Tutorial Playlist: http://goo.gl/WXNhTr
Video walkthrough for using the Command Prompt to troubleshoot network connectivity using 4 KEY COMMANDS: PING, TRACERT, IPCONFIG, NSLOOKUP
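For reference, the four commands look something like this (the target addresses are placeholder examples):

:: Test basic reachability to a known public address
ping 8.8.8.8
:: Show each hop along the route to a host
tracert example.com
:: Display the local adapter configuration
ipconfig /all
:: Check that DNS name resolution works
nslookup example.com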
::::: RELATED VIDEOS ::::::
Check out http://YouTube.com/ITFreeTraining or http://itfreetraining.com for more of our always free training videos.
There are a number of different options in Group Policy that allows you to target Group Policy to particular users and computers. This vi...
Presenter: Eli the Computer Guy
Date Created: April 17, 2013
Length of Class: 35:11
Windows Server 2012
Comfortably be able to use Windows Server 2012 and Windows 8.
Be able to create Use...
This brief video demonstrates the basic process used to create a WLAN profile for 802.11 Windows clients from Windows XP through Windows 8 and is part of the WiFi Technical Nugget series at CWNPTV on YouTube....
Today, we will be allowing a certain computer access to local files and devices only but block access to the internet.
This is a great way of having a computer which only needs access to local files and not the internet.
Be sure to like...
Restrict Access to Hard Drive in My Computer
What we are going to be doing in this video is block / lock access to any drive on a computer or laptop, we will be using Local Group Policy Editor (gpedit.msc) to stop users from accessing any drive you wish...
How to block USB with a GPO in a Windows Server 2008 Active Directory domain controller.
Providing training videos since last Tuesday
Thanks for watching!...
|
OPCFW_CODE
|
Right after writing the previous post on the Visitor pattern in Python I picked up another paper on the same topic, Visitor Combination and Traversal Control. Of course this one also used Java for its examples, so once again I decided to use Python to explore the ideas presented.
The first part of the paper is all about writing small visitors that then are
combined into more complicated ones. This part is nice but not that exciting.
The interesting bit is when it gets to controlling traversal, which means it's possible to remove the traversal code that usually appears in the accept method each visited type has to implement. Let's see how that can look in Python.
The full code in this post is available at https://gist.github.com/magthe/beddad5c627946f28748.
First we need a structure to traverse, a simple tree will do.
class Tree(Visitable):
    def __init__(self, left, right):
        self.left = left
        self.right = right

class Leaf(Visitable):
    def __init__(self, value):
        self.value = value

def build_tree():
    l0 = Leaf(0)
    l1 = Leaf(1)
    t0 = Tree(l0, l1)
    l2 = Leaf(2)
    t1 = Tree(t0, l2)
    l3 = Leaf(3)
    l4 = Leaf(4)
    t2 = Tree(l3, l4)
    return Tree(t1, t2)
But before this we really should define Visitor, the base class for visitors, and Visitable, the base class of everything that can be visited.
class Visitor:
    def visit(self, obj):
        getattr(self, 'visit_' + obj.__class__.__name__)(obj)

    def visit_Tree(self, t):
        pass

    def visit_Leaf(self, l):
        pass

class Visitable:
    def accept(self, visitor):
        visitor.visit(self)
We’ll throw in a visitor for printing the whole tree too:
class Print(Visitor):
    def visit_Tree(self, t):
        print('Tree (%s)' % hex(id(t)))

    def visit_Leaf(self, l):
        print('Leaf (%s): %i' % (hex(id(l)), l.value))
Due to the lack of traversal in the accept methods it's easy to be underwhelmed by the result:

In : build_tree().accept(Print())
Tree (0x7f1680681a90)
To address this we first need a visitor combinator that runs two visitors in sequence. Unsurprisingly we'll call it Sequence. Its constructor takes two visitors, and for each node in the tree it passes the node to each one in turn:

class Sequence(Visitor):
    def __init__(self, first, then):
        self.first = first
        self.then = then

    def visit_Tree(self, t):
        t.accept(self.first)
        t.accept(self.then)

    def visit_Leaf(self, l):
        l.accept(self.first)
        l.accept(self.then)
The next building block is a visitor that descends one level down from a node:

class All(Visitor):
    def __init__(self, v):
        self.v = v

    def visit_Tree(self, t):
        t.left.accept(self.v)
        t.right.accept(self.v)
At this point it’s worth noting that the name
All probably isn’t very well
chosen, since we don’t really get all nodes:
In : build_tree().accept(All(Print())) Tree (0x7f1680681278) Tree (0x7f1680681be0)
We only descend one level, but we still keep the name since that’s the name they use in the paper.
With this in place it does become possible to create combinations that do traverse the full tree though. It even becomes rather simple. Traversing top-down is a simple matter of using a sequence that ends with All, and bottom-up is a matter of using a sequence starting with All:

class TopDown(Sequence):
    def __init__(self, v):
        Sequence.__init__(self, v, All(self))

class BottomUp(Sequence):
    def __init__(self, v):
        Sequence.__init__(self, All(self), v)

In : build_tree().accept(TopDown(Print()))
Tree (0x7f1680681ef0)
Tree (0x7f16806814a8)
Tree (0x7f16806813c8)
Leaf (0x7f1680681278): 0
Leaf (0x7f1680681550): 1
Leaf (0x7f1680681a90): 2
Tree (0x7f1680681f28)
Leaf (0x7f1680681ba8): 3
Leaf (0x7f1680681a20): 4
In : build_tree().accept(BottomUp(Print()))
Leaf (0x7f1680681ba8): 0
Leaf (0x7f16806814a8): 1
Tree (0x7f1680681a90)
Leaf (0x7f16806813c8): 2
Tree (0x7f1680681550)
Leaf (0x7f1680681278): 3
Leaf (0x7f16806a1048): 4
Tree (0x7f16806a1198)
Tree (0x7f16806a1390)
That’s all rather cute I think.
|
OPCFW_CODE
|
Hi, Me again
I have bought a new scope, a Skymax 127 Mak, and it is not solving once again. Everything is at defaults, but as far as I can see it is simply not selecting real stars, and it is selecting random noise as stars... Any tips?
As I read, it is a mixed bag for the astronomy use case. The main highlight of libcamera is enhanced post-processing of the image, and we do not need that in astronomy. It impacts performance a bit, but it could make adding support for new cameras easier, and streaming could be easily implemented. But there comes a new issue, as libcam-raw can handle a max FPS of 40, while raspiraw goes up to 130 FPS.
The best option for astronomy should be V4L2, but it does not work well with the Hi Quality Camera, as the controls are bad.
Libcam is the future but has some impact on high-speed imaging, while the legacy driver has good performance but its implementation is not the best.
Another benefit of libcam is support for additional chips from the likes of Arducam, like the IMX290 and others.
I was looking into it a bit but did not get far. It could be named indi-libcam.
Maybe if someone experienced could make architecture/design/simple prototype us amateurs could work on implementation.
Unfortunately not. I can only do a PR for the 1s fix. That is just changing one > to >=.
Regarding binning, that is just plain hardcoded, as I have no idea how the UI communicates with the driver. For it to be switchable we would need to expand the functionality and method signatures to accept additional parameters, and I do not have time to do that at this moment. But all my code is in the zip; if someone has the knowledge and time and is willing to do it, feel free, and I can even explain things if needed.
First of all, please use the EDIT button; you do not have to post a new post for every update if you are the latest poster. It will be easier for other people to find helpful advice if there is not too much clutter.
I know my guide is not the best in the details, but I wrote it on a mobile phone, not on a PC, so it is a bit harder to do it properly.
For this, either mkdir and upload my code to that folder, or change the path in the script to the location where the code is. If you are familiar with Git, there is a guide in the github.com/indilib/indi-3rdparty repository on how to clone it, and then you can overwrite ~/Projects/indi-3rdparty/indi-rpicam with the code from the other thread I linked above, or manually do the changes from this thread.
That is good. From the moment it started working for me it did not crash once, except that it lags for 15 seconds when downloading a full-res image from the DSLR. I solved that by extending the connection timeout in PHD2 to 30s, but that is not related to the driver; it is due to the DSLR, the RPI bus, and processing power.
You can confirm that the edit of the driver is working if the default resolution is 2028x1520 and not 4056x3040. Also, the pixel size should be 3.1x3.1.
I am using Astroberry, not StellarMate, but it is all Linux; it should be the same or similar enough.
Follow this post; you can download a zip of the file there.
For the first attempt you can maybe try to just unpack the zip and type in the terminal:
sudo make install
It might work; if not, follow the full guide on how to build it.
If you want longer exposures you have to do a workaround.
Simon above said you have to take a 7s exposure, but for some reason that does not work for me.
For me, to enable long exposure mode you have to take two images: the first one has to be 10s and the second one 15s.
The 10s exposure will fail at around the 6s mark, but the second 15s one will finish completely, and from then on long exposure is activated, until you restart INDI.
Mind you that enabling long exposure mode will result in much slower downloads; not recommended for guiding. Fine for capturing.
I actually never used the scheduler, but I can try it to see whether it helps; still, this should definitely be an option, not the rule.
Anyway, I really like dithering, but I do not want guiding to stop.
Indeed I did play a bit more with that version I sent. I updated the pixel size to get accurate SNR, because with the "false" hard-coded binning I implemented I would get incorrect SNR.
Also, I started playing with streaming, for future planetary capture, but did not get very far, so if you click stream everything crashes; you can comment out that line so you do not accidentally click it.
You do not need to reinstall the OS for this; you can easily reset it to default. Keep me updated, I am interested in how it works for other people as well.
Here is my custom edit of this driver. It is compiled, so you can just go to that folder and run
sudo make install
If it does not work, go through the whole procedure:
Then open Terminal and type this the first time only (prerequisite):

sudo apt-get install libnova-dev libcfitsio-dev libusb-1.0-0-dev zlib1g-dev libgsl-dev build-essential cmake git libjpeg-dev libcurl4-gnutls-dev libtiff-dev libftdi-dev libgps-dev libraw-dev libdc1394-22-dev libgphoto2-dev libboost-dev libboost-regex-dev librtlsdr-dev liblimesuite-dev libftdi1-dev libgps-dev libavcodec-dev libavdevice-dev
sudo apt-get -y install libindi-dev

And then type this:

mkdir -p ~/Projects/build/indi-rpicam
cd ~/Projects/build/indi-rpicam
cmake -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_BUILD_TYPE=Debug ~/Projects/indi-3rdparty/indi-rpicam
make -j4
sudo make install
The issue is in fact not the resolution; the issue is the exposure time of 1s. There is indeed a bug in the driver with 1s exposures. I have fixed that locally but did not make a pull request, as I did not test it with a regular setup; my configuration is heavily modified, so I cannot do a true test without returning everything to default, which I am too lazy to do.
But to help people out I will upload my source code from the RPI right here so other people can build my custom version if needed.
Regarding guiding, I have typed up everything I did in the other thread. There is still one small thing I am trying to figure out, but I have accidentally burned my mount's brain (shorted something on the STM32 BluePill), and now I am waiting for new ones to arrive.
So if you start the ekos driver and take a 10s then a 15s exposure, you will "enable" long exposure mode. In that mode download slows to ~3-4s, but you will be able to take exposures longer than 5 sec, which is important for imaging. As I was doing this by default every time I started ekos, I noticed that if I do not do it, download is much faster, but I did not find out what impact it has on image quality for guiding, due to the problem above. If image quality is degraded and SNR is bad, I prefer the old mode.
Also, not to forget, I turned off "noise reduction" in the RPI camera using this command:
sudo vcdbg set imx477_dpc <value>

where <value> can be one of:
0 - All DPC disabled.
1 - Enable mapped on-sensor DPC.
2 - Enable dynamic on-sensor DPC.
3 - Enable mapped and dynamic on-sensor DPC.
|
OPCFW_CODE
|
Because of a legacy issue, pcDuino2 only uses 2GB of its 4GB flash in the factory default installation. Users can use the 'expand_nand' option in '$sudo board-config.sh' to expand to 4GB. In some cases, the rootfs image made by users is larger than 2GB, and flashing fails because the NAND appears to be too small. In this post, we explain the steps to flash an image that is larger than 2GB.
1. Download the 2GB version of the rootfs from pcDuino.com (https://s3.amazonaws.com/pcduino/Images/v3/20140430/pcduino3_ubuntu_20140430.7z). Let's assume that the file name of the 2GB version is pcduino_ubuntu_20131126.img, and there is another file named update.sh.
2. Download a new update.sh (update) to replace the previous update.sh in step 1.
3. Copy the new update.sh and img file to a Ubuntu 12.04 64-bit X86 host pc.
4. On the X86 ubuntu, type the following command:
# dd if=/dev/zero of=pcduino_ubuntu_20140820.img bs=1M count=3500
For convenience, we name the newly created image pcduino_ubuntu_20140820.img. The following message shows that the image was created successfully:
3500+0 records in
3500+0 records out
3670016000 bytes (3.7 GB) copied, 59.7535 s, 61.4 MB/s
5. Format the newly created image to be EXT3:
#mke2fs -t ext3 pcduino_ubuntu_20140820.img
The system will prompt that ‘pcduino_ubuntu_20140820.img is not a block special device’. It will ask if you want to continue. Press ‘Y’ to continue. The following message will indicate that it was a success:
Allocating group tables: complete
Writing inode list: complete
Creating journal (16384 blocks): complete
Writing superblocks and filesystem accounting information: complete
6. Create a directory named ‘pcduino0802’ under ‘mnt’ and mount the image0820:
# cd /mnt/
# mkdir pcduino0820
# sudo mount -t ext3 -o loop /home/jiyuequn/pcduino/pcduino_ubuntu_20140820.img /mnt/pcduino0820
7. Create a directory named ‘pcduino1126’ under ‘mnt’ and mount the image 1126:
# cd /mnt/
# mkdir pcduino1126
# sudo mount -t ext3 -o loop /home/jiyuequn/pcduino/pcduino_ubuntu_20131126.img /mnt/pcduino1126
As image 0820 is a newly created blank image, it contains only a directory named lost+found. The content of image 1126 will be as follows after mounting:
# ls /mnt/pcduino1126/
allwinner  dev   lost+found  proc  selinux  tmp
bin        etc   media       root  srv      usr
boot       home  mnt         run   sys      var
boot-mmc   lib   opt         sbin  system
8. Copy the contents of image 1126 to image 0820:
# sudo cp -ar /mnt/pcduino1126/* /mnt/pcduino0820/
9. Unmount the two images:
# sudo umount /mnt/pcduino1126 # sudo umount /mnt/pcduino0820 # sync
10. Modify update.sh, changing the filename following 'IMG=' to 'pcduino_ubuntu_20140820.img'.
11. Copy out the finished 0820 image, the modified update.sh, and expand_nand; we can use these to flash the 4GB NAND.
You can download expand_nand here: expand_nand.
|
OPCFW_CODE
|
Travel Insurance covering Covid-related entry bans for US residents
We are traveling to Austria from the USA at the end of November. The travel insurance policies I have seen are confusing as to trip cancellation. Do any of the policies cover trip cancellation if a country bans citizens from your particular home country from coming into their country due to Covid? We have the ability to cancel our hotel with no penalty but the airfare is only changeable up to 48 hrs before departure.
Note that in the vast majority of cases, bans are for people who have been in a given country recently rather than citizens of any country, with exceptions for citizens and permanent residents of the destination country (and sometimes of the block they are part of, e.g. the EU). Also note that many policies exclude any travel to a country for which an advisory has been published by the origin country (e.g. CDC or State Department for the US). Austria is currently on level 3 out of 4 for both the CDC and State Department.
Do any of the policies cover trip cancellation if a country bans citizens from your particular home country from coming into their country due to Covid?
Typically not.
Insurance policies vary a lot, so you need to carefully study the details of your specific policy to find out. Personally, I found websites like squaremouth.com very helpful, since you can read the full policy before you buy (no advertisement or endorsement intended).
It appears that most insurers would require a "Cancel for any reason" policy to cover that specific case. However, these are rather expensive and typically don't cover all of the travel cost (often just 75%).
Personally, I just do medical travel insurance since it's inexpensive and the potential costs could be massive. Travel cost coverage often costs 5%-10% of the total trip price and my track record of making trips is much better than that.
Even if you get banned from entering Austria on short notice, you may still be able to negotiate something with the airlines. It's also possible that the airline will have to cancel their flight, in which case you would get a full refund.
Some sources:
https://www.insuremytrip.com/travel-insurance-plans-coverages/coronavirus-travel-insurance/
https://www.nerdwallet.com/article/travel/does-my-travel-insurance-cover-the-coronavirus
Recently many travel insurance products have been including COVID specific cover which may cover travel bans specifically for COVID. It's worth checking the fine print.
@Crazymoomin: I couldn't find a single example of this. "Covid coverage" varies wildly and typically covers health care costs and also cancellation if you get infected. I have not seen any coverage for entry bans, and most insurers clearly state that Covid is a "foreseeable" event.
I've seen some of the more premium policies with cover to that effect, though I'll agree it's not in every COVID policy (many policies do cover specific "foreseeable" events after all, like ski resort closures due to lack of snow). In these circumstances booking a package through a travel agent is the best option, since packages tend to have the strongest protections in the event you can't legally travel. Besides, the travel agent is likely to cancel in the event of a travel ban.
|
STACK_EXCHANGE
|
Use Google Docs Like an Admin
Google Docs is a great way to share documents (such as this article), but keeping track of shares can be tricky.
That can be remedied with some thought-out collections (folders) and by avoiding a few habits.
The idea is to think like an administrator. You want to make sure you know who can use what in an efficient way.
There are two ways to do this. The first is to share by collection, rather than by file. This is not anything special, but it is a good idea to name the collection by person or group. The second method is to set up shared collections per individual. This is more work, but it has a few advantages.
For each person, you create a separate, dedicated collection that is only shared with the two of you. Do this for anyone you want to share documents with. Be sure to label the folder something like "their name - your name" and place them in a "Shares" collection. From here, apply the shared label to any file or collection you want. You can right click any item and select Organize... to check off any labels you want.
The collections that any document belongs to show next to the file name. With proper labels, you can see exactly who the document is shared with. Not so with traditional methods. From the main document listing, to see who has access to a collection, it needs to be selected. For a group of files not in a collection, you have to select them one by one. This can be time consuming, as the share list only shows in the description of individual documents.
For every new shared item, an e-mail has to be sent out to the newly involved party. This can be a problem if a file does not have much information in it. There is nothing to report yet. Since a collection created for an individual (or group) is not new, it can be used to give access without premature notification.
Sadly, there will be times when you want to revoke access. Going through vast amounts of files is a time sink. I have experienced bugs when removing people from the shared list in a collection. With the individual label method, you can see exactly what is shared to them by viewing that person’s label. You can also delete the label entirely.
You can take this idea to the next level by having two shares per person. Use one for complete access and another for read-only access. Another level of sub-collections and some color-coding will help to organize your system. This is more than most people really need, but it is an idea worth considering.
An easy way to lose track of document shares is to assign them to people by file. There is no easy way to administer documents with this method. This goes double for sharing a file with a link. Granted, for a person who does not use Gmail, this may be your only option. Sound advice would be to make a public collection for files individually shared via links. Then you would be able to keep track of them.
This individual label method will take some initiative, but it is worth the work. Creating the share is a one-time hassle. In the long run, it is less intrusive to contacts. The end result is an organized, flexible, visual system designed to be easy to administer. You even have revocation abilities. Other features, such as group e-mailing, should not be hampered at all. Users can still see who has access to a document from the Share button.
|
OPCFW_CODE
|
Excel has a handy little feature that lets you quickly fill in a list of items, and even sort by that list. There are a handful of built-in lists, and Excel lets you define your own custom lists.
I’ll show how these lists work with the built-in lists, then I’ll show how to add your own custom lists. Finally, I’ll share a little program that I’ve built to manage some specific custom lists that I use frequently.
Say you want a list of months. Type “Jan” into a cell, then drag the handle at the bottom corner of the cell. Excel extends the list to fill the cells included in your drag. You can drag to fill Jan to Dec, and if you go past Dec, the list continues again with Jan. And you don’t need to start with Jan, you can start anywhere in the list, and Excel is smart enough to fill it.
There are other tricky fills you can perform with lists. You can enter Jan into a cell, and drag upward, and Excel will fill backwards. You can enter Jan and Apr into two cells (the first months of quarters 1 and 2), then select the two cells and drag, and Excel will continue to fill with Jul and Oct (the first months of quarters 3 and 4). You can enter two months in reverse order (Dec and Nov below), select the two months, then drag to fill in reverse order.
In the figure below, I illustrate horizontal fills with the built-in list of days of the week.
Another smart thing about lists is that Excel picks up the capitalization of the first item and applies it to the other items in the filled range. Below you see how Excel has filled with initial caps, all caps, and no caps.
Excel has four built-in lists you can use: month abbreviations, month full names, day of week abbreviations, and day of week full names.
In addition to the built-in lists shown above, which can be viewed in the Excel user interface (described later), there are numerous other lists that just seem to work.
Numbers themselves can act as a list. If you select a number (first example below) and drag it, you simply get a string of that number. Not very useful. But if you select an adjacent cell, like the blank in the second example or the random text in the third, the blank or random text are repeated, while the number increments. If you select the first two numbers in a sequence, dragging will continue the sequence. If the numbers are not consecutive, the sequence continues with the same spacing as in the initial selected values.
If you select more than two initial numbers, extending the sequence will fill in values from an extended trendline, as if they were computed using TREND() or FORECAST().
If the starting value is an arbitrary string that ends with a number, extending the list appends sequential numbers to the string. If two values are selected, then the extended sequence uses the spacing between the selected values.
In addition to the built-in lists based on Day and Month names, there are several other interesting date-based sequences. Selecting and dragging a date fills the range with subsequent dates. No need to select a helper cell as with regular numbers. Selecting two dates defines the spacing of the generated sequence.
It works with times, too.
You can use Q1, Qtr 1, or Quarter 1, and the sequence will generate the quarters of the year. It’s smart enough that after the fourth quarter, Excel starts counting again with the first quarter (presumably of the following year).
Another neat sequence is with ordinal numbers.
Sorting with Lists
Just as you can sort numerically, you can also sort with lists. Below is a small Table with a column of month abbreviations. They are listed in the least useful sort order, alphabetical. If I click on the filter dropdown in the Table header, I have choices to sort A to Z or Z to A. Using Sort A to Z is probably how the months ended up in this unfortunate order.
But if I go to the Home tab of the ribbon and click Sort & Filter, I also see an option for a Custom Sort.
This brings up the Sort dialog, and in the Order setting, I see an option for Custom List.
Clicking this option brings up the Custom Lists dialog, and below I’ve selected the month abbreviation list that I want to sort by.
After clicking OK a couple times, now I’ve got my months sorted chronologically. And now, if I use the A to Z and Z to A sorts on this column, it remembers it’s associated with the list, and it sorts forwards and backwards by the same list.
To sort by a built-in list, I had to access the Custom Lists dialog. So how do I add a custom list?
You can get to the Custom Lists dialog by clicking on the File tab of the ribbon, then clicking Options, then clicking Advanced in the dialog. Scroll way down, and almost at the bottom is the button for Edit Custom Lists.
This brings up the Custom Lists Dialog.
Click in the List Entries box, and start typing your list. Press Enter to separate list entries.
Click the Add button, and your list is added to the options. Note that the selected list is enumerated in the List Entries box. You can add, delete, and edit these entries, and click Add to save your changes.
If you select a range, then open up the Custom Lists dialog (via File > Options > Advanced), you can import the selection as a new list. The range is prepopulated in the box next to the Import button.
Clicking Import adds your range to the list. You can click the arrow button next to Import to add a list from another range.
The dialog now shows my usual setup with four custom lists: the Latin alphabet, the Greek alphabet, the military call signs used to communicate letters, and a list of counting numbers from 1 to 12. I find these useful when I’m doing a demo or workshop and I need to quickly generate a list of categories for a chart.
Whenever I set up Excel on a new computer, one of the first things I do is set up these custom lists. (I should write a post that details all of the first things I do when I set up a new account. Then I could read it and do everything, and not remember some things weeks later.) This usually entails opening a file that has these lists laid out in a worksheet. But even this seems to be too slow sometimes.
Custom List Manager
To facilitate the creation of my favorite custom lists, I built a little tool called the Custom List Manager (click on that link to download it). It's a regular Excel macro-enabled workbook. A sheet called Built-In Lists will look familiar, because I took screenshots of it for earlier sections of this post. Another sheet called Custom Lists contains the lists themselves.
A third sheet, called Sample Lists, has some lists you can play around with.
When you open the workbook, a new ribbon tab appears.
When you click the Custom Lists button on this tab, you get the following dialog. For each of my standard custom lists, there is a button to add it and another to remove it. Depending on whether the list is present, one or the other of these buttons is enabled. There is also a button which will add lists using columns of a range you’ve preselected.
Try out this little tool. The VBA code is unprotected so you can prowl around its inner workings. Let me know if you like it, and if you have any suggestions.
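If you want to build something similar yourself, the core of such a tool is just a few calls to Excel's custom list API. Here is a minimal VBA sketch; the list contents are illustrative, and this is not necessarily how the Custom List Manager itself is written:

' Add a custom list if Excel doesn't already know it, and remove it again.
Sub AddGreekList()
    Dim greek As Variant, n As Long
    greek = Array("alpha", "beta", "gamma", "delta")
    On Error Resume Next            ' GetCustomListNum errors when no list matches
    n = Application.GetCustomListNum(greek)
    On Error GoTo 0
    If n = 0 Then Application.AddCustomList ListArray:=greek
End Sub

Sub RemoveGreekList()
    Dim n As Long
    On Error Resume Next
    n = Application.GetCustomListNum(Array("alpha", "beta", "gamma", "delta"))
    On Error GoTo 0
    ' Lists 1-4 are built in and cannot be deleted.
    If n > 4 Then Application.DeleteCustomList n
End Sub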
|
OPCFW_CODE
|
A tree too big to embrace grows from a slender shoot. A thousand mile journey begins with a single step. People commonly ruin their work when they are near success. Proceed at the end as at the beginning and your work won't be ruined. -- Lao-Tzu
I find this particularly relevant to programming projects. Don't panic or take shortcuts at the end!
This refers you to my website, and the course page thereof,
Recorded last August, only change is that course meetings both start at 11 AM, for 1 1/2 hours - which we never exceeded last term.
We covered 4 languages during CS203; this term we add SQL, the database language. The goal is to allow flexibility in storing and retrieving data.
Databases store tables of rows and columns, which represent entities (such as students) and relationships, such as who is enrolled in which courses. Your database can be accessed by any PHP script.
A look at PHP functions for executing SQL queries on a database. I store some data from a form (poems) into a table, instead of writing it to a log file. This is more flexible than writing to a log file: later I can display results selectively, and control how they are formatted.
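A minimal sketch of that pattern might look like this; the connection string, table name, and form fields are placeholders, and pg_query_params is used so form data can't be interpreted as SQL:

<?php
// Store a submitted poem in a table instead of a log file.
// Connection details and table name are illustrative.
$db = pg_connect("host=localhost dbname=course user=student");

$title = $_POST['title'] ?? '';
$text  = $_POST['text'] ?? '';

// Parameterized query: the $1/$2 placeholders prevent SQL injection.
$result = pg_query_params($db,
    'INSERT INTO poems (title, text, submitted) VALUES ($1, $2, now())',
    array($title, $text));

if ($result === false) {
    echo pg_last_error($db);   // always check for errors while developing
}
?>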
After that, I show typing SQL commands in a terminal window, and create a table named foo. You can get to this on Linux by typing psql.
About the course, about databases and sql. Includes a demo of doing sql commands in a terminal window.
As promised, I develop an application with a form for choosing colours, and a PHP page that receives the form data and stores it in a database. Thrills and chills! After an hour, there was still a problem. Moral: quit, have supper, and afterwards the problem is quickly solved. Know when to stop!
During this meeting, I was asked about using pg_query for showing what's in a database. It went badly, so I promised to record a video to finish.
Here is the promised video of the choosing colours example. You can see the result at osiris colour.html. It is important to look at the pg errors, and remember how your database tables are organized!
In particular, it is your choice what you want to do in the course; be creative. I want you to know about SQL, and you can now use it as much or as little as you like. I call attention to the problem of robots exploiting your site, and the abuse() function I created for your use. I was asked about storing passwords as a hash, so they can't be stolen, and I made the next video about that...
One wants to store not passwords, but a hash that does not reveal the password. I show the code involved, and also when you view the tryme.php page, You will see a pattern for requiring a decently hard to guess password.
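A minimal sketch of that approach, using PHP's built-in password API (table and column names are illustrative, and $db is assumed to be an open connection):

<?php
// Registration: store only the hash, never the password itself.
$hash = password_hash($_POST['password'], PASSWORD_DEFAULT);
pg_query_params($db, 'INSERT INTO users (name, pwhash) VALUES ($1, $2)',
                array($_POST['name'], $hash));

// Login: fetch the stored hash and let PHP do the comparison.
$res = pg_query_params($db, 'SELECT pwhash FROM users WHERE name = $1',
                       array($_POST['name']));
$row = pg_fetch_assoc($res);
if ($row && password_verify($_POST['password'], $row['pwhash'])) {
    echo "Welcome!";
}
?>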
Turkeys were outside my window. Summary of this week's discussion of fighting abuse, and avoiding SQL injection.
The functions count(), min(), max(), sum() and avg() can be used in SELECT statements, which then return one row. In conjunction with GROUP BY, groupings are formed, and the functions are applied to each group. The SELECT can then only list the grouping attribute and the functions.
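For example, assuming a hypothetical table enrollments(course, grade), a grouped query would look like this:

-- One row per course: how many students, and the average grade.
SELECT course, count(*) AS students, avg(grade) AS mean_grade
FROM enrollments
GROUP BY course;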
Mostly a discussion about IFRAME, and a question was asked about updating information when a form is on the same page. This proved tricky, and I finally solved it a day later. I realized that submitting the form takes you to a PHP page; that page can update the data to be shown, and then use a meta tag to bring the user back to a new version of the original page. I have done so with the Poetry page.
I will try to describe there how it's done.
About Labs 8,9,10 and finishing the course. I am asking what you want to do with your pages afterwards. If you keep your site, keep it updated and protect against abuse and SQL injection.
|
OPCFW_CODE
|
When I came up with the original concept of RECaf, shortcuts hadn’t been announced yet. I knew the key to any data-logging app is making the act of adding new entries as simple as possible, so I focused all my efforts on reducing log friction. If it takes too much work, you won't get into the habit of logging daily.
I put a ton of effort into designing the simplest logging process possible. RECaf only needs three pieces of information to make a log entry: a source, an amount, and a date. Since most of the time you want to log an item you are consuming right now, RECaf can usually assume the date is now. If you tend to have the same sources in the same amounts regularly (we are creatures of habit, after all) RECaf can notice those combinations and make them more readily available.
Surfacing your top three most frequent sources automatically and making them one-tap buttons, then, became an easy addition. Adding a favorites pane for any extra items you sometimes log helps to capture most everything else. Even custom logging infrequent, non-favorite items I managed to get down to just a few taps on a single screen. Choose a category, source, amount, adjust the date if necessary, and you’re good to go. No scrolling through long lists just to find what you want.
3D Touch shortcuts on the home screen icon, the today widget, and a watch app give you those most frequent items in even more places.
But then Apple announced shortcuts this June, and things got way more exciting. People could just invoke Siri and say “Log Cappuccino.” And that was it. I knew I had to get this into my app immediately.
Using shortcuts with RECaf all summer has been a game changer for me. It’s supplanted most of the need to ever have to log the “old fashioned way.” Shortcuts have finally given me a reason to actually do something with Siri beyond setting a timer or adding reminders. I’m going to be looking to set up shortcuts in as many apps as possible this fall.
But there is one issue with shortcuts—they need to be set up by the customer in order to work. And in order for that to happen, the customer needs to know shortcuts exist in the first place.
And so we run into our old nemesis: discoverability. This is going to be the central challenge for designers working to add shortcuts to apps. Doing this poorly will cost you dearly. Your customers will overlook one of the best features to come to iOS in years. And that would be a real shame.
Counting on Apple alone to make people shortcut-aware would be a mistake. Remember iMessage apps? How many of those caught on with your non-tech friends?
Counting on Siri Suggestions would also be unwise. What are the chances your app will stand out in a suggestions list with 50 other apps competing for attention?
So how did I approach discoverability of shortcuts in RECaf?
Let me start by saying that I do not like to bombard my customers with tons of pop-ups and annoying messages trying to teach them about the app. Especially on first launch. We've all downloaded lots of apps at this point. The first launch sequence often becomes a battle of how many screens do I have to swipe or tap through to get to the darn app, already?
If you make the first launch more than a few screens, you’re lucky if the customer remembers anything you tried to teach them. If there are tons of permissions pop-ups involved, chances are they will tap through everything without reading.
I like to stick to what’s absolutely necessary on that first launch. For RECaf, that meant getting HealthKit access permission (because RECaf relies heavily on HealthKit) and (optionally) getting a free trial started. This way the customer can get the full experience of using the app right out of the gate with minimal interruption. You will need to decide what is most important for your particular app.
RECaf does not ask for notification permissions on first launch. (I save that for after your first log.) No prompting for ratings. (How can they rate an app they haven’t used yet? That comes after several days of use.) No tour of the entire interface. (They can do that on their own.) No signing up for any newsletters, etc.
I want my customers to get in and start logging.
So where do I add shortcut prompting?
Well, first I wanted to be sure that setting up shortcuts was easy, and that it could be discovered without any prompting. Not everyone will find shortcuts in RECaf on their own, but it should be possible at least, right?
After a couple of iterations, I ended up with buttons placed on every amount listed on the source detail page, with a microphone icon, to indicate the customer would need to record a voice phrase. Maybe this person has never heard of shortcuts. Maybe they’re just curious and want to know what that button does. They tap the microphone, and Apple’s standard shortcut creation screen comes up and does the rest for me.
Once you’ve already recorded a shortcut for that amount, the phrase you recorded appears, and the icon is filled in. This makes it easy to see which amounts already have shortcuts, and it reminds you what you need to say to invoke that shortcut. Tap on the filled icon and you can edit or delete the shortcut. I show the phrases on the favorites screen as well.
So that takes care of making it possible to create a shortcut at any time for any source. But I’ll be lucky if more than a few people go hunting into the source detail screen on their own.
So how to balance making people aware of shortcuts without bombarding them?
I came up with a nice compromise. Here’s the scenario. You log a particular source/amount combination. (Say a 12 fl oz Café Cubano, as an example.) RECaf checks to see that all of the following are true:
- You have logged this exact combination of source and amount at least five times
- You have not already created a shortcut for this source/amount combo
- You have set up fewer than three shortcuts from these suggestions so far (after three, RECaf stops prompting)
- You haven’t yet indicated that you’d like to stop being reminded about shortcuts
If all are true, then after the confirmation screen indicating your log was successful, this screen will pop up:
From here, you can read about shortcuts and their usefulness, get a tip on how to create a shortcut for any source in the future, and of course tap a button to create the shortcut right there if you like. I also give you a way to either cancel just this particular shortcut’s creation (maybe you are in a place where you can’t talk right now), or inform RECaf that you’d prefer not to get these reminders in the future. That way, if you want to record the shortcut later, it will remind you again next time. But if you hate the whole idea of shortcuts, you can ignore them forever.
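To make the logic concrete, here is a minimal Swift sketch of that gate. This is purely illustrative; every name is invented, and it is not RECaf's shipping code:

// Hypothetical sketch of the shortcut prompt gate described above.
struct ShortcutPromptGate {
    var comboLogCount: Int               // times this exact source/amount combo was logged
    var comboHasShortcut: Bool           // a shortcut already exists for this combo
    var shortcutsCreatedFromPrompts: Int // shortcuts the customer set up via earlier prompts
    var remindersDisabled: Bool          // customer opted out of shortcut reminders

    var shouldPrompt: Bool {
        return comboLogCount >= 5
            && !comboHasShortcut
            && shortcutsCreatedFromPrompts < 3
            && !remindersDisabled
    }
}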
Note my thinking here:
- At least five times. I’m not going to push you into shortcuts on day one or for everything you log. It’ll likely be days before you see your first shortcut prompt. That’s okay. The app is still great without shortcuts. It’s just better with them.
- The app is making note of your behavior and predicting your future intentions. The simplest form of machine learning, to be sure. But machine learning all the same. (I’ll have more to say about the more complex machine learning surrounding reminder notifications in a later post.)
- The prompt happens in response to an action. It doesn’t just show up on launch, when you’re likely trying to quickly log something. It waits until you’ve done your logging and then prompts you with a helpful tip to make logging that exact item even faster next time.
- After you set up three of these shortcuts, RECaf stops. By then, you get the idea behind shortcuts, and you’ve been shown more than once how to set them up on your own. Maybe you didn’t read that screen carefully, but chances are, you’ll get curious enough to go looking elsewhere in the app at that point.
I know this isn’t perfect. If you always log from your watch, for instance, you’ll never get prompted. If you drink something different every day, it’ll be quite a while before any shortcuts get suggested. Some people will just cancel the screen every time without reading it or just tap the button and get confused. At the end of the day, it’s still a pop-up screen, which is an interruption and a potentially unpleasant surprise if you have no interest in Siri or voice-activated computing. But like I said, it’s a compromise. If RECaf bugs you once, you tell it to never bug you again, and it obeys, I’m okay with that.
The alternative is the majority of my customers missing out on what I think is the killer feature of the app.
I’m very curious to see how other designers and developers approach this problem. It’s challenging designing these solutions in a vacuum, before you get the benefit of seeing other approaches. Perhaps once I get a glimpse of some other apps with shortcuts, I’ll revisit and develop it further.
I’m also curious to see how shortcuts are adopted by my customers. With any luck, the majority will be logging with their voices a few times a day, then carrying on, only launching the app occasionally to see their stats.
RECaf will be available shortly on the App Store. To find out more, visit the web site or sign up on the mailing list.
I settled on a microphone, rather than a Siri icon, as it was not clear to me that using the official Siri icon would be allowed by app review. Better safe than sorry. Besides, I'm not sure the average customer would recognize the Siri icon at this point, or be able to surmise how Siri and my app are related. A microphone is a pretty universal icon for recording something, at least. You may mistake it for recording voice notes, or something. But if you tap it and learn about shortcuts instead, it's not the end of the world.
|
OPCFW_CODE
|
I always thought that Kindle content files were formats like AZW and TPZ, until I came across someone trying to convert a Kindle APNX file to PDF. Below are some notes on what APNX files are, and on whether and how they might be converted to MOBI or PDF.
Convert your PDF files, ebooks from other readers, or just plain text to the ePub format; this format is known by most ebook readers. Upload a file or provide a… What are APNX files used for? We can explain what the Amazon Kindle Pagination Index file does. One user asks: "I have 97 books that are APNX. When I go to convert them to MOBI, I get a message that they can't be converted because they can't recognize the…"
NOTE: The generated ePub should be validated using an ePub validator; should changes be needed, it should load properly into Sigil and Calibre, either of which can be used to edit the result to create a fully valid ePub. The tools require Python 2.X or Python 3.X. On Windows machines we strongly recommend you install the free version of ActiveState's ActivePython 2.X or later, as it properly installs all of the required parts, including the Tk widget kit, and updates the system path on Windows machines. The official installer from python.org… On Mac OS X 10.X and later and almost all recent Linux versions, the required version of Python is already installed as part of the official OS installation, so Mac OS X and Linux users need install nothing extra.
What settings do I need to change in Calibre when converting from one format to another to prevent losing page number recognition?
I've tried to figure this out myself, I've looked at meta data and Book conversion, but with no luck. I'd appreciate some help. I'm not sure this will help, but I don't see page numbers when looking at my books in the Calibre viewer even before the conversion, so, I'm not sure what to do.
Also after conversion I transferred the book to my kindle via Calibre using my kindle email. Content wise everything else is good, just no page numbers.
The APNX file is only correct when used with the specific book it was generated for, and conversion will make them no longer match. What you desire is impossible.
I'm just checking because you've indicated you have a Kindle Keyboard device in your profile. I really did not have to do the conversion.
So I chose one of my favorite books by Nikolai Gogol, Dead Souls, and set out to mock-translate it into an English-language e-book, finding the answers to my questions at the same time.
Step 2: The Translation Process
I took advantage of this project to learn another translation tool.
The next morning, what would you know? There it was, all bundled up and ready to work. The size made it a bit hefty to work with, there were a suitable number of processing slowdowns in the work, but I managed. In theory, GTT is supposed to automatically insert placeholders everywhere it finds tags.
In fact, it even tried translating and editing some of it by adding spaces and other extraneous symbols. GTT uses its own custom file for glossaries, so I used their guidelines for creating a Glossary in Google Sheets, saved it as a. Lo and behold, it worked!
From there, I could browse the glossary at my leisure, or activate it for a certain project. In that case, you can see the glossary entry at the bottom right-hand corner of the user interface when a sentence containing that word is selected.
Part 3: Code Crafting
Next began the process of taking that code and fitting it into a functional ePub document that could display pages.
Sigil was my weapon of choice for this. I started with the process we used in class for a basic e-book.
I opened a new file, added a cover, metadata, and the like. Then I pasted in the translated file and started to break the file into separate chapters. Having done that, I started dealing with coding in the page numbers. I was lucky enough that the page numbers were already denoted in the html, and all I had to do was change the tags to fit the e-book standard. You'll notice that each book has several files associated with it.
Drag and drop every file associated with the book into the main Calibre window — as you can see in our screenshot, a warning will pop up about duplicates. Click the "Select None" button, and then OK. For some reason, our book showed up twice in the list, but only one of them worked — we removed the other by right clicking it and selecting "remove book".
A new window will open, laden with dozens of options to tailor the output.
However, the font size was huge, so we converted it again, but this time used the font size option on the PDF Output options screen to make it much smaller. After fiddling with a few of the settings here, we finally ended up with a PDF that was as clearly laid out as the original e-book but viewable on any device.
Removing DRM and changing file type are only two of many of Calibre's features. Needless to say, if you've got an extensive e-Library read over multiple devices, this free software is an essential download.
|
OPCFW_CODE
|
Parsing dates in R from strings with multiple formats
I have a tibble in R with about 2,000 rows. It was imported from Excel using read_excel. One of the fields is a date field: dob. It imported as a string, and has dates in three formats:
"YYYY-MM-DD"
"DD-MM-YYYY"
"XXXXX" (ie, a five-digit Excel-style date)
Let's say I treat the column as a vector.
dob <- c("1969-02-02", "1986-05-02", "34486", "1995-09-05", "1983-06-05",
"1981-02-01", "30621", "01-05-1986")
I can see that I probably need a solution that uses both parse_date_time and as.Date.
If I use parse_date_time (from the lubridate package):

library(lubridate)
dob_fixed <- parse_date_time(dob, c("ymd", "dmy"))
This fixes them all, except the five-digit one, which returns NA.
I can fix the five-digit one, by using as.integer and as.Date:
dob_fixed2 <- as.Date(as.integer(dob), origin = "1899-12-30")
Ideally I would run one and then the other, but because each returns NA on the strings that don't work I can't do that.
Any suggestions for doing all? I could simply change them in Excel and re-import, but I feel like that's cheating!
"Ideally I would run one and then the other, but because each returns NA…" You can use the NAs as an index to run the second, i.e. i1 <- is.na(dob_fixed); dob_fixed[i1] <- as.Date(as.integer(dob[i1]), origin = "1899-12-30")
We create a logical index after the first run based on the NA values and use that to index for the second run
i1 <- is.na(dob_fixed)
dob_fixed[i1] <- as.Date(as.integer(dob[i1]), origin = "1899-12-30")
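Putting it all together as a self-contained sketch (same logic as above, just runnable end-to-end):

library(lubridate)

dob <- c("1969-02-02", "1986-05-02", "34486", "1995-09-05", "1983-06-05",
         "1981-02-01", "30621", "01-05-1986")

dob_fixed <- parse_date_time(dob, c("ymd", "dmy"))  # five-digit Excel serials come back NA
i1 <- is.na(dob_fixed)
dob_fixed[i1] <- as.Date(as.integer(dob[i1]), origin = "1899-12-30")
dob_fixed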
This is great, thank you. It worked perfectly. Out of curiosity, is there a way to do this without creating a new vector (ie, i1)? Would that require an if statement?
@azul You can use replace, i.e. replace(dob_fixed, is.na(dob_fixed), as.Date(as.integer(dob[is.na(dob_fixed)]), origin = "1899-12-30"))
Thanks, that's really helpful.
|
STACK_EXCHANGE
|
The goal is to better track usage per team.
This is not only about billing; it's also to understand which team/job uses the most resources and to avoid uselessly allocating a huge number of pods.
For now, what seems clear is that we need to filter and aggregate the usage data stored in BigQuery. The natural keys are "project" and "namespace"; however, this is not enough for what we need.
As a result, we need to add labels to each resource we allocate in Jenkins X.
It should be possible to add a few labels to each resource we create.
As a first step we can propose:
- team: platform, gang, nos, devtools …
- usage: build, preview, ftest, infra (nexus, chart museum …)
- branch: master, NXP-xxx
See Labels and Selectors.
Adding labels to all the Kubernetes resources used by the platform team is not trivial, yet we've managed to add some labels to the main resources: pods running the pipelines and used for the ARender preview.
Covering the resources used by a dedicated Jenkins X instance (aka team) is not trivial since:
- There are many different kinds of resources.
- It's hard to always find out how/when/by whom these resources are created.
As a first step, we've managed to add labels to some of the resources related to the platform team:
- Pipeline pods: they run in the platform namespace.
- Unit test pods: they run in a dedicated namespace created by the nuxeo/nuxeo pipeline, "nuxeo-unit-tests-redis-master" for instance.
- ARender preview resources: they run in a dedicated namespace created by the nuxeo-arender-connector pipeline with jx preview, "nuxeo-arender-pr-100" for instance. This includes deployments and services for each microservice as well as the nuxeo Helm chart resources.
Yet, here is a set of resources to which we didn't manage or have time to add any labels, and we are probably forgetting some:
Basically, what is installed by the Jenkins X platform Helm chart: mainly the Jenkins, Nexus, Docker registry, and ChartMuseum deployments.
Workaround: query on namespace="platform" AND myLabels IN (('app', 'jenkins'), ('app', 'nexus'), ...).
Approach: we could use our own Helm chart to install the Jenkins X platform with custom labels and/or open a PR to be able to add custom labels when installing the existing chart.
Basically, any time we build a Docker image: builders, platform, ...
There doesn't seem to be a simple way of doing it.
Workaround: query on namespace="platform" AND myLabels IN (('skaffold-kaniko', 'skaffold-kaniko')).
Approach: There's an issue about adding annotations to Kaniko pods: https://github.com/GoogleContainerTools/skaffold/issues/1759. We could create a GitHub issue for labels.
This includes services, statefulsets, ...
Workaround: get the Redis namespaces with:
then for each namespace get all the resources with:
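The exact commands were not captured above; plausible kubectl equivalents would be (NAMESPACE is a placeholder, and the grep assumes the Redis namespaces contain "redis" in their name):

kubectl get namespaces | grep redis
kubectl get all -n NAMESPACE --show-labels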
Approach: we could update our nuxeo-redis Helm chart to allow custom labels in the templates.
E.g. mongodb, postgresql, elasticsearch.
Workaround: same as Redis.
Approach: same as Redis.
The current solution is not exhaustive and feels kind of hackish, as we need to hook into a lot of places to add the labels and we're duplicating some code.
In the future, it should be improved with a more global and sustainable solution:
- Use a jx wrapper to inject the labels for each command such as jx step helm install or jx preview. In fact, this is what jx itself is doing by patching the Helm chart YAML...
- Have a Kubernetes operator handling it whenever a pod, namespace or whatever resource is started from the platform namespace.
This is an example of what can be done to retrieve the resource usage for the "team: platform" label and the Kaniko pods.
Not 100% sure about the difference between the gke_cluster_resource_usage and gke_cluster_resource_consumption tables.
According to https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-usage-metering#view_in_bigquery:
- gke_cluster_resource_usage => resource requests
- gke_cluster_resource_usage_consumed => resource consumption (except in our case the table is named gke_cluster_resource_consumption).
There seems to always be a delay between the results returned from the consumption table and the usage one...
The usage.amount and usage.unit fields can be interesting.
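For instance, a hedged sketch of such a query (the PROJECT.DATASET path is a placeholder, and the labels array / usage struct follow the GKE usage metering export schema; this is not necessarily our exact query):

SELECT
  resource_name,
  usage.unit,
  SUM(usage.amount) AS total_amount
FROM `PROJECT.DATASET.gke_cluster_resource_usage`
WHERE EXISTS (
  SELECT 1 FROM UNNEST(labels) AS l
  WHERE l.key = 'team' AND l.value = 'platform'
)
GROUP BY resource_name, usage.unit
ORDER BY total_amount DESC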
Some useful examples below.
Get labels for a given pod:
Get labels for all resources of a given namespace:
Filter pods by label (-A to list the requested objects across all namespaces):
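The commands themselves were not captured above; standard kubectl equivalents would be (POD_NAME and NAMESPACE are placeholders, and team=platform is our example label):

kubectl get pod POD_NAME -n NAMESPACE --show-labels
kubectl get all -n NAMESPACE --show-labels
kubectl get pods -A -l team=platform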
It makes it possible to build interesting GKE Usage Metering reports based on the jx-preprod.GoogleBillingDetails.gcp_billing_export_v1_00E3A4_C28D15_595CC3 BigQuery table; see https://datastudio.google.com/datasources/1RgkQ95xH5j-070XBT6P3Nn1Qo-a5SAPN then EDIT CONNECTION.
Default report example, based on a namespace aggregation:
https://datastudio.google.com/reporting/1qZEwX6S4E51QlHlQ5X1G8z0y-mgL-HlK/page/bLKZ, see attached screenshot.
We could probably configure some fine-grained aggregations based on labels.
|
OPCFW_CODE
|
[quote=“racegunner, post:1, topic:3295”]
ok, re-install engine, new installation -> same problem[/quote]
yeah that’s a classic. not with engine in particular but any software.
most people think that re-installing is a fresh start but in most cases the problem is caused by the settings files or leftovers and those are not affected by most installers unfortunately.
[quote]think i have to kill ALL files from engine on the mac and new installation.
but where does engine save these informations?[/quote]
exactly, that’s how to get a fresh start. engine has 3 places for settings and db files on osx:
/Users/YOUR_NAME_HERE/Library/Application\ Support/MusicManager => delete the whole folder
/Users/YOUR_NAME_HERE/Library/Preferences/Engine.plist => delete that file only
and the third one is the actual database. that doesn’t have a specific location but depends on where your music is. in each folder from which you added an audio file to engine it created a hidden folder called “.DDJMEMO”. so go through all your music folders and delete that folder in each one.
if you have a high number of music folders you can also run that automatically using the osx terminal. in that case let me know and i can show you a one-liner for that.
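for reference, a typical one-liner would look something like this (adjust the path to your own music folder, and double-check it before running since it deletes folders):

find ~/Music -type d -name ".DDJMEMO" -prune -exec rm -rf {} +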
Import into Engine collection without analyzing (de-select that option in Engine preferences) and analyze them when the import is complete (select all songs, right click and select re-analyze).
Also, don’t import the whole HDD at once, for frack’s sake… do a couple thousand at a time at most. I won’t even ask why you have a 32k-file collection…
and because you didn’t have to pay for them?
back on topic: considering the Engine 1.5.x update is close (Denon confirmed it would be released along with Engine Prime), I would wait for it and then check if they improved it.
Too lazy/DGAF to find the exact source (regarding date of release) but I do remember reading it either here/social media/other forums.
Why surprised? Engine 1.5.2 doesn’t have all the necessary functions (BPM/beatgrid tools are missing) and the MCX8000 was confirmed to be updated with a newer version of the Engine software (not Prime); there’s even an article on djworx.
Delete the “C:\DENONDJ2” folder. This should allow Engine to run. Also, when it asks to scan the PC for music, tell it no and add music folders manually. This was the only way I could get Engine to run and scan music without crashing. As a note, I only have around 700 tracks and it took several hours for Engine to scan/analyze them.
Thanks, I did that and it finally opened. It’s actually a great program for what it tries to accomplish, but it sometimes crashes and burns. If they could figure it out, they would corner the market with their incredible controllers.
I am having these same issues and unfortunately I cannot find the files “MusicManager”, “Engine.plist” & “.DDJMEMO”. I have tried searching for these files and looking in the locations listed. Any other suggestions on where these files might be? I just bought the MCX8000 and now I am already ready to return it!! How frustrating!!
|
OPCFW_CODE
|
- Dave Kimura
- Nate Hopkins
- Charles Max Wood
Special Guest - Radoslav Stankov
0:00 – Charles introduces the panel and the special guest.
0:30 – Advertisement: Sentry - Use the code “devchat” to get two months free on Sentry’s small plan.
1:40 - Radoslav introduces himself and gives a short description about what he is working on.
2:20 - Charles asks him about the stack at Product Hunt and details about the company. Radoslav gives a brief historical background while explaining that they moved to GraphQL two years ago. He states that his team consists of about six full stack developers. He explains that GraphQL is currently their main API for communicating between the Rails backend and a React client in the front. He also mentions that they released a new iOS app recently.
5:12 - Charles asks if an increasing number of websites are moving toward the mentioned model, where Rails provides the backend API and rendering happens in the front. Radoslav agrees, saying Rails rendering is faster to start with, but as the app grows the views become increasingly complex; he gives an example of views to explain his point. He interprets GraphQL as an upgrade on the REST API that is much cleaner and easier to work with.
7:08 - Dave agrees that GraphQL is interesting and compares it to SOAP interface while explaining the comparison in detail. He asks Radoslav the reason why GraphQL is used internally without a client facing API. Radoslav answers that he prefers GraphQL to be private and explains with an example using it internally is very flexible, hassle free and can be used for anything that the user wants to do in a simple manner.
11:30 - Dave asks whether GraphQL handles versioning as the application matures. Radoslav elaborates, saying that versioning is similar to a REST API, and that with GraphQL the schema is statically typed, so it's easy to identify information such as which fields are requested frequently by customers and which need to be deprecated.
14:08 - Dave asks if GraphQL has a documentation API like Swagger. Radoslav talks about a tool called GraphiQL, an IDE for GraphQL queries that generates automatic documentation.
16:22 - Nate then says that this is basically like hoisting the SQL database to the frontend layer, and goes on to ask how the database queries are optimized. Radoslav explains in detail that the optimization is done similarly to normal Rails, and describes the process of batching. He mentions that he has written two blog posts on the same topic - optimizing N+1 queries.
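For illustration, a minimal graphql-ruby sketch of the N+1 issue and the preloading fix (type and model names are made up; this is not Product Hunt's code and is not from the episode):

class Types::QueryType < GraphQL::Schema::Object
  field :posts, [Types::PostType], null: false

  def posts
    # A naive Post.all would trigger one comments query per post (N+1)
    # when each PostType resolves its comments field; preloading the
    # association up front is one common fix:
    Post.includes(:comments)
  end
end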
19:27 - Dave shares that GraphQL has a good feature where you can restrict the query based on what the user wants. Radoslav talks about the method of caching for optimization.
21:30 - Charles asks if building resolvers has gotten better than before. Radoslav answers in affirmative and talks about the usage of classes, methods and mutations that makes the procedure simple. He explains that one of his libraries has a GraphQL plugin where you have to define search queries and it exports those to GraphQL types and arguments that can be plugged into GraphQL schema.
24:20 - Nate asks about the implementation of GraphQL components. Radoslav says that it is separated into a single namespace, exposed to a controller, the GraphQL types are matched to REST serializers. The frontend has React component and the backend contains the controller, utility classes and the GraphQL logic. He then goes on to explain the structure in depth.
26:47 - Nate asks if this strategy has been blogged about to which Radoslav answers that he hasn’t but has given talks on it.
27:15 - Nate asks about the downsides of GraphQL. Radoslav shares his worries about making the API public: it would have to be made more bullet-proof, since at such a large scale it could have performance issues and would require much better monitoring. He says that authorization for resources would also be a problem.
29:17 - Nate mentions that in the end it is a tradeoff, as it is with any software, and asks at what point it starts to make sense to use GraphQL. Radoslav answers that it depends on the roadmap and on what kind of product it is, and gives some examples to elaborate further.
31:35 - Nate says that early planning could be needed for growing the team in a particular way. He also talks about a disadvantage of the growing trend of breaking solutions down into smaller parts: it takes away the ability of small teams to build entire solutions. Radoslav says that while it is true, the developers on his team are full stack and capable of working with all kinds of tasks, be it frontend or backend, that come their way.
35:45 - Nate asks about the team’s hiring practices. Radoslav describes that they started with senior developers and later on hired interns and juniors as well. He states that interns and juniors ask better questions and work well with component driven design.
39:18 - Nate asks why Ruby is considered to be a good choice for GraphQL. Radoslav answers that the Ruby implementation of GraphQL is one of the best, used by big companies like Shopify, GitHub, Airbnb. It solves code scaling issues and integrates well with Rails.
42:45 - Dave says that it will be interesting to see what Facebook will come up next in the frontend framework. Radoslav agrees and says Facebook infrastructure team makes good tradeoffs and gives the example that each time there is React update, the team updates the whole codebase to the newest React version.
45:56 – Dave and Radoslav talk about the React team’s versioning being unusual.
46:23 – Advertisement - TripleByte - $1,000 signing bonus for listeners
47:20 – Picks!
54:50 – Radoslav mentions that he is available as rstankov on Twitter, GitHub and his website is www.rstankov.com.
55:25 – END – Advertisement – CacheFly!
- Multipliers - How the Best Leaders Make Everyone Smarter
- Jimmy Buffett songs - A Pirate Looks at Forty, Come Monday
- “How to Get a Job” - Book in progress.
|
OPCFW_CODE
|