Hyper-V and Broadcom and Intel?
We just received our new server to begin our Hyper-V deployment.
It was recommended that we get 4 NICs on the server. The server has two onboard Broadcom NICs and two PCI Intel PRO NICs.
The Broadcom software will support teaming of both the broadcom and intel nics.
Question: has anyone had great success with teaming in Hyper-V? I have read many stories both ways. If teaming in Hyper-V works (I know MS does not support it), what is the best option when using the Broadcom software, and which NIC configuration: load balancing? Link aggregation? I'm not sure if I should even use teaming.
Thanks
I've never been a fan of NIC teaming except in very specific circumstances. What's your intended goal for teaming?
Where's it say that MS doesn't support NIC teaming?
@Chris I believe Microsoft just doesn't test teaming - not that they don't support it. Some Hyper-V docs were unclear on that a while back.
For Great Success! Our Hyper-V Hosts are all configured like this:
An iSCSI NIC with iSOE - This NIC has two ports for all but NIC redundancy; MPIO handles redundancy, so no teaming here.
Two Dual-Port NICs with TOE - These are teamed, one port from each controller, two teams. One team is for Hyper-V guest traffic. The other is for management and heartbeat traffic (clustered systems).
All are configured for Least Queue Depth sending, failover receiving. Each function is on a separate vlan (all connections go to the same switch stack). All Broadcom chips, nothing against Intel, I just get paranoid about mixing hardware in critical production systems.
If the management and heartbeat teams were separated this would be exactly as MS recommends (according to my sources). Having those two very lightly used functions on separate teams is just overkill in my opinion, so we combine them. If traffic demanded it, I would add additional NICs/Ports to the Hyper-V team. If your switch stack supports InterSwitch Trunking, you should use LACP instead of Least Queue Depth (Round Robin is also an option, though I wouldn't recommend it).
We have a similar configuration and it's been online for over 18 months now. Definitely recommend using MPIO for iSCSI SAN connections. We noted that failover clustering does not take into consideration network link failures for connections that are not used for the service, but may be used by clients (i.e. Hyper-V guests). This is where we found NIC teaming to be VERY useful.
Teaming is probably best used when you've got some shared storage in place.
I like to team two (or more) of the NICs back to the storage device, then use one NIC for Hyper-V (or more, depending on your scenario) and a separate NIC for management.
In your situation I would probably just use one NIC for Hyper-V and one for management.
If you're using iSCSI for shared storage Teaming will cause iSCSI to fail in the event of a link failure. MPIO is the way to add redundancy to iSCSI.
We've got two Dell R410's in a Hyper-V Server 2008 R2 cluster, with two integrated Broadcom NICs for LAN access, and a 4 port Intel NIC for iscsi storage network.
The Intel NICs are configured independently of each other, on separate subnets, to be used for MPIO with our SAN.
The two Broadcom NICs are teamed together using the BACS utility with the LACP configuration. On the switch side, we have a 3COM 3848 providing the other side of the LACP config.
This setup has been running since December 26th, 2010, and we have had zero issues so far.
[Source: STACK_EXCHANGE]
Bug #10288: Fix newly identified issues to make our test suite more robust and faster
"all notifications have disappeared" step is fragile when network is unplugged
It relies on the fact that the notification applet is there. When we boot with the network plugged in, the (persistent) "Tor is ready" notification has to be there, so we're fine. However, when booting with the network unplugged, we can't rely on that, so instead we rely on subtle timing between the previous step ("the Tails desktop is ready", where we do more and more things) and the timeout of the "started in a VM" notification. The changes brought in to fix #11398 trigger this latent bug, which could hit us any time we add more stuff to "the Tails desktop is ready".
Test suite: fix TOCTOU race condition in "all notifications have disappeared" step.
The icon we're looking for may disappear between the time when we check if it's
there, and the time when we're clicking on it.
Test suite: fix the 'all notifications have disappeared' step.
#2 Updated by intrigeri almost 4 years ago
- File 00_28_15_Booting_Tails_does_not_automount_untrusted_fat32_partitions.mkv added
- Priority changed from Normal to Elevated
There's one more cause of unreliability here: there's a TOCTOU race condition in:
next if not(@screen.exists("GnomeNotificationApplet.png"))
@screen.click("GnomeNotificationApplet.png")
@screen.wait("GnomeNotificationAppletOpened.png", 10)
... that triggers:
[log] CLICK on (1007,762)
And I start Tails from DVD with network unplugged and I login # features/step_definitions/common_steps.rb:193
FindFailed: can not find GnomeNotificationAppletOpened.png on the screen.
Line ?, in File ? (RuntimeError)
./features/step_definitions/common_steps.rb:210:in `/^I start Tails( from DVD)?( with network unplugged)?( and I login)?$/'
features/untrusted_partitions.feature:78:in `And I start Tails from DVD with network unplugged and I login'
Maybe we should try clicking the image in a block that catches exceptions, and do
`next` if any exception is raised? Sounds like this would easily eliminate the race condition, no?
Here again, I can't really mark scenarios as fragile (yet) since so many are affected, so bumping priority a bit.
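For what it's worth, the catch-and-skip approach suggested in the note above can be sketched generically. This is a minimal illustration in Python (the actual step definitions are Ruby/Sikuli; `NotificationGone`, `FakeScreen` and the function name are hypothetical stand-ins, not the real API):

```python
class NotificationGone(Exception):
    """Raised when the applet vanishes between the existence check and the click."""

def click_applet_robustly(screen):
    # Instead of exists() followed by click() (the TOCTOU window),
    # just attempt the click and treat a failure as "already gone".
    try:
        screen.click("GnomeNotificationApplet.png")
        screen.wait("GnomeNotificationAppletOpened.png", timeout=10)
        return True   # applet was there and was opened
    except NotificationGone:
        return False  # applet disappeared on its own: nothing left to dismiss

class FakeScreen:
    """Hypothetical test double standing in for the Sikuli screen object."""
    def __init__(self, applet_present):
        self.applet_present = applet_present
    def click(self, image):
        if not self.applet_present:
            raise NotificationGone(image)
    def wait(self, image, timeout):
        pass  # pretend the opened applet was found
```

The point is simply that the separate existence check is dropped: the click itself is the check, so there is no window for the applet to disappear between the two calls.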
#3 Updated by intrigeri almost 4 years ago
- Status changed from Confirmed to In Progress
- Assignee set to intrigeri
- Target version set to Tails_2.4
- % Done changed from 0 to 10
- Feature Branch set to test/11398-florence-hides-other-windows
Pushed a tentative fix to test/11398-florence-hides-other-windows, since I think that branch makes it more likely to hit such problems.
#6 Updated by intrigeri almost 4 years ago
Even with my fix in, this still sometimes fails: https://jenkins.tails.boum.org/job/test_Tails_ISO_test-10376-fix-startup-page-roadmap-test-is-fragile/28/. I think this shows that Sikuli's
screen.click is racy, and not reliable when used on a fast-changing UI. I'll try to think of a workaround, but really, IMO we should try to find another solution (would dogtail be less racy?) to make the whole notifications thing more robust.
#7 Updated by intrigeri almost 4 years ago
I think that another problem I've just fixed (eab5505) might have made this problem more likely to happen, because previously we were often running "all notifications have disappeared" twice, which at least doubles the chances it fails (I say "at least" because the 2nd call has more chances to be close to the time when the "running in a VM" notification times out, due to how the whole timing of our session initialization vs. test suite works).
#9 Updated by anonym almost 4 years ago
- Status changed from In Progress to Resolved
- Assignee changed from anonym to intrigeri
- % Done changed from 50 to 100
Seems to be pretty robust now!
Indeed, which is the reason I merged it, but you might want to look at this commit I pushed on top:
761f3df You cannot `return` in a step definition block.
Is there anything we can do to cleanly deal with this (unlikely?) error case? IMHO we can just forget about this and revisit if we see issues -- AFAICT the real fix of this branch is the other stuff.
#11 Updated by intrigeri almost 4 years ago
> Indeed, which is the reason I merged it, but you might want to look at this commit I pushed on top:
> 761f3df You cannot `return` in a step definition block.
Indeed, right. IIRC I was not convinced either, but copied it from some other place that I can't find now, so probably I'm just wrong :)
> Is there anything we can do to cleanly deal with this (unlikely?) error case?
Indeed we still have a TOCTOU problem here, but apparently it's less bad than the previous exists+click. I don't know. We could assume that a failure of
screen.wait("GnomeNotificationAppletOpened.png", 10) means that the notification applet has disappeared, and then we would need to unclick what we clicked, but it feels potentially fragile and like premature optimization, so:
> IMHO we can just forget about this and revisit if we see issues -- AFAICT the real fix of this branch is the other stuff.
#15 Updated by bertagaz over 3 years ago
- File devel_419.log View added
- File 00_02_43_Anti-test__no_memory_erasure_on_a_modern_computer.mkv added
- Status changed from Resolved to Confirmed
- Target version changed from Tails_2.4 to Tails_2.6
- % Done changed from 100 to 0
- QA Check changed from Pass to Dev Needed
Sadly we've seen this bug several times again while reviewing the test suite failures in Jenkins for June 2016 (see #11087#note-9). Clearly Sikuli clicks too late on the notification area, which has already disappeared, and then ends up opening the workspace selection applet instead.
#16 Updated by Anonymous over 3 years ago
- Assignee set to intrigeri
While checking "Volunteers to handle important tickets flagged for next release, but without assignee" during the current contributor meeting we saw this ticket. Assigning to you, tentatively, because I don't know what to do with it exactly otherwise.
#17 Updated by intrigeri over 3 years ago
- Assignee changed from intrigeri to anonym
- Target version changed from Tails_2.6 to Tails_3.0
Thank you for putting it back onto our radar! Indeed it was a mistake to assign this to 2.6 with no assignee. I think that anonym has some improvements in this area on the feature/stretch branch, so moving it to 3.0.
[Source: OPCFW_CODE]
CORTX is S3 compatible object storage designed for great efficiency, massive capacity, and high HDD-utilization. This is a 100% open source project with engineering effort provided by Seagate. Most of the project is licensed under the Apache 2.0 License and the rest is under AGPLv3.
CORTX aims to be uniquely optimized for mass-capacity storage devices, to work with any processor, to be highly flexible (works with HDD, SSD and NVM), and to be massively scalable, up to a billion billion billion billion billion exabytes (2^206) and 1.3 billion billion billion billion (2^120) objects with unlimited object sizes.
CORTX use cases include data mining, machine learning, archival storage, backup, and autonomous driving. Sage2, Maestro, and EsiWACE2 are the current R&D projects based on CORTX.
- Sage2 builds a CORTX cluster for extreme-scale computing/HPC and AI, moving forward from the SAGE project using the SAGE prototype. The project consists of 9 partners led by Seagate and is funded by the European Commission's H2020 program (Grant Agreement number: 800999).
Sage2 will demonstrate a multi-tier storage system (distributed NVRAM, SSD, HDD), "SAGE", capable of handling data-intensive workloads. Seagate is providing the components of CORTX, primarily Motr, lib-motr and Hare, and some of the storage enclosures (SAS & SATA HDD tiers). Experimental pieces of Motr are developed by Seagate (e.g. QoS, function shipping, etc.) to meet the needs of HPC and AI use cases. KTH adopts CORTX object storage with an I/O programming model suitable for large-scale HPC and emerging ML/AI applications. To demonstrate the features of CORTX, pilot applications such as iPIC3D and StreamBrain are adapted to use CORTX object storage and function shipping to accelerate post-processing workloads. Kitware visualization utilities run on SAGE for these applications. UKAEA adapts the novel storage system for a unique HPC application involving parallel-in-time methods. The combination of tiered storage with parallel-in-time aims to demonstrate application portability for the exascale era. UKAEA also applies it to areas such as global memory abstraction, and is working closely with other CORTX partners to develop and use tools for studying performance alongside data management and analysis.
- The Maestro R&D project is building a data-orchestration software layer that intelligently handles data movement in the I/O stack and has CORTX Motr as one of its primary backends. Apart from Seagate, the key players are CEA, Juelich, ETH, Appentra, ECMWF and HPE; the project is led by Juelich (administration) and HPE (technical).
- The EsiWACE2 project consists of 20 EU partners from the weather and climate community led by DKRZ (Germany). EsiWACE2 looks at weather and climate applications, leveraging CORTX Motr as one of the backends.
Learn more about CORTX at: https://www.seagate.com/products/storage/object-storage-software/
[Source: OPCFW_CODE]
from django.utils.datastructures import MultiValueDict as MultiDict
from sheerlike import filters
class TestArgParsing(object):
def setup(self):
self.args = MultiDict([('filter_category', ['cats', 'dogs']),
('filter_planet', ['earth']),
('filter_range_date_lte', ['2014-6-1']),
('filter_range_comment_count_gt', ['100'])])
def test_args_to_filter_dsl(self):
filter_dsl = filters.filter_dsl_from_multidict(self.args)
# the existing tests here seemed to depend on the order
# of dictionary keys, which is undefined
def test_range_args(self):
filter_dsl = filters.filter_dsl_from_multidict(self.args)
assert('range' in filter_dsl[1])
assert('date' in filter_dsl[1]['range'])
assert('comment_count' in filter_dsl[1]['range'])
assert('2014-6-1' == filter_dsl[1]['range']['date']['lte'])
assert('100' == filter_dsl[1]['range']['comment_count']['gt'])
def test_filters_for_field(self):
selected = filters.selected_filters_from_multidict(
self.args, 'category')
assert (('cats') in selected)
assert (('dogs') in selected)
class TestDateValidation(object):
def test_date_validation_incorrect_range(self):
args = MultiDict([('filter_range_date_gte', ['2014-6']),
('filter_range_date_lte', ['2013-6'])])
filter_dsl = filters.filter_dsl_from_multidict(args)
assert(filter_dsl[0]['range']['date']['gte'] == '2013-6-1')
assert(filter_dsl[0]['range']['date']['lte'] == '2014-6-30')
def test_date_validation_correct_range(self):
args = MultiDict([('filter_range_date_gte', ['2013-6']),
('filter_range_date_lte', ['2014-6'])])
filter_dsl = filters.filter_dsl_from_multidict(args)
assert(filter_dsl[0]['range']['date']['gte'] == '2013-6-1')
assert(filter_dsl[0]['range']['date']['lte'] == '2014-6-30')
def test_date_validation_with_days_correct_range(self):
args = MultiDict([('filter_range_date_gte', ['2014-1-23']),
('filter_range_date_lte', ['2014-6-23'])])
filter_dsl = filters.filter_dsl_from_multidict(args)
assert(filter_dsl[0]['range']['date']['gte'] == '2014-1-23')
assert(filter_dsl[0]['range']['date']['lte'] == '2014-6-23')
def test_date_validation_with_days_incorrect_range(self):
args = MultiDict([('filter_range_date_gte', ['2014-6-23']),
('filter_range_date_lte', ['2014-1-23'])])
filter_dsl = filters.filter_dsl_from_multidict(args)
assert(filter_dsl[0]['range']['date']['gte'] == '2014-1-23')
assert(filter_dsl[0]['range']['date']['lte'] == '2014-6-23')
def test_default_days_correct_range(self):
args = MultiDict([('filter_range_date_gte', ['2014-1']),
('filter_range_date_lte', ['2014-6'])])
filter_dsl = filters.filter_dsl_from_multidict(args)
assert(filter_dsl[0]['range']['date']['gte'] == '2014-1-1')
assert(filter_dsl[0]['range']['date']['lte'] == '2014-6-30')
def test_default_days_incorrect_range(self):
args = MultiDict([('filter_range_date_gte', ['2014-6']),
('filter_range_date_lte', ['2014-1'])])
filter_dsl = filters.filter_dsl_from_multidict(args)
assert(filter_dsl[0]['range']['date']['gte'] == '2014-1-1')
assert(filter_dsl[0]['range']['date']['lte'] == '2014-6-30')
[Source: STACK_EDU]
I have a very large file for iTunes that I want to uninstall, but I get a message, "This action is for products that are currently installed" or "The feature you are trying to use is on a network that is unavailable." Any thoughts on how to uninstall it? I'm running XP with IE 8.
That can happen with Apple products if certain other components are missing or already uninstalled. Go to support.apple.com/kb/TS3704 and read and follow the instructions on downloading the Microsoft Install/Uninstall tool for Apple products.
I have a Compaq computer with Windows 7. A while ago, I went to turn it on and it got stuck on a screen that says I have been illegally downloading inappropriate child videos and that in order to continue using my computer I need to send them $300. The screen looks official, but knowing I had not done anything they said, I figured it was a scam. Is there a way I can take care of the problem without taking it somewhere? I don't want to take it somewhere and have them get the wrong impression of me.
Scam, of course. Try rebooting into Safe mode with Networking. Hold the "Shift" key down while it starts Windows and see if that bypasses the blocking screen. From there, run your updated antivirus scan and see if that corrects the issue. You can also download and run MalwareBytes AntiMalware (malwarebytes.org/mwb-download.php) and have it delete what it finds. If this virus has already corrupted your antivirus, see if you can get to Windows Defender Offline. It will help you to create a boot CD allowing you to boot into Windows without activating any resident virus and perform a full scan of your hard drive. You can find it at windows.microsoft.com/en-US/windows/what-is-windows-defender-offline. Once this is completed, you'll need to download and reinstall any antivirus program you had been using, since the resident version was compromised.
Running Windows 7 with IE 11, which is my preferred browser. Any idea why certain Web pages won't show up on IE, but if I go to Chrome they appear?
It could be because of the default privacy settings within Internet Explorer. Try clicking the "Some content is blocked to protect your privacy" icon in the top right of Internet Explorer at the end of the URL address box. It will look like a circle with a line through it. That will allow the blocked content and most likely display your full page.
Send questions to [email protected] or Personal Tech, P.O. Box 1121, St. Petersburg, FL 33731. Questions are answered only in this column.
[Source: OPCFW_CODE]
URL for opening Interactive Report Layout not working
The URL to an IR saved report is not working when using the link example provided by Page Designer.
The example in the page designer shows like f?p=&APP_ID.:59:&APP_SESSION.:IR[HS]_34718
When using it on a public page without a session ID, the report will not open. No error is shown, just a blank page.
I tried using the old-style URL http://server_ip:8080/ords/f?p=41905101:59::IR_34718
=> works flawlessly
Page Designer URL Suggestion
APEX Version 5.1.4
I tried with a report on the EMP table. The static ID of the interactive report is EMP_IR2. The URL is https://myserver/pls/apex/f?p=12345:30:113306489867725::NO:RP:IR[EMP_IR2]C_ENAME:KING and it works just fine. Could it be your syntax is not correct? In the old syntax, the REQUEST value of the URL contained the static ID of the report. In the new syntax that is no longer the case. In your URL you have put "IR[HS]_34718" in the place of the request.
What APEX version are you working with?
I added the APEX version and the URL suggestion copied from the Page Designer "Saved Report Link" example.
@KoenLostrie. Providing a Filter in the URL works great:
[link]http://serverip:8080/ords/f?p=41905101:59:::::IRC_HALLE:H3a
As soon as I add the report static ID, the URL does not work:
[link] http://serverip:8080/ords/f?p=41905101:59:::::IR[HALLENSTATUS]C_HALLE:H3a
Misunderstood the original question - I tried using the request syntax on 19.1 and it works fine for me. It opens the saved report as expected. Unfortunately I don't have a 5.1 instance to test on. The url I used was
https://myserver/pls/apex/f?p=12345:30:9367639950297:IR[EMP_IR3]_142887842:NO:RP::
[Source: STACK_EXCHANGE]
<?php
/**
* Created by PhpStorm.
* User: joel
* Date: 13/02/20
* Time: 15:24
*/
namespace FdjBundle\Command;
use FdjBundle\Entity\JoueurTennisScoreCote; // missing import for the entity instantiated below (assumed to live in FdjBundle\Entity like TennisScore)
use FdjBundle\Entity\TennisScore;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;
use Symfony\Bundle\FrameworkBundle\Command\ContainerAwareCommand;
class JoueurTennisScoreCote extends ContainerAwareCommand
{
protected function configure()
{
$this
->setName('fdj:JoueurTennisScoreCote')
->setDescription('Updates JoueurTennisScoreCote.')
->setHelp("This command runs a query to update the JoueurTennisScoreCote table");
}
protected function execute(InputInterface $input, OutputInterface $output)
{
$output->writeln(['cote input', '============',]);
$em = $this->getContainer()->get('doctrine')->getManager();
$matchs = $em->getRepository('FdjBundle:TennisScore')->findAll();
foreach ($matchs as $match){
if ($match->getJoueursTennis() == 1) {
$joueur1s = explode("-", $match->getLabel());
dump($joueur1s);
$date = new \DateTime($match->getDateDeSaisie());
$joueur1 = new JoueurTennisScoreCote();
$joueur1->setNom($joueur1s[0]);
$joueur1->setCote($match->getUn());
$joueur1->setNomAdversaire($joueur1s[1]);
$joueur1->setCoteAdversaire($match->getDeux());
$joueur1->setDate($date);
$joueur1->setCompetiton($match->getCompetition());
$joueur1->setResultat($match->getResultat());
if ($match->getEquipe1() > $match->getEquipe2()) {
$joueur1->setVictoire(1);
} elseif ($match->getEquipe1() < $match->getEquipe2()) {
$joueur1->setVictoire(0);
}
$joueur2 = new JoueurTennisScoreCote();
$joueur2->setNom($joueur1s[1]);
$joueur2->setCote($match->getDeux());
$joueur2->setNomAdversaire($joueur1s[0]);
$joueur2->setCoteAdversaire($match->getUn());
$joueur2->setDate($date);
$joueur2->setCompetiton($match->getCompetition());
$joueur2->setResultat($match->getResultat());
if ($match->getEquipe1() > $match->getEquipe2()) {
$joueur2->setVictoire(0);
} elseif ($match->getEquipe1() < $match->getEquipe2()) {
$joueur2->setVictoire(1);
}
$match->setJoueursTennis('2');
$em->persist($joueur1);
$em->persist($joueur2);
$em->persist($match);
$em->flush();
dump($joueur1);
dump($joueur2);
}
}
$output->writeln(['============','resultat fin',]);
}
}
[Source: STACK_EDU]
This example shows how to create a video algorithm to detect motion using the optical flow technique. It uses the Image Acquisition Toolbox™ System object along with Computer Vision System Toolbox™ System objects.
This example streams images from an image acquisition device to detect motion in the live video. It uses the optical flow estimation technique to estimate the motion vectors in each frame of the live video sequence. Once the motion vectors are determined, we draw them over the moving objects in the video sequence.
Create the Video Device System object.
vidDevice = imaq.VideoDevice('winvideo', 1, 'MJPG_320x240', ...
    'ROI', [1 1 320 240], ...
    'ReturnedColorSpace', 'rgb', ...
    'DeviceProperties.Brightness', 130, ...
    'DeviceProperties.Sharpness', 220);
Create a System object to estimate direction and speed of object motion from one video frame to another using optical flow.
optical = vision.OpticalFlow( ...
    'OutputValue', 'Horizontal and vertical components in complex form');
Initialize the vector field lines.
maxWidth = imaqhwinfo(vidDevice,'MaxWidth');
maxHeight = imaqhwinfo(vidDevice,'MaxHeight');
shapes = vision.ShapeInserter;
shapes.Shape = 'Lines';
shapes.BorderColor = 'white';
r = 1:5:maxHeight;
c = 1:5:maxWidth;
[Y, X] = meshgrid(c,r);
Create VideoPlayer System objects to display the videos.
hVideoIn = vision.VideoPlayer;
hVideoIn.Name = 'Original Video';
hVideoOut = vision.VideoPlayer;
hVideoOut.Name = 'Motion Detected Video';
Create a processing loop to perform motion detection in the input video. This loop uses the System objects you instantiated above.
% Set up for stream
nFrames = 0;
while (nFrames < 100) % Process for the first 100 frames.
    % Acquire single frame from imaging device.
    rgbData = step(vidDevice);
    % Compute the optical flow for that particular frame.
    optFlow = step(optical, rgb2gray(rgbData));
    % Downsample optical flow field.
    optFlow_DS = optFlow(r, c);
    H = imag(optFlow_DS)*50;
    V = real(optFlow_DS)*50;
    % Draw lines on top of image
    lines = [Y(:)'; X(:)'; Y(:)'+V(:)'; X(:)'+H(:)'];
    rgb_Out = step(shapes, rgbData, lines');
    % Send image data to video player
    % Display original video.
    step(hVideoIn, rgbData);
    % Display video along with motion vectors.
    step(hVideoOut, rgb_Out);
    % Increment frame count
    nFrames = nFrames + 1;
end
In the Motion Detected Video window, you can see that the example detected the motion of the notebook. The moving objects are represented using the vector field lines as seen in the video player.
Here you call the release method on the System objects to close any open files and devices.
release(vidDevice);
release(hVideoIn);
release(hVideoOut);
[Source: OPCFW_CODE]
import { CreepExtras } from "prototypes/creep";
import { RepairerCreep } from "creep-roles/repairer-creep";
import { HarvesterCreep } from "creep-roles/harvester-creep";
import { MovementSystem } from "systems/movement-system";
export class CreepManagement {
public static run(creep: Creep): void {
if (creep.spawning) {
return;
}
const creepExtras = this.getCreepExtras(creep);
const colony = creepExtras.getColony();
if (colony && colony.creeps && creep.name in colony.creeps) {
colony.creeps[creep.name].id = creep.id;
}
creepExtras.run();
MovementSystem.run(creep);
}
public static getCreepExtras(creep: Creep): CreepExtras {
switch (creep.memory.role) {
case "harvester":
return new HarvesterCreep(creep);
case "repairer":
return new RepairerCreep(creep);
// case "upgrader":
// UpgradeSystem.runUpgraderCreep(this.creep);
// break;
// case "builder":
// BuilderSystem.runBuilderCreep(this);
// break;
// case "defender":
// DefenceSystem.runDefenderCreep(this);
// break;
default:
return new CreepExtras(creep);
}
}
}
[Source: STACK_EDU]
Major Concurrency Issue
I have looked around but I cannot find a decent answer to the issue I am having. I have a list that is constantly being modified and read from, but I keep getting concurrency issues. Note: there are two threads that constantly pull data from this list.
The issues pop up in doEntityTick() and getEntity(). Note: the doEntityTick() issues are caused by me adding an entity to the list.
World:
package UnNamedRpg.World;
import java.awt.Rectangle;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.Iterator;
import UnNamedRpg.Player.Game.GameManager;
import UnNamedRpg.World.Entity.Entity;
import UnNamedRpg.World.Entity.EntityCharacter;
import UnNamedRpg.World.Entity.EntityLiving;
import UnNamedRpg.World.Entity.EntityProjectile;
public class World {
private String name;
private ArrayList<Boundary> boundList = new ArrayList<Boundary>();
private Collection<Entity> entityList = Collections.synchronizedList(new ArrayList<Entity>());
public World(String name){
setName(name);
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public ArrayList<Entity> getEntityList(){
ArrayList<Entity> newList = new ArrayList<Entity>();
synchronized(entityList) {
Iterator<Entity> iter = entityList.iterator();
while (iter.hasNext()) {
Entity ent = iter.next();
newList.add(ent);
}
}
return newList;
}
public ArrayList<Entity> getEntityAtCoordinatePair(double x, double y){
ArrayList<Entity> tempList = new ArrayList<Entity>();
for(Entity ent : getEntityList()){
Rectangle rect = new Rectangle((int)ent.getX(), (int)ent.getY(), ent.getWidth(), ent.getHeight());
if(rect.contains(x, y)){
tempList.add(ent);
}
}
return tempList;
}
public void addEntity(Entity ent){
synchronized(entityList){
entityList.add(ent);
if(ent.getID() == -1){
ent.setID(entityList.size());
}
}
}
public Entity getEntity(int id){
synchronized(entityList) {
Iterator<Entity> i = entityList.iterator();
while (i.hasNext()){
Entity ent = i.next();
if(ent.getID() == id){
return ent;
}
}
}
return null;
}
public void doEntityTick(){
synchronized(entityList) {
Iterator<Entity> iter = entityList.iterator();
while (iter.hasNext()) {
Entity ent = iter.next();
if(ent instanceof EntityLiving){
EntityLiving entLiv = (EntityLiving)ent;
if(entLiv.getHealth() <= 0){
entLiv.setDead();
}
if(entLiv.isDead() && entLiv.getHealth() > 0){
GameManager.isSpawning = true;
}
entLiv.doEntityTick();
}
if(ent instanceof EntityProjectile){
EntityProjectile entProj = (EntityProjectile)ent;
entProj.doTick();
entProj.setCurrentRange(entProj.getCurrentRange() + 1);
if(entProj.getCurrentRange() >= entProj.getRange()){
entProj.setDead();
}
}
if(ent.isDead() && !(ent instanceof EntityCharacter)){
iter.remove();
}
}
}
}
public void addBounds(Boundary bounds){
boundList.add(bounds);
}
public int getBoundsID(int x, int y){
for(int i=0;i<boundList.size();i++){
if(boundList.get(i).contains(x, y))
return i;
}
return -1;
}
public int getBoundsID(Boundary bounds){
for(int i=0;i<boundList.size();i++){
if(boundList.get(i) == bounds)
return i;
}
return -1;
}
public void removeBounds(Boundary bounds){
boundList.remove(bounds);
}
public Boundary getBounds(int x, int y){
for(Boundary bounds : boundList){
if(bounds.contains(x, y)){
return bounds;
}
}
return null;
}
public boolean isInBounds(int posX, int posY){
for(Boundary bounds : boundList){
if(bounds.contains(posX, posY)){
return true;
}
}
return false;
}
}
Stack Trace:
Exception in thread "Thread-1" java.util.ConcurrentModificationException
at java.util.ArrayList$Itr.checkForComodification(Unknown Source)
at java.util.ArrayList$Itr.next(Unknown Source)
at UnNamedRpg.World.World.doEntityTick(World.java:83)
at UnNamedRpg.Player.Game.GameManager$1.run(GameManager.java:49)
And what kind of concurrency issue you have?
Why aren't you synchronizing around the iteration in getEntityList()?
@RealSkeptic Changed that but the issue still persists.
@Tsyvarev What do you mean what kind? I am trying to add to the list if that helps.
You are saying you have "issues" but you do not explain what those issues are. You just have comments that say "issues here", "issues here". What issues? Do you have a deadlock? A race condition? An exception? Bad performance? What is happening and what do you expect to happen instead?
@RealSkeptic Honestly I'm not sure, other than when I attempt to add to the list it throws a concurrency exception pointing to the commented line of code in doEntityTick() in the World class.
Well? That's exactly what I was asking about. Please copy the error including the stack trace to your question, properly formatted. This is essential information. If the error occurs in more than one place, then add both stack traces. While you are editing your question, please update to your latest code (you said that you added the synchronization in getEntityList() - so your latest code and the freshest stack traces.
@RealSkeptic I've added the stack trace but it really isn't descriptive at all. NOTE: This only happens when I add the entity to the world that spawns the projectile. The entity tick is found above where the work is actually done, this is the only part that varies from the other classes of the same type.
Are you sure that it's actually an issue with multi-threading? It is possible to get this error when a single list is modified inconsistently by a single thread. From docs: "Note that this exception does not always indicate that an object has been concurrently modified by a different thread. If a single thread issues a sequence of method invocations that violates the contract of an object, the object may throw this exception. For example, if a thread modifies a collection directly while it is iterating over the collection with a fail-fast iterator, the iterator will throw this exception."
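To illustrate the point from the docs quoted above: the single-threaded hazard is easy to reproduce. Java's fail-fast iterator throws ConcurrentModificationException; Python's lists fail silently by skipping elements instead, which makes the same bug visible in a few lines (the `dead` flags below are purely illustrative, not the original entity code):

```python
# Six fake entities; the first three are "dead" and should be removed.
entities = [{"id": i, "dead": i < 3} for i in range(6)]

# Buggy pattern: removing from the list while iterating over it.
# Java's fail-fast iterator would throw ConcurrentModificationException here;
# Python just skips whatever element slides into the freed slot.
buggy = [dict(e) for e in entities]
for ent in buggy:
    if ent["dead"]:
        buggy.remove(ent)   # the dead entity with id 1 is skipped and survives

# Safe pattern: build the result from a snapshot instead of mutating in place
# (the Java equivalents are iter.remove() inside the loop, or iterating a copy).
safe = [e for e in entities if not e["dead"]]
```

After the buggy loop, the dead entity with id 1 is still present; the snapshot version removes all three.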
@Brick It honestly could be just one thread, I was just trying to get the point across that there ARE multiple threads interacting with it. If it were the single-thread issue then how would I go about fixing this? All of the code in the world class is called by one thread.
You have two methods named doEntityTick(). Copy/paste mistake or do both World and Entity have one?
In the snippet of World.java you provided, which line is #83?
@JoeCoder Entity ent = iter.next();
on doEntityTick()
Unrelated to the ConcurrentModificationException, but the body of World::addEntity() should be made atomic by synchronizing on the entityList (or the list size might be an unexpected value), even though your tick value makes this improbable.
@JoeCoder Whoops, I just added that and it didn't change anything. I honestly have no idea what would be causing this.
You have rendered your question completely off-topic. Please read the [help/on-topic]: Questions seeking debugging help ("why isn't this code working?") must include the desired behavior, a specific problem or error and the shortest code necessary to reproduce it in the question itself. It took a long time to get the "specific problem and error" from you, and now you have removed the code from the question. Please edit it to conform to the rules. If the code is too long prepare a [mcve] instead.
@RealSkeptic I have NOT removed the code, I have simply given you a link to the entire repository instead so that (if wished) you could view the rest of the code and see if there was an issue there. Not to mention that this repository is updated with the most recent code so that I don't have to continually edit it on here.
As I said, the site rules require the code to be in the question itself - not a link. I did not invent that rule, it's in the [help/on-topic].
@RealSkeptic well sorry but I assumed that having all of the code posted would help. I'll add it back when I get home. Now enough about rules, do you have any idea what I've done wrong?
So after an entire day of looking at the code, I managed to find a fix. Instead of having the entities spawn immediately, it sticks them into a queue to await the next world tick before spawning them. I thank everyone who attempted to help, and I have given a +1 to those who helped me arrive at this answer.
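A minimal sketch of that queue-based approach (hypothetical names, not the actual repository code): entities spawned mid-tick go into a pending queue, which is drained into the real list at the start of the next tick, so the list is never modified while its iterator is live.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

class World {
    private final List<String> entityList = new ArrayList<>();
    private final Queue<String> spawnQueue = new ArrayDeque<>();

    // May be called at any time, even from inside an entity's tick.
    synchronized void addEntity(String entity) {
        spawnQueue.add(entity);
    }

    // Called once per world tick by the game-loop thread.
    synchronized void doEntityTick() {
        // Drain pending spawns before iterating: no iterator is active here.
        entityList.addAll(spawnQueue);
        spawnQueue.clear();
        for (String entity : entityList) {
            // Tick the entity; any spawns it triggers land in spawnQueue,
            // never directly in entityList.
        }
    }

    synchronized int entityCount() {
        return entityList.size();
    }
}
```

The key property is that entityList is only ever mutated at a point where no iteration over it is in progress.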
I have recently defended my PhD thesis, titled "Distributed formation control for autonomous robots". Now I am wrapping up the results in the form of videos. Here is just one of these results.
In the following video I show how a single control law for a team of drones can simultaneously achieve a desired shape, orientation and translational motion. By single I mean no cascade or separate controllers. In fact, it is just a single line of code.
The required information is very minimal. The drones can work in their own frames of coordinates, they do not need to share (communication) information, and they only interact locally with their neighbors but still achieve a global goal.
All the mathematical stuff, proofs, algorithm, etc can be found in the manuscript of the thesis.
Sorry for the resolution of the video, but I was running four instances of X-Plane at the same time on my computer xD.
It's hard to say, but it surely interests me. Let me know if you have any thoughts on it, and congratulations on your successful graduation!
Thank you for your quick response. I think I get what you mean. We modeled the quad as a point mass, and from the formation control law we get a desired acceleration that turns out to become a feedforward desired attitude for the quad to track. Similarly, given a desired velocity from a formation control law, the quad's onboard controller will track this desired velocity instead. From here I became interested in which will perform better in practice, acceleration or velocity. I'm also wondering if we're able to use both desired velocity and acceleration to do formation control; maybe it gives even better performance.
Hi Zhen Jia,
in particular, in this experiment the control action is giving the desired attitude angles ("feedforward accelerations").
If you check in Chapter 7, Table 7.3, page 126 (xD), you will see that the algorithm can be tailored for giving either desired velocities or accelerations.
Hi, good work, but I wonder whether your control law only gives the desired acceleration, with no desired position or velocity computed for each individual quad. I ask this because, in practice, it is often hard to get good performance by controlling the acceleration. Thank you.
No, the control is completely autonomous. In this video I am running four instances of X-Plane 9 (one per drone). They are on the same computer, but in practice they only need to be reachable on the network.
I have a C++ program that runs four instances of the class "quadcopter". Each quadcopter has one simulator associated with it (so you can read from it and send commands to the motors) and its corresponding sensors (where we add noise and imperfect calibration). In addition to the standard sensors such as the IMU or compass, we also include the relative position w.r.t. their neighbors (in the corresponding frame of coordinates of the quadcopter).
The quadcopters also have an attitude controller. I have written a very simple one in order to control the accelerations in the body frame of the quadcopter.
The algorithm that you see in the video provides the setpoints for the desired accelerations of the quadcopter, and this signal feeds the attitude controller. In fact, to keep the demonstration simple, as you can observe in the video, the vertical motion (altitude) has been decoupled from the lateral one.
At some point I guess I will release the C++ code... but to be honest, there are already too many examples out there of how to interface with X-Plane.
P.S. Sorry about the multi-posting; I do not know why my browser dislikes this editor so much.
Hi, how do you control all the drones? Do you use serial communication to the transmitter from each drone? Maybe a Taranis can control them from outside?
Hello Hector, thanks very much for the explanation. It is quite impressive what you've done!
I forgot to mention the practical consequences of these findings. It means that the drones do not need to share a common North; therefore, if they have sensors installed on board, these do not need to be aligned or calibrated w.r.t. the ones of their neighbors (speaking about orientation).
In fact, in the first part of my thesis I also prove that they do not need to be calibrated for range/distance either. For example, the distance between A and B is 5, but A is measuring 5.1 and B is measuring 5, i.e., the range sensor of A is not well calibrated. It turns out that the algorithm is robust against such an issue (without requiring any exchange of information at all).
Summarizing, you can install a "cheap" non-calibrated camera in A and a good one in B. The algorithm is still robust. Of course, this is just a particular example. In general we are talking about an arbitrary number of drones and sensors.
Let me know whether you have further questions :P. I am happy to talk about these things.
I will try to clarify your question (a good and fair one by the way).
Consider two neighboring guys A and B with positions pA and pB w.r.t. a global frame of coordinates. The relative position (pA-pB) can be represented in an arbitrary frame of coordinates, either a global one (GPS) or in a local one. For example, the frame of coordinates of A can be located in an arbitrary position with an arbitrary orientation.
Both guys, A and B, need to know their relative position, but each one w.r.t. their own frame of coordinates, i.e., the same concept but with a different representation. For example, if I stand in front of you, my left is your right. What do I mean by "no communication"? How does A tell B, "hey man, let us go to the right"? It turns out that in this algorithm they do not need such a thing. Both A and B have their own different understanding of the information, and they only care about their own understanding.
In fact, all the sensing can be done locally on board, without an explicit communication link, i.e., without exchanging information between the guys. You may believe this is trivial, but when you are adding more and more complexity/capabilities to an algorithm, it is not anymore.
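As a toy illustration of "same concept, different representation" (made-up numbers, 2-D for simplicity, not code from the thesis): the relative position pA - pB is one vector, but A expresses it in its own frame by rotating by its own heading theta.

```java
public class RelativeFrame {
    public static void main(String[] args) {
        // Hypothetical global positions of agents A and B.
        double[] pA = {1.0, 2.0};
        double[] pB = {4.0, 6.0};
        double theta = Math.PI / 2; // A's heading w.r.t. the global frame

        // Relative position expressed in the global frame.
        double dx = pA[0] - pB[0];
        double dy = pA[1] - pB[1];

        // The same vector expressed in A's own frame: rotate by -theta.
        double localX =  Math.cos(theta) * dx + Math.sin(theta) * dy;
        double localY = -Math.sin(theta) * dx + Math.cos(theta) * dy;

        System.out.printf("global: (%.1f, %.1f)  in A's frame: (%.1f, %.1f)%n",
                dx, dy, localX, localY);
    }
}
```

B would do the same with its own (different) heading, so A and B hold different coordinates for the very same physical vector, and neither needs the other's frame.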
When you say "they do not need to share (communication) information, and they only interact locally with their neighbors", isn't that a contradiction? If they interact, they obviously share information, via vision or sonar or radar or something?
Microsoft agree that browser hacks lead to long-term problems
In a post on their website, Microsoft have finally agreed that browser "hacks" - traditionally used by frustrated web developers to circumvent display bugs in Microsoft's Internet Explorer web browser - can lead to long-term problems, noting that fixes in the upcoming IE7 can break these hacks. This means that with the release of the keenly awaited new version of Internet Explorer, currently the world's most prolific browser due to its shipment with the Microsoft Windows operating system, a lot of sites written using these common methods could break.
Due to bugs and display differences between Internet Explorer and other common web browsers, a large portion of the web development community have turned to "hacks", whereby you craft stylesheet code to contain a compensated value for IE, followed by the real value encoded in a way that IE can't understand due to other bugs. Microsoft say that they have fixed some of these latter bugs, but if they don't fix the issue that the hack was designed to avoid, then the page won't function correctly. Microsoft admit: "We’re starting to see the first round of sites and pages breaking due to the CSS fixes we have made." Although they may feasibly fix some of the display issues that developers were trying to hack around before the final release, some of the issues aren't classified by Microsoft as bugs due to ambiguity in the HTML specification. These still lead to display issues and are unlikely to be fixed any time soon.
At MicroAngelo we have never advocated the use of browser hacks as a solution to the problem of cross-browser compatibility: in fact we believe that if it can't be done relatively safely in a cross-browser way then it shouldn't be done. However, the technique has been widespread for years, and despite Microsoft's latest plea to stop using the flawed methods it is likely that even the majority of new sites will use this method. It is a real issue: with the release of the new version of IE, sites using these methods will break. The lucky ones will just look a bit weird in IE, but some will be unusable as all the text becomes unreadable.
Microsoft ask that instead of using hacks developers use "Browser Sniffing" techniques to apply specific code to each browser. While we at MicroAngelo agree that this is a better solution than using hacks, we still believe that by using better coding techniques even this time consuming step can be avoided. With smarter coding practices we can write once and use everywhere, even today, with only insignificant differences between browsers. As time goes on and browsers improve with regard to how they implement web standards, these differences will get less and less.
IE7 is predicted to ship with Microsoft Windows Vista - the next version of Windows - which means that it will likely become the most prolific browser in the world about a year later as people buy new computers or upgrade to the latest Microsoft operating system (based upon what happened with the last couple of Windows releases). This means that if your site is broken in the new browser then the problem will actually get worse over time, the opposite of the normal situation whereby old and buggy browsers slowly disappear from common use.
If you have a web site constructed in the past few years, it could be at risk. Ironically, these techniques were mostly used by those who were actually interested in using web standards for greater accessibility and usability. We will happily let you know if your site uses these methods. If it does, Microsoft recommend "that you please update your pages to not use CSS hacks." Contact us and let our expert code smiths look over your site for you.

24th January 2006
What is cryptocurrency?
What makes cryptocurrency unique?
Why is it called cryptocurrency?
What is public-key cryptography?
Who invented cryptocurrency?
What is the difference between cryptocurrencies and tokens?
What is a crypto wallet?
A cryptocurrency (or crypto) is a form of digital cash that enables individuals to transmit value in a digital setting.
You may be wondering how this sort of system differs from PayPal or the digital banking app you have on your phone. They certainly appear to serve the same use cases on the surface – paying friends, making purchases from your favorite website – but under the hood, they couldn’t be more different.
Cryptocurrency is unique for many reasons. Its primary function, though, is to serve as an electronic cash system that isn’t owned by any one party.
A good cryptocurrency will be decentralized. There isn’t a central bank or subset of users that can change the rules without reaching consensus. The network participants (nodes) run software that connects them to other participants so that they can share information between themselves.
On the left is what you’d expect something like a bank to use. Users must communicate via the central server. On the right, there is no hierarchy: nodes are interconnected and relay information between themselves.
The decentralization of cryptocurrency networks makes them highly resistant to shutdown or censorship. In contrast, to cripple a centralized network, you just need to disrupt the main server. If a bank had its database wiped and there were no backups, it would be very difficult to determine users’ balances.
In cryptocurrency, nodes keep a copy of the database. Everyone effectively acts as their own server. Individual nodes can go offline, but their peers will still be able to get information off of other nodes.
Cryptocurrencies are therefore functional 24 hours a day, 365 days a year. They allow for the transfer of value anywhere around the globe without the intervention of intermediaries. This is why we often refer to them as permissionless: anyone with an Internet connection can transmit funds.
The term “cryptocurrency” is a portmanteau of cryptography and currency. This is simply because cryptocurrency makes extensive use of cryptographic techniques to secure transactions between users.
Public-key cryptography underpins cryptocurrency networks. It’s what users rely on to send and receive funds.
In a public-key cryptography scheme, you have a public key and a private key. A private key is essentially a massive number that would be impossible for anyone to guess. It’s often hard to wrap your head around just how big this number is.
For Bitcoin, guessing a private key is about as likely as correctly guessing the outcome of 256 coin tosses. With current computers, you wouldn’t even be able to crack someone’s key before the heat death of the universe.
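To put a rough number on those 256 coin tosses, here is a quick, purely illustrative Java check of the size of the search space:

```java
import java.math.BigInteger;

public class KeySpace {
    public static void main(String[] args) {
        // 256 coin tosses -> 2^256 equally likely outcomes,
        // roughly 1.16 * 10^77 possible private keys.
        BigInteger keySpace = BigInteger.valueOf(2).pow(256);
        System.out.println(keySpace.toString().length() + " decimal digits");
    }
}
```

A 78-digit number of candidates is why brute-forcing a key is not a realistic attack.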
Anyway, as the name might suggest, you need to keep your private key secret. But from this key, you can generate a public one. The public key can safely be handed out to anyone, as it is practically impossible to reverse-engineer it to get your private one.
You can also create digital signatures by signing data with your private key. It’s analogous to signing a document in the real world. The main difference is that anyone can say with certainty whether a signature is valid by comparing it with the matching public key. This way, the user doesn’t need to reveal their private key, but can still prove their ownership of it.
In cryptocurrencies, you can only spend your funds if you’ve got the corresponding private key. When you make a transaction, you’re announcing to the network that you want to move your currency. This is announced in a message (i.e., transaction), which is signed and added to the cryptocurrency’s database (the blockchain). As mentioned, you need your private key to create the digital signature. And since anyone can see the database, they can check that your transaction is valid by checking the signature.
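The sign-then-verify flow described above can be sketched with Java's standard java.security API. This is a generic ECDSA example, not the signing scheme of any specific cryptocurrency, and the "transaction" is just an illustrative string:

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignDemo {
    public static void main(String[] args) throws Exception {
        // Key pair: the private key stays secret, the public key is shared.
        KeyPairGenerator gen = KeyPairGenerator.getInstance("EC");
        gen.initialize(256);
        KeyPair pair = gen.generateKeyPair();

        byte[] transaction = "send 1 coin to Bob".getBytes(StandardCharsets.UTF_8);

        // Sign the message with the private key.
        Signature signer = Signature.getInstance("SHA256withECDSA");
        signer.initSign(pair.getPrivate());
        signer.update(transaction);
        byte[] signature = signer.sign();

        // Anyone holding only the public key can check the signature.
        Signature verifier = Signature.getInstance("SHA256withECDSA");
        verifier.initVerify(pair.getPublic());
        verifier.update(transaction);
        System.out.println(verifier.verify(signature)); // prints: true
    }
}
```

Note that verification needs only the public key and the signed message; the private key never leaves its owner.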
There have been a handful of attempts at digital cash schemes over the years, but the first of the cryptocurrencies was Bitcoin, which was released in 2009. It was created by a person or group of people using the pseudonym Satoshi Nakamoto. To this day, their true identity remains unknown.
Bitcoin spawned a huge number of subsequent cryptocurrencies – some aiming to compete, and others seeking to integrate features not available in Bitcoin. Nowadays, many blockchains do not just allow users to send and receive funds, but to run decentralized applications using smart contracts. Ethereum is perhaps the most popular example of such a blockchain.
At first glance, cryptocurrencies and tokens appear identical. Both are traded on exchanges and can be sent between blockchain addresses.
Cryptocurrencies are exclusively meant to serve as money, whether as a medium of exchange, store of value, or both. Each unit is functionally fungible, meaning that one coin is worth as much as another.
Bitcoin and other early cryptocurrencies were designed as currency, but later blockchains sought to do more. Ethereum, for instance, does not just provide currency functionality. It allows developers to run code (smart contracts) on a distributed network, and to create tokens for a variety of decentralized applications.
Tokens can be used like cryptocurrencies, but they’re more flexible. You can mint millions of identical ones, or a select few with unique properties. They can serve as anything from digital receipts representing a stake in a company to loyalty points.
On a smart-contract-capable protocol, the base currency (used to pay for transactions or applications) is separate from its tokens. In Ethereum, for instance, the native currency is ether (ETH), and it must be used to create and transfer tokens within the Ethereum network. These tokens are implemented according to standards like ERC-20 or ERC-721.
Essentially, a cryptocurrency wallet is something that holds your private keys. It can be a purpose-built device (a hardware wallet), an application on your PC or smartphone, or even a piece of paper.
Wallets are the interface that most users will rely on to interact with a cryptocurrency network. Different types will offer different kinds of functionality – evidently, a paper wallet cannot sign transactions or display current prices in fiat currency.
For convenience, software wallets (e.g. Trust Wallet) are considered superior for day-to-day payments. For security, hardware wallets are virtually unmatched in their ability to keep private keys away from prying eyes. Cryptocurrency users tend to keep funds in both types of wallets.
I'm a teacher and I need a macro or even other solution are ok (a java or python program, whatever will work) to create an evaluation sheet into an Excel file for each test I prepare in a given class. I attach an .xlsx file as example and a document with all the instruction written inside. Thank you
...-App for iOS and Android (will be uploaded to our Google Play and Apple Store) -Cart -1 page for contact -1 page to fill a form with information -PayPal as payment gateway -Ecommerce features like: discount, promotion, coupon code. A Figma file with the mockup will be released to the hired developer. Budget is very low, so send your best price. For avoid
.../ Underground / HIP Hop style / Underground image of anything YOU (the perspective artist) may deem that will flow with what is already on the channel. It should be animated. It can be an animal object or man / woman /group that may be dancing performing an act related to or can be related, portraying a "HIP" or "Cool" Style or the like. Please look
I am looking for someone to design an African American fairy in Anime style for a school project for me. She (female) would be medium brown in skin color with dread locs. I would like a galaxy scheme. Her name is Cosmic Sydney if that helps.
Hello, I am looking for the experienced Excel/VBA, or C# Developer, who can help me with this: I have an Excel Sheet, where the data are stored, Transfer those data to sharepoint list, with some more adcanced functions and formating (JSON...), email summary notification, when, which data changed based on calculated column, simple Power BI from sharepoint
You need to translate a app description text for a...Monday 28 September (you can deliver the translation earlier if you finish earlier), You can find English app description text attached to this job task Please keep original style and formatting when delivering a final translation Please leave the name of the app - "Voicer" in English. Thank you
...that will keep our viewers entertained throughout the entire video, And have good accent. If you want to entre just send me sample of your voice with the style down below These are examples of the style of voice over we are looking for. [login to view URL] [login to view URL] [login to view URL]
I need to proofread a thesis and amend in-text citations and the references list. You must be a graduate of a sciences or engineering college and experienced in academic writing style and APA 6 format. Otherwise, you will waste both my time and effort and yours. No payment for failed work. I need a freelancer who would work directly online, using a Google form, so that
...note. All that is selected in the posting stage must appear on the listing page and on the bill of sale. Two types of auction: reserved and unreserved. An unreserved auction is free, where the item will be sold to the top bidder. A reserved auction has a $2 fee that must be paid before the timed auction goes live. No listing fees, and you can only relist the same
I am looking for ecommerce app and website with a admin panel which should update the contents and images . This should also allow me to change themes . Complete project should given with source code and should be like plug and play.
I have been doing some work with the free version Azure AD and I'm looking to hire someone for a few hours. 1- We changed our .local domain internally to a .com domain. Now our website doesn't work internally because our .com internally is our local domain instead of our website. So I need help figuring the best solution for managing our DNS. 2-
I am looking for a programmer to develop a simple Android app via kivy (python mandatory) + buildozer (compiler mand...(and it specific chromedriver to run it in background mode) If you have skills in kivy + python + selenium and android environment write to me (chat) for further details. Feel free to chat me to discuss the project Thanks in advance
We are "Delion" and we are a creative and highly professional digital studio based in Europe. Our diverse and experienced team excels in Python, Android, Django, HTML5, Scrum Development, PostgreSQL, AngularJS, Scrum, Docker. We've been successfully working in Website and App Development for more than 7 years. All projects are always well-planned and managed by certified profes...
...with Rent/sale price, valid from-to form; after approval, display on the web page. Directory: category-wise list, mobile no. directory from the Request page on registration/signup. Free registration and paid registration. Extra button (01/02). SMS verification of the user account. The website also needs an app icon, i.e. on the Play Store link using our Play Console
...some sections out and put new sections in. There are some things on there that don't work. For example my newsletter link. I want to put a list builder tool on - sign up for free product. Create a zoom link page Create a membership section and membership prices. I also need to make sure it's all protected and private (following all the rules). Wix
I need an detailed picture for a tattoo. I attached examples for you to have an idea what I need. The resul...mask is the girl has an bad and an good site. so anyhow you can visualize an good an an bad face on the tattoo. The overall result should look like picture 4. But you can feel free to add more details on your own, like roses or other elements.
Hi, I am Nicolas, Co-founder of ID Genève, a new identity in the world of watchmaking. We are loo...Genève, a new identity in the world of watchmaking. We are looking for one motivated person to monetize our campaigns online. If you have previous a solid background, feel free to contact me for further information. Nicolas Freudiger [login to view URL]
The box is a 330mm length x 330mm width x 366 mm height. The photo and logo of the brand will be provided. What you need to do is to crea...width x 366 mm height. The photo and logo of the brand will be provided. What you need to do is to create the box of the air fryer to look elegant and premium. Thank you. Feel free to ask more about the job scope.
[login to view URL] Subtitles in this style [login to view URL]
Dear Translators, I have a file th...shipping policy: We care to see all our customers receive their purchases as fast as possible. However, Shipping time usually takes between 7-15 days as we commit to provide FREE shipping at no extra costs for our customers. We provide you with a link to track your shipment right after we process your purchase.
PLEASE READ CAREFULLY - UNDERSTAND WORK AND THEN PLACE YOUR BID FOR FULL WORK. * THIS WORK IS NOT FOR JUST 1 PRODUCT. * DO NOT BID PER WORD COUNT OR PER PRODUCT. * YOUR BID WILL BE TAKEN AS TO COMPLETION OF FULL LISTED WORK. Product Content : Total Products 1500 Total Words for each Product : 1500 x 120 = 180,000 Product Title: • Maximum 6 - 7 Relevant Words. This is a Keyword and words us...
I have 3 little piggie characters already designed and drawn in the style I want but the original artist is no longer available so I need someone who can redraw the characters in different poses and costumes. I like psd files, layers intact, colour layer and black layer, background transparent. I only require these 3 characters and props at present
...are accepted but only if a contribution is made on a ranking page in the site lining back to profile Reporting - bi weekly reports required - You will work on a live google sheet Expectations - hundreds of links per domain each month - Open line of communication at all times - You have SEO tools to find potential backlinks. BLACK HAT LINK BUILDING
...Tasks Upload content once taught and recorded by Rachael onto back end of YDB site- then email YDB people to inform them of new content and post on YDB facebook group Post in Free Facebook group twice per week Create social posts ie find quotes, upload testimonials onto canva Monthly tasks Manage affiliates and source new affiliates Source new collaborations
...our clients. The system is supposed to be built in Salesforce with the following functionality: * Create Developer profiles incl. Media, CV typical data, projects (timeline style maybe), availability, top skills rated (e.g. JS 4*, Angular 4*, php 3*, ...) * Ability for the staff members to edit their profiles themselves * Search by filter criteria and
I am developing a card game and need someone to create the 66 Art Deco cards based on different themes that will go into the deck. *This is a graphic/dark game...also have to be included. This game has already been licensed. You may have to sign a non-disclosure. A website might also be built for it. See example files attached for style reference
...could be used for products in future. I also would need a name for my brand that could be transformed as I am looking to do a course on brows. I like minimalistic , modern style , that would not look too girly or cheap. I want it to be an empire on day. Later on I will probably need some business cards and any ideas for Instagram and fb profile appearance
I need help with building a website to showcase consulting services. The web...on the key services we provide categorized in two sections. I will provide an excel document that outlines the pages, text content in each, sign up questionnaire and the style of theme and structure I am looking for. I need help with the website build, design, and launch
we have a product list of 4000 products and need someone to type those in on the website but more importantly start with finding pictures of those 4000 products in 400X400 size and send them to me with proper naming. maybe then can give access to website backend and get the products uploaded by you as well
...everyday i will report you and close project before any milestone is released, all features of milestone must be fully completed YOU MUST HAVE PAST EXPERiENCE IN ONLINE ECOMMERCE WEBSITES Don't ask questions about: Do you have design? Are you designer Do you have wireframe Do you have code? Or anything like that deadline for each milestone must
Design and Develope from 0 shopify ecommerce with 300 products uploaded and connected with catalogue on facebook-instagram-google shopping -ebay shop (still to create), SEO plug in, Languages: primarly in italian and secondly in english Prices: different prices for different countries for the same product Assistance webinar
...electronic products and have been working with our German partner for a very long time. Now we are starting our own online presence and are looking for a freelancer to create an ecommerce store from scratch. The online shop to look like this: [login to view URL] Now the question naturally arises: what should bdone and what not? The answer is:
Clojure Goodness: Reapply Function With Iterate To Create Infinite Sequence
The iterate function creates a lazy, infinite sequence based on function calls. The iterate function takes a function and an initial value as arguments. The first element in the sequence is the initial value; next, the function is invoked with the previous element as argument, and this continues for each new element. Suppose we have a function #(+ 2 %) that adds 2 to its input argument. If we use this function with iterate and start with the value 1, the first elements of the sequence will be 1, (+ 2 1), (+ 2 3), (+ 2 5). So the first element is the initial value; the next element is the invocation of the function with input argument 1. The result of this function is 3, which is then the input for the function to calculate the next element, and so on.

In the following example code we use iterate in different scenarios:
(ns mrhaki.core.iterate
  (:require [clojure.test :refer [is]]))

;; Lazy sequence of numbers in steps of 2.
(def odds (iterate #(+ 2 %) 1))

(is (= (list 1 3 5 7 9 11 13 15 17 19)
       (take 10 odds)))

;; Define lazy sequence with a growing string.
;; The first element is ar, next argh, then arghgh etc.
(def pirate (iterate #(str % "gh") "ar"))
(def crumpy-pirate (nth pirate 5))

(is (= "arghghghghgh" crumpy-pirate))

;; Function that returns the given amount
;; plus interest of 1.25%.
(defn cumulative-interest
  [amount]
  (+ amount (* 0.0125 amount)))

;; Lazy sequence where each entry is the
;; cumulative amount with interest based
;; on the previous entry.
;; We start our savings at 500.
(def savings (iterate cumulative-interest 500))

;; After 5 years we have:
(is (= 532.0410768127441 (nth savings 5)))

;; Function to double a given integer
;; and return as bigint.
(defn doubler
  [n]
  (bigint (+ n n)))

;; Define lazy infinite sequence
;; where each element is the doubled value
;; of the previous element.
(def wheat-chessboard (iterate doubler 1))

;; First elements are still small.
(is (= (list 1 2 4 8 16 32) (take 6 wheat-chessboard)))

;; Later the elements grow much bigger.
(is (= (list 4611686018427387904N 9223372036854775808N)
       (->> wheat-chessboard (drop 62) (take 2))))

;; Sum of all values for all chessboard squares
;; is an impressive number.
(is (= 18446744073709551615N
       (reduce + (take 64 wheat-chessboard))))
Written with Clojure 1.10.1.
unsupported operand type(s) for *: 'NoneType' and 'int' in volume_level on line 178.
Because of the error below, Music Assistant doesn't load.
Logger: homeassistant
Source: custom_components/mass/player_controls.py:178
Integration: Music Assistant (documentation, issues)
First occurred: 12:01:12 PM (16 occurrences)
Last logged: 2:49:26 PM
Error doing job: Task exception was never retrieved
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/helpers/entity_platform.py", line 382, in async_add_entities
await asyncio.gather(*tasks)
File "/usr/src/homeassistant/homeassistant/helpers/entity_platform.py", line 619, in _async_add_entity
await entity.add_to_platform_finish()
File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 810, in add_to_platform_finish
self.async_write_ha_state()
File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 533, in async_write_ha_state
self._async_write_ha_state()
File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 573, in _async_write_ha_state
attr.update(self.state_attributes or {})
File "/usr/src/homeassistant/homeassistant/components/media_player/__init__.py", line 990, in state_attributes
if (value := getattr(self, attr)) is not None:
File "/config/custom_components/mass/media_player.py", line 202, in volume_level
return self.player.volume_level / 100
File "/config/custom_components/mass/player_controls.py", line 178, in volume_level
return self.entity.volume_level * 100
TypeError: unsupported operand type(s) for *: 'NoneType' and 'int'
Marcel fixed a player issue today that will be delivered with Beta 6. The root cause of that error seems to be the same (player_controls.py, line 178):
[548232032752] Error handling message: Unknown error (unknown_error)
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/components/websocket_api/decorators.py", line 27, in _handle_async_response
await func(hass, connection, msg)
File "/config/custom_components/mass/websockets.py", line 98, in async_get_mass_func
await orig_func(hass, connection, msg, mass)
File "/config/custom_components/mass/websockets.py", line 763, in websocket_players
result = [item.to_dict() for item in mass.players.players]
File "/config/custom_components/mass/websockets.py", line 763, in <listcomp>
result = [item.to_dict() for item in mass.players.players]
File "/usr/local/lib/python3.9/site-packages/music_assistant/models/player.py", line 331, in to_dict
"volume_level": int(self.volume_level),
File "/config/custom_components/mass/player_controls.py", line 178, in volume_level
return self.entity.volume_level * 100
TypeError: unsupported operand type(s) for *: 'NoneType' and 'int'
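Whatever the actual fix looks like, the crash comes from multiplying a `None` volume. A minimal sketch of a guard around the failing property (not the actual Music Assistant code; the class name and the fallback value of 0 are illustrative assumptions):

```python
class PlayerControl:
    """Illustrative stand-in for the wrapper in player_controls.py."""

    def __init__(self, entity):
        self.entity = entity  # the wrapped Home Assistant media_player

    @property
    def volume_level(self):
        # entity.volume_level can be None while the underlying player is
        # off or unavailable -- that is what raised the TypeError above.
        vol = self.entity.volume_level
        return int(vol * 100) if vol is not None else 0
```

With the guard in place, an entity reporting `None` yields a harmless default instead of raising.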
should be fixed in beta 6
Let's wait for @jeroenterheerdt to confirm whether it's solved. I haven't seen it in b6.
Update 2005-05: On May 01, 2005 OASIS announced that the OASIS ebXML Registry Information Model (RIM) v3.0 and ebXML Registry Services and Protocols (RS) v3.0
specifications had been approved as OASIS Standards, following public review and submission of the specifications for ballot by the OASIS membership. In addition to the two principal prose specifications in PDF (ebXML Registry Information Model 3.0, ebXML Registry Services and Protocols 3.0), the ballot submission package contains five XML Schemas (xsd), twenty-one XML scheme files, and ten WSDL description files.
[February 14, 2005] The OASIS ebXML Registry Technical Committee has voted to approve its ebXML Registry Version 3.0 specification as a Committee Draft and to advance the draft for public review in preparation for ballot as an OASIS Standard.
The ebXML Registry Version 3.0 release is a package of some forty-four files, including the two prose documents ebXML Registry Information Model (RIM) and ebXML Registry Services and Protocols (RS). XML schemas and WSDL files complete the distribution. The specification uses schema documents conforming to W3C XML Schema, and normative text to describe the syntax and semantics of XML-encoded objects and protocol messages.
An ebXML Registry is "an information system that securely manages any content type and the standardized metadata that describes it. It provides a set of services that enable sharing of content and metadata between organizational entities in a federated environment. An ebXML Registry may be deployed within an application server, a web server or some other service container. The registry may be available to clients as a public, semi-public or private web site. The ebXML Registry thus provides a stable store where submitted information is made persistent. Such information is used to facilitate business to business relationships and transactions."
In this context, submitted content for an ebXML Registry includes, but is not limited to: XML schema and documents, process descriptions, ebXML Core Components, context descriptions, UML models, information about organizations, and software components.
The ebXML Registry Information Model (RIM) specification defines the types of metadata and content that can be stored in an ebXML Registry. The companion document ebXML Registry Services and Protocols (RS) defines the services provided by an ebXML Registry and the protocols used by clients of the registry to interact with these services.
According to the RIM specification, an ebXML Registry is capable of storing any type of electronic content such as XML documents, text documents, images, sound and video. Instances of such content are referred to as RepositoryItems. RepositoryItems are stored in a content repository provided by the ebXML Registry. In addition to the RepositoryItems, an ebXML Registry is also capable of storing standardized metadata that may be used to further describe RepositoryItems. Instances of such metadata are referred to as RegistryObjects, or one of its sub-types. RegistryObjects are stored in the registry provided by the ebXML Registry.
The ebXML Registry Version 3.0 specification outlines a number of registry use cases. "Once deployed, the ebXML Registry provides generic content and metadata management services and as such supports an open-ended and broad set of use cases. The following are some common use cases that are being addressed by ebXML Registry: (1) Web Services Registry, used for publishing, governance, discovery and reuse of web service descriptions in WSDL, ebXML CPPA and other forms; (2) Controlled Vocabulary Registry, which enables publishing, governance, discovery and reuse of controlled vocabularies including taxonomies, code lists, ebXML Core Components, XML Schema and UBL schema; (3) Business Process Registry, which enables publishing, governance, discovery and reuse of Business Process specifications such as ebXML BPSS, BPEL and other forms; (4) Electronic Medical Records Repository; (5) Geographic Information System (GIS) Repository that stores GIS data from sensors."
Version 3.0 of the ebXML Registry adds considerable functionality vis-à-vis version 2.0. The version 2.0 specifications for RIM and RS were approved as OASIS Standards in May 2002. According to a summary provided by specification co-editor Farrukh Najmi, a key focus of V3.0 "is federated and secure information management capabilities. A new federation feature enables multiple registries to seamlessly provide a unified information store that enables clients to discover any information using a single federated query that searches all members of the federation. Information in one registry can seamlessly link with information in any other registry and any information from one registry may be replicated in any other registry."
A royalty-free, open source implementation of the latest TC Approved ebXML Registry 3.0 specifications is available from the freebXML Registry project.
Upon request, Farrukh Najmi (Sun Microsystems) has provided a description of new functionality in the ebXML Registry Version 3.0. Together with Sally Fuger and Nikola Stojanovic, Farrukh is co-editor of the ebXML Registry Information Model Version 3.0 (RIM) and ebXML Registry Services and Protocols Version 3.0 (RS) specifications:
[Farrukh writes:] An ebXML Registry is a federated registry and repository that manages all types of electronic content described by standard and extensible meta data. To use a familiar metaphor, an ebXML Registry is like your local library. Its repository contains all types of electronic content much like the library contains all types of printed content. Its registry contains meta data describing the content much like the library's card catalog contains information about its printed content. Any number of ebXML Registries can work together to offer a unified service much like multiple libraries can participate in a cooperative network and offer a unified service.
The central theme of version 3.0 is federated and secure information management capabilities. A new federation feature enables multiple registries to seamlessly provide a unified information store that enables clients to discover any information using a single federated query that searches all members of the federation. Information in one registry can seamlessly link with information in any other registry, and any information from one registry may be replicated in any other registry.
Since the federation capability blurs the boundaries between registries, it demands enterprise security features such as federated identity management, federated policy management, authentication, access control and authorization, both within and across enterprise boundaries.
To meet these increased demands for security in a federated environment, version 3.0 leverages the OASIS SAML 2.0 specifications to enable federated Identity Management, Authentication and Single Sign On (SSO) across federation members. Version 3.0 also leverages the OASIS XACML 1.0 standard to enable fine-grained access control expressed in XACML Access Control Policies. In addition, version 3.0 leverages OASIS WSS: SOAP Message Security 1.0 and OASIS WSS: SOAP Message with Attachments (SwA) Profile 1.0 to secure the registry's SOAP protocol messages.
Another major theme of ebXML Registry version 3.0 is content management capabilities. Version 3.0 defines an extensible service interface and protocol for adding Content Management Services (CMS) as plugins to an ebXML Registry. It also defines two specific CMS interfaces, a Content Validation Service interface and a Content Cataloging Service interface. Content Validation Services automatically enforce domain-specific business rules when content is published to the registry, consequently improving the quality of content in a registry. For example, in the medical domain a Content Validation Service plugin could automatically enforce the business rule that a Medication Order does not conflict with an allergy recorded for the patient. The registry does not accept any content that is deemed invalid by a relevant Content Validation Service.
Content Cataloging Services automatically catalog published content based upon domain-specific rules. For example, an image published to the registry could automatically be classified as monochrome, gray-scale or color, enabling queries that can search specifically for color images. The Content Cataloging feature makes registry content more discoverable.
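To make the cataloging idea concrete, here is an illustrative sketch only (this is not the ebXML CMS interface itself): a cataloging service derives extra metadata, here a color class, from submitted content so that the class becomes queryable in the registry.

```python
def catalog_image(pixels):
    """Classify an image by inspecting its (r, g, b) pixel tuples.

    Hypothetical cataloging rule in the spirit of the example above:
    pure black/white -> monochrome, equal channels -> gray-scale,
    anything else -> color.
    """
    if all(r == g == b and r in (0, 255) for r, g, b in pixels):
        return "monochrome"
    if all(r == g == b for r, g, b in pixels):
        return "gray-scale"
    return "color"
```

A registry plugin would attach the returned label as metadata on the RepositoryItem, so a query can then filter on it.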
A new content versioning feature enables any content or meta data in the registry to be automatically versioned whenever it is updated. The feature is based upon IETF RFC 3253, also known as DeltaV.
Version 3.0 defines a new HTTP binding to the ebXML Registry service interfaces using a REST style architecture. The HTTP binding enables any content or meta data in the registry to be accessible over an HTTP URL using a standard web browser. The registry assigns a default URL to all content and meta data, which may be augmented with publisher-defined URLs. The HTTP binding also supports a file/folder metaphor that enables content in the registry to be organized in folders much like a file system, and it enables registry folders containing content and meta data to be browsed using a standard web browser. The HTTP binding complements the existing SOAP binding to the ebXML Registry.
Since its inception the ebXML Registry has supported arbitrary ad hoc queries using either a standard SQL-92 syntax or an XML Filter query syntax. While this feature enables flexible discovery use cases, the resulting queries can be complex and daunting. To address this concern, version 3.0 adds a Parameterized Stored Query feature that allows a query to be stored in the registry in parameterized form. To invoke the query a client simply provides some or all of the query parameters, and is blissfully spared the underlying complexity of the query. Typically such parameterized queries are exposed to the user as Web Forms where the user fills in the parameters and submits the query.
Version 3.0 provides a content-based event notification capability. A subscription may be created by a user for receiving a specific type of event. A registry notifies subscribers of events matching their subscription by sending a notification to a web service or an email address end point.
Finally, version 3.0 makes the ebXML Registry even more standards compliant by aligning with a variety of other standards. These include Web Services Security: SOAP Message Security 1.0, Web Services Security: SOAP Message with Attachments (SwA) Profile 1.0, WS-I: Basic Security Profile 1.0, and WS-I: Basic Profile 1.1, in addition to SAML 2.0 and XACML 1.0. [Farrukh Najmi is co-editor of the version 3.0 RIM and RS CD-level specifications]
Binary Divisibility by 10
Date: 04/07/99 at 21:28:03
From: Anna
Subject: Binary divisibility by 10

I was wondering how you can tell whether a binary number of arbitrary size is divisible by 10, without looking at the whole number. The maximum size of consideration is really important, and within 32 bits would be really good, although around 30 words (1 word = 32 bits) is also okay. So is looking at either the most significant or least significant end of the number. I figured out that the number has to be divisible by 2 and 5. Divisibility by 2 of a binary number is easy, but what about by 5? Or is there another way of looking at the problem? Thanks a lot.
Date: 04/08/99 at 16:54:49
From: Doctor Peterson
Subject: Re: Binary divisibility by 10

Hi, Anna. This is a great question! I'd never thought much about divisibility rules for base 2 except those that carry over from base 10 (such as divisibility by 3 = 11 base 2, which is the same as the rule for 11 base 10). But I managed to find a rule for binary divisibility by 5 by using some of the same techniques. It's a little tricky, but quite workable. Then I realized that since you're really working not with individual bits but with whole bytes or words, there's a much easier way to do it. I'll first show you the strictly binary method, since it can be instructive, then I'll show you the better way, which in fact is just like the decimal rule for divisibility by 3.

I'll work with the number 617283950 = 100100110010110000000101101110.

First split the number into odd and even bits (I'm calling "even" the bits corresponding to even powers of 2):

    100100110010110000000101101110
     0 1 0 1 0 0 1 0 0 0 1 1 0 1 0   even
    1 0 0 1 0 1 1 0 0 0 0 0 1 1 1    odd

Now in each of these, add and subtract the digits alternately, as in the standard test for divisibility by 11 in decimal (starting with addition at the right):

    100100110010110000000101101110
    +0-1+0-1+0-0+1-0+0-0+1-1+0-1+0 = -2
    +1-0+0-1+0-1+1-0+0-0+0-0+1-1+1 =  1

Now double the sum of the odd digits and add it to the sum of the even digits:

    2*1 + -2 = 0

If the result is divisible by 5, as in this case, the number itself is divisible by 5. Since this number is also divisible by 2 (the rightmost digit being 0), it is divisible by 10.

This algorithm can be applied to numbers of any size; the highest either sum can be is half the number of bits you're working with. And since the pattern repeats every 4 bits, you can work with one byte (or word, or long word) at a time, making a simple loop.

Now you're asking, how did I ever figure this out?
Here's what I did: I looked at powers of 2 modulo 5:

    2^0 =  1 =  1 mod 5
    2^1 =  2 =  2 mod 5
    2^2 =  4 = -1 mod 5
    2^3 =  8 = -2 mod 5
    2^4 = 16 =  1 mod 5    now it repeats ...

(Two numbers are equal, or more properly "congruent," modulo 5, if they give the same remainder when they are divided by 5; the sum or product of numbers that are congruent mod 5 will still be congruent.)

Now a binary number is the sum of selected powers of 2. If the bits are, say, abcdef, then the number is

    f*2^0 =  1f =  1f mod 5
    e*2^1 =  2e =  2e mod 5
    d*2^2 =  4d = -1d mod 5
    c*2^3 =  8c = -2c mod 5
    b*2^4 = 16b =  1b mod 5
    a*2^5 = 32a =  2a mod 5
    ------       ---------
    abcdef = 2(e-c+a) + (f-d+b) mod 5

This is the formula I gave you.

Now for the method I'm really recommending. Remember I pointed out that the pattern repeats every 4 bits? That's because 2^4 = 16 = 1 mod 5. Rather than work in binary, we can work in hexadecimal, just adding up the hex digits (4-bit nibbles) of the number. If this sum is divisible by 5, the number itself is.

Even better, you can work with a whole byte at a time (base 256), since 256 = 1 mod 5 as well. That's about as simple as it can get: add all the bytes of the number, and if the sum is divisible by 5, the number is divisible by 5. The only disadvantage to working at this larger scale is that the sums will be larger; but for your 120-byte numbers the sum will be no more than 120*256 = 30720. If you don't want to check this for divisibility by 5 by dividing, you can apply the binary algorithm to it.

Here's why this works, in simple terms:

    a*256^0 + b*256^1 + c*256^2 + ... + z*256^n
      = a*1 + b*(255 + 1) + c*(255 + 1)^2 + ... + z*(255 + 1)^n
      = (a + b + c + ... + z) + 255*(something)

so if a+b+c+...+z is divisible by 5, then so is abc...z (base 256).

In our example, 617283950 = 24CB016E; 24+CB+01+6E = 15E = 350 (base 10), which is divisible by 5; since 6E is even, 24CB016E is divisible by 10.

- Doctor Peterson, The Math Forum
  http://mathforum.org/dr.math/
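The byte-sum method Doctor Peterson recommends can be sketched in a few lines (the function name is my own; this is just one way to code the rule):

```python
def divisible_by_10(n: int) -> bool:
    """Test n for divisibility by 10 using the rules above:
    divisible by 2 (lowest bit clear) and by 5 (byte sum mod 5),
    since 256 is congruent to 1 mod 5."""
    if n & 1:                # odd numbers are never divisible by 10
        return False
    total = 0
    while n:                 # sum the base-256 digits (bytes) of n
        total += n & 0xFF
        n >>= 8
    # The sum is small, so a direct mod-5 check here is cheap;
    # the bit-level test above could be applied to it instead.
    return total % 5 == 0
```

For the worked example, 617283950 has byte sum 0x24 + 0xCB + 0x01 + 0x6E = 350, which is divisible by 5, and the number is even, so the function reports it divisible by 10.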
Back to EAGLES-II Home Page
This page is devoted to links to other sites of interest to those involved in evaluation. If you have any suggestions for other useful links please send them to firstname.lastname@example.org
EAGLES-I (EAGLES Evaluation Working Group). The final report: "Evaluation of natural language processing systems".
TEMAA (A Testbed Study of Evaluation Methodologies: Authoring Aids). A project building on the EAGLES-1 evaluation framework formalised and implemented as a Parameterisable Testbed (PTB) and applied to the concrete evaluation of Danish and Italian.
TSNLP (Test Suites for Natural Language Processing). A project developing guidelines to create test suites for evaluation and testing of NL systems, as well as substantial test suites in three languages: English, French and German.
DiET (Diagnostic and Evaluation tools for Natural Language Applications). A project to develop data, methods and tools for glass box evaluation of NLP components.
in Language and Speech Engineering). A project to provide a framework for semi-automatic quantitative black-box evaluation for the language and speech engineering industry.
ISO (International Organization for Standardization)
TREC (Text REtrieval Conference) A programme to encourage research in information retrieval from large text collections.
MUC (Message Understanding Conference) performing evaluations of information extraction systems according to pre-established tasks.
GRACE (Grammaires et Ressources pour les Analyseurs de Corpus et leur Evaluation)
SENSEVAL (Evaluating Word Sense Disambiguation Systems)
LDC (the Linguistics Data Consortium) Creates, collects and distributes speech and text databases, lexicons and other resources for research and development purposes.
NIST Spoken natural language processing Promotes the advancements of speech recognition and understanding by developing measurement methods, providing reference materials (speech corpora), coordinating benchmark tests within the R&D community and building prototypes.
NLPIR (Natural Language Processing and Information Retrieval) promotes the use of more efficient techniques for manipulating (largely) unstructured textual information, organising TREC conferences, creating test collections, and developing improved evaluation methodology.
SERCO (Serco Usability Services) Provides usability and user-centred design services and expert and user-based evaluation
INUSE (Information Engineering Usability Support Centres) an EU-sponsored project to set up a network of Usability Support Centres across Europe to assist both companies and projects within the EC Telematics Applications Programme.
MEGATAQ (Methods and Guidelines for the Assessment of Telematics Application Quality) The MEGATAQ objective is to provide Telematics Applications Projects (TAPs) with user-centred evaluation guidelines and consultancy.
LAUREATE BT's text-to-speech synthesis system
Back to EAGLES-II Home Page
A smart card is a card that may be capable of data storage, or may also have a microprocessor and therefore be, in essence, a miniature computer capable of data processing. The maximum specifications of a smart card include 8 KB (kilobytes) of RAM, 346 KB of ROM, and 256 KB of programmable ROM, along with a 16-bit microprocessor. A smart card programmer is a device for programming smart cards.
There are a growing number of uses for smart cards. They may be used in computer security systems, for example in smart card keyboards, or in building access. They can function as credit cards, electronic cash, or banking cards. They can also be employed in loyalty systems or as identification cards. The precise use that is planned for the smart cards will dictate how the smart card programmer is set up to format them.
Some of the decisions that must be made before the smart card programmer is put to work involve deciding whether the card will have a single use or multiple uses, whether it will keep record information or value, and whether some or all of the data on the card must be kept secure. Decisions about encryption and validation to access the card, such as passwords or PINs, are also important to make before choosing the appropriate smart card and smart card programmer and making sure the card and programmer are compatible. Another important issue is the language that will be used to program the cards: while some cards are programmed in Java, others are programmed in BASIC or other languages.
Another important consideration when choosing the smart card programmer and one of the key differences between models, is whether it is a contact smart card programmer or a contactless smart card programmer. A contact smart card is one that must be inserted into a reader. A contactless smart card, on the other hand, has a Radio Frequency Identification tag (RFID) embedded, and — as long as it comes within “reading range” — can communicate with a smart card reader at a distance.
Some smart card programmer devices are made to work with many types of smart cards, while others are configured for only one specific type. Some are meant for cards that are going to be disbursed, and some are designed for on-site use in circumstances in which re-keying and/or adding new users are ongoing issues. In addition, some smart card programmers are provided in a case that may double as a reader and that includes a battery pack, a protective cover plate, and other protective, durability, and functional features. Others are provided as a circuit board to which a case, a serial cable, and a 9-volt DC battery must be added for functionality and to protect the programmer. The first type sells for over $1,000 US Dollars (USD), while the latter sells for less than $25 USD.
7za a -tzip Test.zip MyWork\*.* -r
To many long-time users of the Windows DOS prompt, this is the standard way to instruct a program to process all files; Del, Attrib, Copy, XCopy, Cacls, Dir, RoboCopy, and Rar all conform to this convention, and hence one naturally assumes that 7za.exe would honor it too.
Not so, and I learned this the painful way. According to the help file of 7-Zip:
7-Zip doesn't follow the archaic rule by which *.* means any file. 7-Zip treats *.* as matching the name of any file that has an extension. To process all files, you must use a * wildcard.
Well, while it is admirable for someone to take a partial stand against this 'archaic rule', it is dangerous to use the same syntax and then silently produce a different result set. It is like switching the active and neutral wires in the electrical wiring just because someone took exception to the color-coding convention of the wires.
The fact remains that before 7-Zip came along to Windows, that convention had already been established and entrenched, way before it was mistakenly claimed by 7-Zip to have been introduced in Win 95. There are millions of users accustomed to this convention, whether it is logical or illogical; it has become their second nature. It is like arguing whether it is logical or illogical to drive on the left-hand side or right-hand side of the road: a lone group of dissenting drivers taking a stand can not only play havoc on our roads but also produce fatalities.
A convention has been established, and people driving on public roads therefore have to conform to it, like it or not. In the US, all vehicles driving in a mine drive on the opposite side to those on the public road. That change of convention was explicitly stated, for reasons sound in mining operations, and drivers are deliberately made to go through a change-over section.
7za, however, did not do such a thing. It took *.* to mean 'all files must have extensions', in contrast to the Windows convention, which means 'all files, with or without extensions', and is understood by millions.
It is not a dispute of whether 7za is correct. It is raised here because of the dangerous and irresponsible practice of flouting a convention while using the same syntax.
7za a -tzip Test.zip MyWork\*.* -r
should pick up all files in a Subversion repository, regardless of whether the files have extensions or not; many Subversion files do not have extensions. Rar with this command picks up all files, honoring the Windows convention:
Rar a -r Test.rar MyWork\*.*
This is not only wise but also responsible, realizing the consequence of failing to do so; taking a stand brings at best a hollow victory and at worst the wrath of users. Such is the case with 7za now.
According to the Windows API and its treatment of wildcard characters, there is no way to specify collecting only files with extensions. Using Dir as an example:
Dir *.* /s /b
and
Dir * /s /b
produce the same result: a list of files with or without extensions. The second form is 7za's way of specifying any file, with or without an extension, yet 7za accepts the first form and produces a totally different result set.
Dir *. /s /b
produces a list of files without extensions. But there is no wildcard syntax to say 'only files with an extension'.
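The clash is essentially POSIX glob semantics versus the Windows shell convention. Python's fnmatch module follows the POSIX-style interpretation, which makes the difference easy to demonstrate (a sketch for illustration only; the file names are made up):

```python
import fnmatch

files = ["README", "notes.txt", "format"]  # two names have no extension

# POSIX-style matching, which 7-Zip follows: '*.*' requires a dot.
posix_style = fnmatch.filter(files, "*.*")   # ['notes.txt']

# The Windows shell convention treats '*.*' the same as '*':
windows_style = fnmatch.filter(files, "*")   # all three names
```

The same pattern string selects one file under one convention and three under the other, which is exactly the silent divergence the post is complaining about.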
As a result of 7-Zip's hollow stand against a convention entrenched in more Windows users than 7za has, it has successfully dislodged people's trust in this program. An archiver should behave much like the Copy/XCopy commands; changing the operation behind the same syntax is extremely dangerous, and a developer should not toy with this kind of ideological stand in an important tool.
My ill-placed trust in 7za has cost me the collection of my Subversion repositories. It is an expensive loss, and 7za's ideological stand against an illogical convention is equally illogical, resulting in real loss. What is 7-Zip hoping to achieve? It has hardly won any friends!
The result is a total distrust of this tool. While I can use Subversion's commands to ascertain the integrity of the restored repository, other archives produced by 7za lack such detection, and hence there remains an unknown number of imperfect archives.
I don't discourage 7za from taking this kind of admirable stand, but it should be selectable by users. The default should always follow the convention of the OS into which it is deployed. 7za has already established this kind of overriding switches/options, and its admirable attempt to correct the convention should only take effect when the user makes the choice.
If that is a Unix convention, then either build 7za as a POSIX-conforming program that runs in Windows' POSIX subsystem, in which case 7za can even use case-sensitive file names, or provide a switch to turn on the Unix convention.
This should serve as a real-life example of the danger of developers failing to conform to an entrenched convention, no matter how 'archaic' or illogical it is. The first-occupancy rule applies here!
For me, 7za is now banished to the recycle bin, as it is too dangerous to use a tool that does not conform to convention. It will not earn my recommendation, for sure.
Migrate to ACLINT
In this commit, based on the implementation of CLINT, a preliminary ACLINT is implemented, including the basic logic for operating mtimer, mswi, and sswi. This commit also replaces CLINT with ACLINT; the old CLINT implementation was removed.
Currently, due to the incomplete implementation, the introduced ACLINT uses only supervisor-level IPIs. Therefore, although the logic for mswi is implemented, it is not used at the moment.
It can be tested by make check SMP=n, where n is the number of harts you want to simulate.
In the latest commit, ENABLE_ACLINT was added to the Makefile. This flag determines whether to use CLINT or ACLINT, with CLINT being the default. The ACLINT can be tested by make check SMP=n ENABLE_ACLINT=1 now.
I don't think we should maintain both CLINT and ACLINT implementation. The RISC-V Advanced Core Local Interruptor (ACLINT) enhances the existing SiFive CLINT design through several key improvements:
Modular Architecture: ACLINT adopts a modular approach by separating timer and inter-processor interrupt (IPI) functionalities into distinct devices. This design enables RISC-V platforms to selectively implement only the necessary components, providing greater flexibility in system design.
Dedicated S-Level IPI Device: The specification introduces a dedicated memory-mapped I/O (MMIO) device specifically for supervisor-level IPIs. This innovation eliminates the need to use Supervisor Binary Interface (SBI) calls for interrupt communication in Linux RISC-V systems, streamlining interrupt handling.
Multi-Instance Support: ACLINT supports multiple timer and IPI device instances, a critical feature for multi-socket or multi-die Non-Uniform Memory Access (NUMA) system architectures. This capability allows for more complex and scalable system designs.
Backward Compatibility: The specification maintains full compatibility with the original SiFive CLINT, ensuring that existing RISC-V platforms can seamlessly conform to the new ACLINT specification without requiring extensive redesigns or hardware modifications.
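As a rough illustration of the modular design (not this repository's actual code; the class name and behavior are assumptions based on the ACLINT spec), an MSWI device reduces to one 4-byte MSIP register per hart, with only bit 0 writable:

```python
class Mswi:
    """Sketch of an ACLINT MSWI device: MSIP[i] lives at offset 4*i."""

    def __init__(self, num_harts: int):
        self.msip = [0] * num_harts  # pending machine software interrupts

    def write(self, offset: int, value: int) -> None:
        hart = offset // 4
        self.msip[hart] = value & 1  # spec: only bit 0 is meaningful

    def read(self, offset: int) -> int:
        return self.msip[offset // 4]
```

Because the device carries no timer state, it can be instantiated per socket or per die, which is what enables the multi-instance NUMA layouts mentioned above.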
Recent Linux kernels provide support for ACLINT, and QEMU has also integrated this feature, allowing users to enable it through the -machine virt,aclint=on option. Given these developments, it is an opportune time to transition to the ACLINT architecture.
The final commit removed all implementations of CLINT, including the previously introduced ENABLE_ACLINT and OBJS_IR_CTRL. Currently, the project only contains the implementation of ACLINT.
Check https://cbea.ms/git-commit/ carefully and enforce the rules.
The first commit of this PR had two minor issues: the absence of #address-cells = <0> caused DTC warnings, and aclint.c lacked a newline at the end of the file. Two additional small commits were added to fix them, making the commit history cluttered, and the commit messages did not follow the rules listed at https://cbea.ms/git-commit/.
To address this, the first commit message has been rewritten in accordance with the commit message rules, and the two minor issues have also been fixed in the process.
The latest commit includes the following updates:
Unified the use of C-style comments throughout the code.
Removed the @brief notation, replacing it with plain and informative sentences.
Replaced backticks with single quotation marks for consistency.
Elaborated on the fundamental principles behind the transition from CLINT to ACLINT, emphasizing the motivations and benefits of this migration.
Summarized Linux kernel support for ACLINT along with relevant git revisions.
Provided detailed instructions on how to validate ACLINT functionalities, including specific testing strategies and validation methods.
Outlined potential future development paths to enhance and complete SMP support, specifying areas for improvement or expansion.
Don't attempt to summarize your changes. Instead, we always care about the changesets themselves. Let's consolidate each git commit without an unnecessary report.
Refine the wording in both the comments and the previous git commit message, and use calloc to allocate the ACLINT registers at runtime.
Thank @Mes0903 for contributing!
|
GITHUB_ARCHIVE
|
The RadGridView provides you with a selection functionality, which allows the user to select one or more items from the data displayed by the control.
The selection mechanism can be controlled programmatically, too. For more information take a look at the Programmatic Selection topic.
To select an item in the RadGridView click somewhere on the desired row.
As of Q2 2010, you can also select a single cell or individual cells as opposed to selecting the full row:
To set the selection unit, use the SelectionUnit enumeration property of the RadGridView. Setting it to FullRow will enable the selection of rows (default), while setting it to Cell will enable the cell selection.
The RadGridView provides three selection modes, which allow you to manipulate the type of selection. This is controlled by the SelectionMode enumeration property, which has the following entries:
Single - only one item can be selected at a time. (default value)
Multiple - items are added to the selection when they get clicked and get removed when they get clicked again.
Extended - items are added to the selection only by combining the mouse clicks with the Ctrl or Shift key.
To learn more about the multiple selection (Multiple and Extended selection modes) take a look at the Multiple Selection topic.
The RadGridView provides three selection units, which allow you to manipulate what units are selected when you interact with the grid. This is controlled by the SelectionUnit enumeration property, which has the following entries:
FullRow - this is the default value. Clicking within the cells will select the row.
Cell - only the clicked cell is selected. Depending on the value of the SelectionMode property, you can have more than one selected cell.
Mixed - you can select any cell, and you can also select a full row by clicking on the row, but not on a cell within it.
The Mixed state was added with the Q3 2012 SP release.
The SelectionUnit property is added in Q2 2010 release.
RadGridView provides several properties to get the data behind the selected items - SelectedItem, SelectedItems and SelectedCells.
SelectedItem - the business object that sits behind the selected row. You can use it when the SelectionUnit is set to FullRow (default), otherwise it is null.
SelectedItems - a collection of the business objects that sit behind the selected rows. You can use it when the SelectionUnit is set to FullRow (default), otherwise it is null. It will contain more than one item when the SelectionMode is either Multiple or Extended.
SelectedCells - a collection of GridViewCellInfo objects which represent the business object and the column of the selected cell/cells. You can use it when the SelectionUnit is set to Cell, otherwise it is null.
To disable the selection functionality, you can set the CanUserSelect property to false.
Note that this disables selection only for the end user; the user will still be able to change the current item, and it will still be possible to manipulate the selection programmatically. To learn more, take a look at the Programmatic Selection topic. If you would like to bind the SelectedItems collection of the RadGridView to a property in the ViewModel, you can check this forum thread.
<telerik:RadGridView x:Name="radGridView" CanUserSelect="False"> </telerik:RadGridView>
this.radGridView.CanUserSelect = false;
Me.radGridView.CanUserSelect = False
There are five events relevant to selection in the RadGridView: SelectionChanging, SelectionChanged, CurrentCellChanged, SelectedCellsChanging and SelectedCellChanged. The sequence of the events depends on the SelectionUnit property:
FullRow - the SelectionChanged event is fired first, and after that the CurrentCellChanged event fires.
Cell - the SelectedCellChanged event is fired first, and after that the CurrentCellChanged event fires.
Mixed (added with the Q3 2012 SP release) - you can select any cell, and you can also select a full row by clicking on the row, but not on a cell.
To learn more about the selection events go to this help topic.
To modify the selection color you have to modify the style of the RadGridView rows. To learn more about how to do it take a look at the Styling a Row topic.
|
OPCFW_CODE
|
System.NullReferenceException in VlcPlayer and ThreadSeparatedImage objects
Hello,
I'm using an older version of xZune.VLC in my project and it's working fine. I've noticed that you have improved many things in the recent commits, so I decided to upgrade my version of the library. Unfortunately, the new multithreading compatibility features seem to cause a lot of issues with my older approach to using the VlcPlayer control. The project I'm working on is quite complex, and giving all the details would be very difficult, so I will at least try to describe briefly how I'm using the VlcPlayer control.
I'm trying to follow the MVVM pattern in my project. In the view model I have a VlcPlayer object which is bound to the Content property of a ContentControl in the view. I have to do it that way because it gives me better control than having the VlcPlayer defined in XAML, and it works well.
One piece of functionality I have to provide is the ability to switch to fullscreen mode on the fly, without reloading the stream. After trying many approaches, the best one for me was to create a new window with appropriate settings, pass my VlcPlayer object to its view model, and keep it there until the user exits this mode. In the window, the VlcPlayer object is bound to a ContentControl in the view in the same way as I described earlier. To get this working, I have to pass the VlcControl object between the view models of my camera view and the fullscreen view. This requires setting the VlcControl property of the previous view model to null, so it is no longer bound to the inactive view. This approach worked fine with older versions of xZune.VLC. Unfortunately, in new versions it causes a NullReferenceException in the ThreadSeparatedImage class on the SeparateThreadDispatcher property, or in the VideoLockCallback and VideoFormatCallback methods of VlcPlayer when accessing the DisplayThreadDispatcher. It looks like the VlcControl's DisplayThreadDispatcher property is set to null at some point when a VlcPlayer object is rebound to a ContentControl in another view.
Any idea how to solve this issue?
Hi.
I experienced the same problem and unfortunately there is nothing you can do without code fixes.
First, each SeparateThreadDispatcher.Invoke() call should check that SeparateThreadDispatcher is not null. I also believe the check that InternalImageControl is not null should be moved inside SeparateThreadDispatcher.Invoke().
If someone does this fix, I'd recommend also having a look at HandleManager: it can sometimes throw an IndexOutOfRangeException when a new IVlcObject is added. I don't know what causes that issue, but a ConcurrentDictionary helps a lot; it cannot be used in projects below .NET 4.0, though.
After these modifications the issue seems to be gone. I'd like to share my changes to the project, but I didn't care about compatibility, and it's done only for .NET 4.5 using C# 6 features.
If you want to use VlcPlayer from C# code rather than XAML (i.e. you will not add VlcPlayer to the visual tree), please see the wiki page [Use VlcPlayer with other controls](https://github.com/higankanshi/xZune.Vlc/wiki/Use-VlcPlayer-with-other-controls).
If you use new VlcPlayer() rather than new VlcPlayer(bool) or new VlcPlayer(Dispatcher), SeparateThreadDispatcher is only set after the VlcPlayer has been added to the visual tree.
So it will be null until you add it to the visual tree.
But if you use new VlcPlayer(bool) or new VlcPlayer(Dispatcher), VlcPlayer works in custom mode. If you render video with a ThreadSeparatedImage, use new VlcPlayer(true); VlcPlayer then works in separate-thread-image mode, and the VideoSource can only be used in the ThreadSeparatedImage. If you render video with an Image or ImageBrush, use new VlcPlayer(false) or new VlcPlayer(Image.Dispatcher), and VlcPlayer will create the VideoSource for the UI thread or your control's thread.
Ummm... I will add some null checks in my code.
|
GITHUB_ARCHIVE
|
What is a good amount of social shares per week / day?
As the title says. What is your idea of a good amount of social shares to be given per day or week or month for a website?
Are you talking about sharing on your Facebook wall? I would say 20% pimping your product and 80% relevant info.
If you are talking about getting Facebook URL likes, get as many as you can. There is no amount that's too little.
I don't really do much on my Facebook business page. Yes, social shares: like when someone goes through the AddThis widget on your site and shares your website or webpage. I'm sure no amount is too little!
Try to put the FB Like button on your conversion page and tell people to click the Like button so the Like goes to the home page. AddThis will not work for this, so you will need to make a button here: https://developers.facebook.com/docs/reference/plugins/like/
Thanks Francisco. I have tried that but have unfortunately had to give up, because my WYSIWYG website-making program, for whatever reason, can't accommodate it (it seems to have problems with widgets that are graphical, like PNG files). It can manage widgets contained within a box, so I'm limited to the little AddThis horizontal bar. My site and its pages get 5 to 7 social shares a day on places like FB, G+ and Pinterest; is this good?
Sounds like you're going to need new software for your website. There is no minimum or maximum number of Facebook likes; it all depends on your competition, which we don't know. Use Open Site Explorer to see what the competition has. It may take a long time to rack up shares.
In my experience, a baseline number of shares is completely reliant on the content being shared. The type of content is also indicative of the network it will be shared on. For example: we get more shares from LinkedIn when we produce "professional" content, and more Facebook shares when we share time-wasting content.
Sorry I could not be more specific. I would be happy to help if you are more specific about your brand/ target market.
From what I have read, I personally believe the Share button is more powerful when it comes to SEO, but I don't know that for a fact. Does anyone have input on this subject?
One thing I do know is that it's a lot harder to gain a 'Share' than a 'Like', and I'm talking about these buttons being on-site, maybe in a blog post.
If you gained 100 shares and 100 likes which one would generate the best signals?
Thanks
Do any of you have recommendations on basic social analytics software? We are looking for something that's not to expensive, that's easy to use, and measures the basic stuff, hashtag mentions, tweets, likes, etc.
Any recommendations would be greatly appreciated.
Margarita
Our web development team members are having a debate on when and where to add various social media buttons / links. The crux of the debate is when does adding various calls to social media services become detrimental to page load time.
We have really been focusing recently on optimizing page load speed using YSlow and Google Page Speed tools to tweak templates and pages as much as possible.
Our webmaster argues that we should be selective on what pages we add Google+, Facebook LIKE, Twitter/Follow buttons. e.g. just 'important' pages such as the home page. Here is a forum link that speaks of this... http://www.webmasterworld.com/google/4358033.htm
Bottom line? What are your thoughts on when and where to place various social media buttons and links for Google+, Facebook LIKE and Twitter/Follow.
P.S. We have pretty much agreed to use the AddThis button set on our clients' blog pages, which includes FB LIKE and SEND, Google+, Twitter and the AddThis general share buttons.
I am using AddThis on my site: http://www.shipoverseas.com/
I was wondering what your opinion is regarding Facebook Likes vs. Facebook Shares. Likes seem easier to get, and I know that neither Likes nor Shares are part of Google's algorithm.
I can change those Like buttons to Share buttons, but it would be silly to have both.
Hi, we're in the process of creating multiple domains and sites for each country we plan to operate in. Should each site have its own set of Facebook, Twitter and YouTube accounts, etc.? I realise this will create more work in administering each site and its corresponding set of social media accounts, but does it have benefits? As we'll need to do it anyway in different languages, does separating the English-language accounts make any difference or carry any benefits?
Thanks,
Let's say I have a page on my site with the url www.mydomain.com/page1.php?id=234.
Let's say it can be accessed by the url www.mydomain.com/page-this-is-a-keyword-rich-url
And let's say that the second example is what I have set as my rel-canonical.
I wondered what would happen when people submitted the non-canonical url to twitter. We know that Twitter shares count for something in SEO. I didn't want things to go to waste if people were landing on my short urls and sharing them on Twitter.
Well, tonight, I shared one of my own urls on Twitter. I was accidentally on the short one (not the canonical), but when I shared it via addthis the long one was shared. So, Addthis must read the canonical and use this to share.
Very cool. At least to me. I may possibly be the only person on the planet who understands what I just wrote, but this is a neat discovery for me.
Does anyone know of a tool that can get you the total, or close to it, social shares an entire domain has received?
I know of...
http://www.sharedcount.com/ - but this only gets shares for the entered URL not the entire domain.
http://www.pagesort.com/ - this seems to dig a little deeper but not deep enough.
I know Open Site Explorer gets some data as well, but it still seems very limited.
|
OPCFW_CODE
|
What's more, you will discover additional advantages associated with our services. Take a quick look at them below:
Our professional Java assignment help can increase your general knowledge of programming concepts in Java. All you must do is provide us with your Java assignment questions and a deadline by which you want the solutions.
A system that makes good use of the available time and money for managing restaurants. One can feed in the number of people working and also enable specific roles. The system can be used effectively for assigning jobs and roles to different people.
Certainly! I'm here to help you, and I am not merely going to assist you with Java project development; I can even share a hundred different ideas.
Our experts deliver Java assignment help to students until they reach full satisfaction. Assignment Desk renders quality assignments to make you score the best among all other students. We also let you raise queries and check the progress of your report.
Java is a general-purpose programming language designed to have few implementation dependencies. The lessons that cover how the Java programming language is compiled and structured are:
Building a user interface: Creating applets leads to developing the user interface. Beyond compiling and running a program, the application is expanded to provide a user interface using Java Foundation Classes that handle user events.
The Java assignment help services provided by Instant Assignment Help have saved many students worldwide from the embarrassment of assignment rejections and poor grades from their professors. To learn the benefits we provide to college-goers, read on:
We have a pool of 2500+ experts from all over the globe. They are highly qualified and well experienced. All experts have teaching experience and a good command of programming topics. Some of our experts are as follows:
You can earn money if your friend uses the referral code to place an order and makes the payment for it. You will get a partial share of the amount on every successful assignment completion. Spread the word on Facebook, Google and Twitter.
If you lack programming skills and are unable to finish your Java assignment on your own, you may need help with Java programming assignments from the subject-oriented writers working at Instant Assignment Help.
If this is your first time running this command, it may take some time to download all required dependencies to convert your project to an Eclipse-style project.
We can help with any Java topic you are assigned. We have professional tutors in three principal directions:
We have helped many college students write Java assignments that ultimately fetched top grades. You can be assured of a satisfactory customer experience if you sign up with us.
|
OPCFW_CODE
|
How best to design a cookie banner for acceptance?
We have taken legal advice and we must now show three buttons on our cookie popup with 'equal' weight and color.
We do not want to mislead anyone with dark patterns, but I do want to make sure that those users who don't care, and just want the banner to 'get out of my way', predominantly click the accept button.
This presents two questions:
What layout is best for the buttons? Should I put the 'accept' button on the left or the right or in the middle? My gut is right (on LTR reading language sites).
Am I better letting users read the page by placing the banner at the bottom? Or do I make a centralised modal that forces users to choose an option?
You ask how to make users click the "Accept" button without reading the text. This is a dark pattern, regardless of your company's stated intentions. While this wish is understandable from the company's point of view, it is not from the users' and legal perspectives (in the EU). You are on the User Experience site here.
The first question should be whether cookies and user-tracking technology are really necessary. Answering it has advantages for both sides, users and company. From a user's POV, trust is part of the UX. Doing without unnecessary observation, and processing gathered data responsibly, increases users' trust. The more trust, the better the UX, and the more conversions. Secondly, the less data your company processes, the fewer legal considerations and the less computing power and memory are needed, so lower costs.
You find more thoughts on this in the Open Source Design Community.
After these considerations move on to the next step:
If your users are located in the EU, then the EU GDPR and the ePrivacy Regulation (EPR) apply; see this article. This means the privacy-protecting option has to be the default one, not the "Accept all" option. The privacy-protecting options are "Accept only (technically) necessary cookies" or "Deny all".
So, if your company wants you to implement it properly, my take is:
Don’t use a modal. As a user, I feel annoyed twice: from the banner and from the modal.
Let the banner appear immediately. No silly animations with sliding in banners etc. They eat time and therefore suck even more. Some users might jump off here and were lost for the company.
Show the banner even if the user blocks scripts. This means: No JavaScript etc. for this. There are pure HTML and CSS solutions, web frameworks and CMS (e.g. Laravel and WordPress plug-in).
Make the banner accessible. The WCAG are your friend.
Offer three options:
a) Accept only necessary cookies
b) Accept all
c) Deny all
Option a) shall be the first, so impatient and annoyed users find the privacy protecting option quickly.
This order is based on users‘ and company’s goals.
In option a) use positive wording. Instead of "Deny non-necessary cookies" (bad feelings), use empowering words: "Accept necessary cookies only" (good feelings).
Make option a) the default one - interaction-wise and visually. The solution depends on your site’s design directives, e.g. by highlighting it.
What layout is best for the buttons? Should I put the 'accept' button on the left or the right or in the middle? My gut is right (on LTR reading language sites).
Most I have seen place it on the left (first item). Gut feeling plays no role here; you should test with users.
Am I better letting users read the page by placing the banner at the bottom? Or do I make a centralised modal that forces users to choose an option?
This will depend on your legal team's advice. If they recommend that access be granted only after accepting cookie consent, then you should use a modal to interrupt the interaction. Otherwise, a bottom banner (as many sites use) should be enough.
|
STACK_EXCHANGE
|
A security researcher discovered a series of flaws, collectively tracked as FragAttacks, that impact WiFi devices sold over the past 24 years.
Belgian security researcher Mathy Vanhoef disclosed the details of multiple vulnerabilities, tracked as FragAttacks, that affect WiFi devices and expose them to remote attacks. Some of the flaws he discovered date as far back as 1997.
The vulnerabilities could be exploited by an attacker within a device's WiFi radio range to steal information from it and to execute malicious code. Devices were exposed to the FragAttacks even when using WiFi security protocols such as WEP, WPA, and WPA3.
The issues impact all Wi-Fi security protocols. According to Vanhoef, more than 75 tested Wi-Fi devices were affected by at least one of the FragAttacks flaws, and in the majority of cases the devices were vulnerable to several of them.
“This website presents FragAttacks (fragmentation and aggregation attacks) which is a collection of new security vulnerabilities that affect Wi-Fi devices. An adversary that is within radio range of a victim can abuse these vulnerabilities to steal user information or attack devices. Three of the discovered vulnerabilities are design flaws in the Wi-Fi standard and therefore affect most devices.” reads the website FragAttacks. “On top of this, several other vulnerabilities were discovered that are caused by widespread programming mistakes in Wi-Fi products. Experiments indicate that every Wi-Fi product is affected by at least one vulnerability and that most products are affected by several vulnerabilities.”
The expert discovered three design flaws in the 802.11 standard that underpins WiFi along with common implementation flaws related to aggregation and fragmentation.
The vulnerabilities affect all major operating systems, including Windows, Linux, Android, macOS, and iOS. All of the access points tested by the experts were also found vulnerable, including professional APs. Vanhoef pointed out that only NetBSD and OpenBSD were not impacted, because they do not support the reception of A-MSDUs.
The following video shows three examples of how a threat actor can exploit the vulnerabilities.
“As the demo illustrates, the Wi-Fi flaws can be abused in two ways. First, under the right conditions they can be abused to steal sensitive data. Second, an adversary can abuse the Wi-Fi flaws to attack devices in someone’s home network.” continues the expert. “The biggest risk in practice is likely the ability to abuse the discovered flaws to attack devices in someone’s home network. For instance, many smart home and internet-of-things devices are rarely updated, and Wi-Fi security is the last line of defense that prevents someone from attacking these devices. Unfortunately, due to the discover vulnerabilities, this last line of defense can now be bypassed. In the demo above, this is illustrated by remotely controlling a smart power plug and by taking over an outdated Windows 7 machine. The Wi-Fi flaws can also be abused to exfiltrate transmitted data.”
Summarizing, the design flaws discovered by the expert are:
- CVE-2020-24588: aggregation attack (accepting non-SPP A-MSDU frames).
- CVE-2020-24587: mixed key attack (reassembling fragments encrypted under different keys).
- CVE-2020-24586: fragment cache attack (not clearing fragments from memory when (re)connecting to a network)
while the implementation vulnerabilities are:
- CVE-2020-26145: Accepting plaintext broadcast fragments as full frames (in an encrypted network).
- CVE-2020-26144: Accepting plaintext A-MSDU frames that start with an RFC1042 header with EtherType EAPOL (in an encrypted network).
- CVE-2020-26140: Accepting plaintext data frames in a protected network.
- CVE-2020-26143: Accepting fragmented plaintext data frames in a protected network.
and other implementation flaws found by the researcher are:
- CVE-2020-26139: Forwarding EAPOL frames even though the sender is not yet authenticated (should only affect APs).
- CVE-2020-26146: Reassembling encrypted fragments with non-consecutive packet numbers.
- CVE-2020-26147: Reassembling mixed encrypted/plaintext fragments.
- CVE-2020-26142: Processing fragmented frames as full frames.
- CVE-2020-26141: Not verifying the TKIP MIC of fragmented frames.
The expert notified the affected vendors and gave them nine months to address the issues.
Vanhoef also released a research paper and an open source tool that can be used to determine if Wi-Fi clients and access points are vulnerable to FragAttacks.
The post FragAttacks vulnerabilities expose all WiFi devices to hack appeared first on Security Affairs.
|
OPCFW_CODE
|
Simple Dialogue [Listening] - [Banking]: Reporting Wrong Charges
A: I want to contest some charges I see on my account.
B: Do you think your account has been compromised?
A: I think I am a victim of identity theft.
B: What proof do you have?
A: I did not buy anything in Miami last week.
B: How can we be sure?
A: I work here in New Jersey.
B: We will open an investigation.
A: How long will that take?
B: The investigation can take up to six weeks.
A: I need my money now though.
B: The charges you contest will be frozen.
|
OPCFW_CODE
|
Computational Biology Assignment Help UK
Computational Biology, in some cases described as bioinformatics, is the science of utilizing biological information to establish algorithms and relations amongst numerous biological systems. Journal of Computational Biology is the leading journal in the analysis, management, and visualization of cellular details at the molecular level. It provides peer-reviewed posts focusing on unique, advanced approaches in computational biology and bioinformatics. The term ‘computational biologist’ can include numerous functions, consisting of information expert, information manager, database designer, statistician, mathematical modeler, bioinformatician, software application designer, ontologist– and much more. Exactly what’s clear is that computer systems are now vital parts of modern-day biological research study, and researchers are being asked to embrace brand-new abilities in computational biology and master brand-new terms (Box 1). Whether you’re a trainee, a teacher or someplace between, if you progressively discover that computational analysis is necessary to your research study, follow the guidance listed below and begin along the roadway to ending up being a computational biologist!
Broadly speaking, computational biology is the application of computer technology, data, and mathematics to issues in biology. Computational biology covers a large range of fields within biology, consisting of genomics/genetics, biophysics, cell development, biochemistry, and biology. It makes usage of tools and methods from numerous various quantitative fields, consisting of algorithm style, maker knowing, Frequentist and bayesian stats, and analytical physics. Much of computational biology is worried about the analysis of molecular information, such as biosequences (DNA, protein, or rna series), three-dimensional protein structures, gene expression information, or molecular biological networks (metabolic paths, protein-protein interaction networks, or gene regulative networks). A wide array of issues can be resolved utilizing these information, such as the recognition of disease-causing genes, the restoration of the evolutionary histories of types, and the unlocking of the intricate regulative codes that turn genes on and off. Computational biology can likewise be interested in non-molecular information, such as eco-friendly or medical information.
Graduate programs in computational biology offer Ph.D. degrees in the development and application of theoretical and data-analytical methods, mathematical modeling, and computational simulation techniques for the study of biological, behavioral, and social systems. The field is broadly defined and rests on foundations in computer science, applied mathematics, animation, statistics, biochemistry, chemistry, biophysics, molecular biology, genetics, genomics, ecology, evolution, neuroscience, visualization, and anatomy. Computational biology is distinct from biological computation, a subfield of computer science and engineering that uses bioengineering and biology to build computers, but it is closely related to bioinformatics, an interdisciplinary science that uses computers to store and process biological data.
Computational Biology and Chemistry publishes original research papers and review articles in all areas of the computational life sciences. High-quality contributions on nucleic acid and protein sequence research, molecular evolution, molecular genetics (functional genomics and proteomics), the theory and practice of chemistry-specific or biology-specific modeling, and the structural biology of nucleic acids and proteins are particularly welcome. Exceptionally high-quality work in bioinformatics, systems biology, cybernetics, ecology, environmental science, computational pharmacology, metabolism, biomedical engineering, public health, and statistical genetics will also be considered.
Computational biology research at IBM spans life-sciences research at the interface of information technology and biology. This research is typically conducted in collaboration with partners in universities, medical research centers, biotechnology companies, and the pharmaceutical and healthcare industries. In a computational biology major you will learn to analyze and interpret biological phenomena using statistical and mathematical models and computational tools, and to design and analyze the algorithms behind those models and tools.
Computational biology addresses biological questions with computational methods and algorithms, from the mathematical modeling of biological systems to the processing, analysis, and integration of large amounts of clinical or experimental data. A number of research groups at the University of Lausanne work in the field of computational biology. All are affiliated with the SIB Swiss Institute of Bioinformatics and collaborate closely with other SIB groups. Studies in computational biology combine topics from the disciplines of computer science and biology, applying techniques and concepts from computer science, applied mathematics, and statistics to problems that arise in biology. The main focus of the field is on developing data-analysis methods grounded in computer science and statistics to tackle biological research questions without the support of a physical wet lab.
A few of the topics covered by computational biology specialists:
algorithms for reconstructing phylogenies, computational systems biology, data mining algorithms, biological and biomedical sciences, data science, genetics and genomics, sequencing and deep sequencing (methods, applications, and algorithms), genomics and comparative genomics, prokaryote and viral genomes, metagenomics, data, information, and knowledge management, concepts in bioinformatics, data integration II (the challenges of combining several kinds of data), NLP and information retrieval, algorithms (patterns and concepts), hidden Markov models, position-specific matrices, non-coding RNAs (siRNAs, miRNAs), rapid sequence-search algorithms,
suffix trees, the Burrows-Wheeler transform, hashes, population genetics I, phylogenetics I, genome biology, state-of-the-art review of computational techniques used in CBB, dynamic programming, pattern-matching algorithms, clustering algorithms, tree-construction algorithms, macromolecular structure, graph algorithms, modeling and simulation techniques, numerical methods, identification and classification of open-ended problems, project implementation, evaluation methods, feedback-driven project development, project presentation and demo techniques, R usage, analyzing large datasets in R, randomization, false discovery rate, introduction to bioinformatics, algorithms, Perl, sequence alignment, phylogenetic trees, protein structure, RNA secondary structure, microarrays, mass spectrometry, hidden Markov models, L-systems, information theory in natural computing, genomes, biological sequence analysis, gene finding,
RNA folding, sequence alignment, genome as
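Several of the topics listed above (dynamic programming, sequence alignment) meet in one classic algorithm. As an illustrative sketch, here is a minimal Needleman-Wunsch global alignment score in Python; the scoring parameters (match +1, mismatch -1, gap -2) are arbitrary choices for the example, not taken from any particular course:

```python
def nw_score(a: str, b: str, match: int = 1, mismatch: int = -1, gap: int = -2) -> int:
    """Needleman-Wunsch global alignment score via dynamic programming."""
    # dp[i][j] = best score aligning a[:i] with b[:j]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        dp[i][0] = i * gap          # align a[:i] against only gaps
    for j in range(1, len(b) + 1):
        dp[0][j] = j * gap          # align b[:j] against only gaps
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
    return dp[len(a)][len(b)]

print(nw_score("GATTACA", "GATGACA"))  # 5: six matches, one mismatch
```

Real aligners add a traceback to recover the alignment itself, plus affine gap penalties and substitution matrices; this sketch only computes the optimal score.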
Computational biology assignment help services are available 24/7 from computational biology experts and teachers, to help students solve typical computational biology problems. Biology homework help services at Helpassignment.uk are reliable and very affordable, helping you earn an A+ grade. Post your requirements at Helpassignment.uk to get instant computational biology assignment help. Computational biology online tutoring sessions can be scheduled 24/7 with certified computational biology online tutors.
Our computational biology homework help is available 24/7:
Overview of molecular biotechnology, chemical and biological foundations, molecular biology and pathways, DNA and recombinant DNA technology, prokaryotes, eukaryotes, proteins, health applications I and II, plant agriculture, computational biology and bioinformatics, structural bioinformatics, modeling and simulation of biological networks, and computational sequence analysis.
One of the blogs that I enjoy reading the most is HitCoffee, partly because it is very well written by a fellow geek, but mostly because I can get several topics sparked by just one of the posts there. A recent post about the tango that developers, QA and project managers dance, made me think about what I have learned about software development during my career and I commented…
Every programmer that I have met thinks that they are an awesome software designer; however, I would venture to say that only about 10% are truly good at it.
Software developers have an amazing capacity for arrogance in general. They really seem to believe that everything would be better if they were running the company. I’ve met MBAs with more humility than a lot of programmers… and I’m thinking of some programmers around here that I really like!
I wrote my first line of code when I was about nine years old. Now it is probably pretty common for a kid that young to customize their MySpace or even write a full HTML page, but back then it was not. I was lucky to attend a school that had IBM clones we were taught how to use. One of the surprising things about the U.S. education system to me was that even though it had the resources, the computer classes I took here were actually way behind what I had learned in Colombia at the high school level. I will someday write more about that disparity, but I just wanted to establish some credibility here: when it comes to computers and writing code, I have been around for quite some time.
We all think of doctors and lawyers as the ultimate arrogant professions, and while some of them probably are, a lot of people in the IT field have a god complex. I have met several programmers that like to use the term programming god. Some of them truly think of themselves as better than others simply because they can read computer code, or have memorized every single intricacy and function of a programming language.
I have no idea why I was spared the fate of becoming arrogant about programming. I do think I am above average when it comes to programming, and I am also excellent at tracking down and fixing bugs. That does not make me a better person, just someone with patience and good logic. I do, however, respect people in other professions, both inside and outside my field.
The challenge that companies face is that communication breaks down as soon as their departments stop talking to one another. I was hired at one company because I could potentially bridge a broken relationship between marketing and development, and while I was able to facilitate the running of the projects I was involved in, the relationship never seemed to get better. You had web developers trying to be designers and vice versa.
The good software architects that I have met were actually poor programmers. Big-picture people tend to forget about either the details or the users. Teams seem to make this issue less painful, but all parties need to be truly involved and willing to compromise.
I learned early in my career that the more I simplified the software I wrote, the more people liked the features I implemented. Color coding has always been something people respond to because it gives them another way to memorize things. I also try to apply the little principle that computers are better at remembering things, but humans are still superior at making decisions. When I have to leave something up to the user, I balance it on that scale.
Almost every programmer that has been in a company for a while thinks they understand the business side of things. However, the business side of any company is a living entity that constantly changes, the bigger the company the bigger the changes. Users are very clever and will use features in ways that they were not intended… do developers then adapt them to do the right thing or remove them?
That is when I think some of the disconnect happens. In situations like this, a consensus between the development team and the business should ultimately dictate the direction. Most of the time, only one of the sides dictates the direction, and that leads, in the best scenario, to friction, and in the worst, to lost company productivity.
Software development is a lot like single life: you date different companies and try to do your best for a while, but eventually you want to move on. You feel like you have so much to offer, but you are not appreciated or listened to, and could do so much better. The good relationships come when you have commitment and truly want to marry a company, when you start to think about what would be best for the company and not what is going to make you seem smart and clever. A programmer can be revered as a genius, but if the software they write is not usable or does not solve the problem, in the end it is a failure.
Paste In URL Field Not Working on Linux Wayland
[X] I think my problem is NOT with youtube-dl
[x] I've verified that I'm running yt-dlg 1.X.Y
[x] I assure that I am using the latest version of youtube-dl
[x] Searched bugtracker
[x] I've read the FAQs file
What is the purpose of your issue?
[x] Bug report
What operating system do you use ?
Fedora 38, Linux 6.2.15-300.fc38.x86_64
Wayland, KDE Plasma 5.27.4, Qt 5.15.9
Python 3.11.3
wxPython 4.2.0
yt-dlg 1.8.5 from pip
List of actions to perform to reproduce the problem:
Try to paste with Ctrl-V, menu, or middle-click
nothing happens
Similar symptoms to #122 but I checked paste works with the basic wxpython sample given in https://github.com/wxWidgets/Phoenix/issues/1391
So yt-dlg is doing something weird (maybe #123 regressed this particular environment? I am surprised there is a need for the custom _paste_from_clipboard code at all...)
As a workaround, commenting out the custom paste handler in https://github.com/gitgeoff/youtube-dl-gui/blob/master/youtube_dl_gui/mainframe.py#L407 works :)
self._url_list = self._create_textctrl(
wx.TE_MULTILINE | wx.TE_DONTWRAP, # self._on_urllist_edit
)
No adverse effect that I can see.
Where did you install wxPython from:
from the system with dnf, or using pip?
According to this comment, wxPython needs to be compiled with Unicode support. Can you share the steps you followed to install yt-dlg?
I installed Fedora Workstation 38 and followed your workaround
sudo dnf install python3-wxpython4
pip install yt-dlg
and Ctrl-V works OK. See the video demo:
https://github.com/oleksis/youtube-dl-gui/assets/44526468/08766014-7151-48c9-89e4-eb83665bca71
Can you share more information to reproduce the error?
Wayland Primary Selection / Clipboard
It's about how Fedora 38 with KDE and Wayland implements the clipboard and primary selection
Primary Selection
Data sharing between clients
Clipboard/Selection Behavior
wl_data_device - data transfer device
In my testing, it works with the following settings:
System Settings -> General Behavior
Middle Click: checked
Clipboard settings: General Configuration
Selection and Clipboard: checked
Open yt-dlg with the GDK_BACKEND environment variable set:
GDK_BACKEND=x11 yt-dlg
You can watch the following demo video using wxPython with KDE and Wayland
https://github.com/oleksis/youtube-dl-gui/assets/44526468/f21b9801-1c7a-4677-9880-fd823ed8bdac
@eddy-geek Can you test yt-dlg version 1.8.6?
Install from GitHub
python3 -m pip install -U git+https://github.com/oleksis/youtube-dl-gui#egg=yt-dlg
Install from TestPyPI
python3 -m pip install -U --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple/ yt-dlg
Video demo:
https://github.com/oleksis/youtube-dl-gui/assets/44526468/1728028e-263f-4e00-9c5e-8ab120791b69
JetBrains RubyMine 2022.2 WIN + MAC Free Download Nulled Crack Ultimate Serial Key
JetBrains RubyMine 2022.2 Free Crack Ultimate Full Version
As usual, a big chunk of work went to reviewing and fixing reports and test cases submitted by our users. We've also improved the error messages for undefined method and method_missing, along with a number of integration improvements for Ruby, Git, Cucumber, WebStorm, ST3, and tmux. In addition, RubyMine makes working with RVM and Ruby 2.6.4 easier and more convenient, and we've added many new and improved language features for Ruby.
Adds support for macOS Catalina! From the developer forum: the current beta has a not-yet-final version of the latest JetBrains support for Mojave, but we are working on it; it is expected to be released next week. WebStorm 8.0+: support for the latest webpack CLI; see the command-line options in the help documentation for more details. We have added a new inspection called Missing type signature for assignment in the header of class methods. Inspections: a quick-fix for removing duplicate test cases in the unit test generator, and improved performance when working with large test files, parameters, or test methods. A quick-fix for duplicate code in the same place in a class and module. Tool tips are now shown in the editor via MessagesInCode when there are messages in the code. A quick-fix for removing unused tests. Remove unused imports in the settings. Remove unused variables. Fixed some issues with Go support in the settings, and a few issues with the Go imports and code model of the settings. Fixed files opened by the VCS tool dialog not being updated correctly when projects are refreshed. Fixed multiple issues with the indexer in the settings. Sped up the parsing of old files. Fixed an issue with the template lookup preference. Sped up language server reconnection after connectivity issues. Fixed issues with several Go quick-refactor commands (moves/fwd, adds function, fixes pragma, adds pragma, removes function, removes variable), with the Go messages, suggestions, implements missing, imports, extends, declares, introduces, interface, jumps, and lets commands, with Go imports in templates, and with the Go implements-extension command.
ScreenHunter Pro Final Release Crack 2022 Download
JetBrains RubyMine 2022.2 Free Crack Free Download
The main goal of JetBrains RubyMine 2020.2 is to implement best practices for Ruby development. This release comes with a bunch of tweaks, usability improvements, and bug fixes. For example, it improves navigation between class methods with the Method menu: you can navigate directly to the source code location from class methods, and the IDE helps you find the correct tool when it's needed. Among other features, it improves the handling of indentation when navigating between blocks on the same line. We're also working on keeping the IDE clean and tidy. This release makes it possible to hide the descriptions of methods and params; check the See & Hide Parameters and Method option in the Parameter view.
JetBrains RubyMine Full Version 2019.2 contains the best IDE experience for Ruby development. That is, it has fixed all the annoying issues and bloopers of previous releases. In addition to our top-notch Ruby development features, we also fixed a lot of bugs, improved the UI and added new features. Here are some of the most notable ones:
By activating the RubyMine plugin inside IntelliJ IDEA Ultimate, you can:
- Create a RubyMine development environment in seconds without going through any configuration.
- Start developing Ruby applications without installing Ruby first.
- Run Ruby applications installed outside of the IDE from inside IntelliJ IDEA.
IntelliJ IDEA development environment has received many improvements:
- Received a major rework of the bundled JetBrains Runtime, requiring IDEA to be restarted.
- Starting with this release, IDEA uses JetBrains Runtime 17 instead of the bundled JetBrains Runtime. Java 9 is now supported.
- A new side panel for refactoring, code navigation, and quick fixes became a top request from the community.
- Coding support is extended to a number of new languages and frameworks such as Spring and Angular.
- Performance improvements to the GUI and key features for code browsing and refactoring.
- More than 100 improvements to the RubyMine plugin.
What is JetBrains RubyMine 2022.2 and what is it for
Installed with an OpenJDK or its forks, such as JetBrains Runtime 17, the IDE always includes a custom, patched version of the JDK in its Bundle. This custom version of the JDK serves as a runtime for applications under development, and is different from the runtime that the IDE uses to execute your applications. If you develop applications with the IDE, JetBrains Runtime 17 is the one to use.
IDEA 2022.1 includes a new JEP that enables application execution to be sent to a different, custom, patched version of the JDK. This second runtime is used for applications under development, while the runtime of the IDE is the one used by running applications. If you develop applications with the IDE, JetBrains Runtime 17 is the one to use.
In several JetBrains IntelliJ IDEA versions, creating run configurations for cloud application servers leads to saving a cleartext unencrypted record of the server credentials in the IDE configuration files. If the Settings Repository plugin was then used and configured to synchronize IDE settings using a public repository, these credentials were published to this repository. The issue has been fixed in the following versions: 2019.1, 2018.3.5, 2018.2.8, and 2018.1.8.
IDEA 2022.1 is the first to be based on the JetBrains Runtime 17, a fork of the OpenJDK. Previous releases used JetBrains Runtime 11, based on JDK 11, which was the prior LTS (long-term support) release, dating from 2018. Note this is the runtime used by the IDE, not the runtime for applications under development, which is chosen by the developer.
The RubyMine team is pleased to release RubyMine 22.1 with many new features and improvements. JetBrains continues its push to serve mobile software engineers and small teams. This version will help you develop more robust applications and keep your team focused on the right thing. Key new features:
- Use your test results to provide feedback
- Get support for the latest Ruby versions
- Reduce time waiting for builds with faster Git integration
- Start a RubyMine WebSocket server
- Use RubyMine to analyze your code more easily
- Add more convenient shortcuts to common commands
- Improve the IntelliJ and Cmdshell editions by adding Kotlin support
- Improve the IntelliJ Cmdshell user experience
VariCAD 2022 V2.03 For Free Full Cracked Activation Code For Mac And Windows
What’s new in JetBrains RubyMine 2022.2
- Check out the RubyMine IDE documentation!
- Maven and Maven 3 support in RubyMine
- RubyMine offers many multi-lang and Ruby API compatible plugin as Maven dependencies
- RubyMine can deal with the configuration of some components in the Apache Maven 2 platform
- In RubyMine 2022.2, Ruby code completion in static methods and classes in the Java language is improved
JetBrains RubyMine 2022.2 System Requirements
- RAM 2GB +
- CPU 1 GHz
JetBrains RubyMine 2022.2 Pro Version Activation Number
JetBrains RubyMine 2022.2 Full Activation Number
Culture yeast from bottles
I have read that we can culture yeast from commercial bottle-conditioned beers. So couldn't we do the same with a bottle of our own beer, made from a brew that used a White Labs or Wyeast culture?
Sure, there's no reason you can't do this. Realistically, though, since you're talking about a beer you've brewed yourself, there are other times in the process where it is much easier to collect yeast.
For example, after primary fermentation is done, it is usually pretty easy (depending on your equipment) to collect some of the yeast cake deposited after fermentation. Or you could collect some fermenting wort during primary and save for reuse, or save a portion of a starter (if you use one).
However, if you want to culture a yeast from an older batch, or a batch of a friend's beer, it's entirely possible. The caveat is that you have to spend some time and effort to grow the yeast up to a usable quantity, and the surviving cells from which your new culture will derive may not be nearly as healthy as fresher yeast.
Great answer. So how do brewers maintain their healthy yeast? Do they wash it? And how many brews can I trust before washing, or should I just bottle all the yeast left over from a fermentation in small bottles?
@Custodian: see this topic about maintaining healthy yeast: https://homebrew.stackexchange.com/questions/23212/what-is-necessary-to-keep-yeast-happy-and-healthy
Commercial brewers generally do not wash yeast. They simply repitch the slurry a number of times, then order a new pitch from a yeast company who banks it for them.
Thanks for that, Denny. That seems like the easiest way. How many times can I repitch and still preserve the yeast character?
The number of times you can repitch depends heavily on many factors. These include the type of yeast, the type of fermentations the yeast goes through (wort composition, etc.), the brewer's treatment of the yeast (how stressful were the fermentations?), what you actually want it to taste like, and more. As such, there's no real answer to that question. Assuming you treated the yeast very well each time, 10 generations is an oft-quoted maximum.
I've done it, so it's definitely possible. You just have to bear some caveats in mind.
How many batches have been fermented with the yeast? Remember that some strains tend to mutate faster than others. I've heard that you can ferment 5-8+ batches with consecutive generations of the same yeast without any problems, but e.g. Hefeweizen yeast will not be viable for that many.
What's the ABV of the beer you're culturing the yeast from? As you probably know, stronger beers will contain yeast that's either dead or in poor condition. It's probably not worth culturing yeast from a barley wine. Darker and hoppier beers will further deteriorate the yeast, too.
What yeast was used for refermentation? There are strains specifically for bottle conditioning that some brewers use. Commercial breweries very often add a completely different strain for refermentation, sometimes even a lager yeast (despite the beer being an ale). Make sure it's the same strain that was used for fermentation; otherwise you can end up with something different than what you expected.
How difficult is it to obtain the yeast you're expecting to get? If it's not a rare/expensive strain, you might want to consider either buying a fresh packet or perhaps asking your local homebrewers if they don't happen to have some extra yeast cake in their fridge.
You need some equipment and time. A bottle will have relatively few viable yeast cells, not nearly as many as a fresh packet of liquid yeast. Therefore, ideally you should have a plate stirrer, a small flask (like 250ml), a large one (2l should be fine) and a lot of patience. The culturing takes no less than a week. You should also start with a low gravity beer, as the number of cultured cells will still probably be well under what you'd normally want to pitch. After your low gravity beer's primary is done, collect the yeast cake and use that to make your desired batch of awesomeness.
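To get a feel for why culturing from a bottle takes at least a week, here is a rough back-of-the-envelope sketch in Python. The numbers are illustrative assumptions, not measurements: a few million viable cells recovered from one bottle, a target pitch around 200 billion cells for roughly 19 L of moderate-gravity ale wort, and an optimistic doubling time on a stir plate:

```python
import math

# Illustrative assumptions (not measurements):
bottle_cells = 5e6        # viable cells recovered from one bottle
target_pitch = 2e11       # rough ale pitch for ~19 L of moderate-gravity wort
doubling_hours = 6        # optimistic doubling time on a stir plate

# Doublings needed to grow the bottle dregs up to a full pitch
doublings = math.ceil(math.log2(target_pitch / bottle_cells))
print(doublings)                         # 16 doublings
print(doublings * doubling_hours / 24)   # 4.0 days, ignoring lag and step-ups
```

Even under these generous assumptions you need about sixteen doublings, which is why step-ups through progressively larger starters (and a lot of patience) are unavoidable.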
I haven't got the yeast yet.
Yes, that's perfectly fine. However, the best way to conserve a strain of yeast you already have on hand is to make an oversized starter and save a portion of it, or to slant the yeast. This way you avoid most of the problems that come with reusing yeast from batches, like contamination and the pain of isolating a pure colony of yeast.
AVAMAR SOFTWARE INSTALLATION STEP BY STEP
Dell EMC Avamar Administrator is graphical management console software used to perform administrative tasks on an Avamar system from a certified client operating system such as Windows or Linux. Avamar itself is a client/server network backup and restore solution.
An Avamar appliance consists of one or more Avamar-certified nodes or servers; network servers and desktop clients back up their data to those nodes.
The Avamar appliance is typically installed on Dell PowerEdge servers: self-contained, rack-mountable, network-addressable devices that run the Avamar server software on the Linux operating system.
An Avamar system consists of the following components:
- Avamar Server
- Avamar Nodes
- Disk drive storage on the node
- Stripe on the disk drive
- Object on the stripe
The Avamar system has a data server that performs backups, restores, and consistency validation.
Below are the steps for installation:
Rack Mount Avamar Server
Provide Dual Power cable & Ethernet Connectivity
Power on Avamar Server
Initialize & Install Avamar Server Software
a) Network Configuration
Log in to the Avamar server with PuTTY
Enter the IP address
Enter the default username and password.
Login as : root
At the command prompt, type the following command:
To enter the IPv4 IP Configuration, press 1
- Press 1 again to enter the IPv4 Address and Prefix (for example 192.168.10.X/24 or 192.168.10.X/255.255.255.0).
- Press 2 to enter the IPv4 Default Gateway address.
- Press 4 when complete to return to the main menu.
- Press 3 to enter the DNS Settings
- Press 1 to enter the Primary Nameserver IP address. Both IPv4 and IPv6 addresses are supported.
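The prompt for the IPv4 address accepts either notation shown in the example above: a CIDR prefix or a dotted-quad netmask. The two forms are equivalent, which is easy to confirm with Python's standard ipaddress module (192.168.10.5 below is a hypothetical host filled in for the 192.168.10.X placeholder, purely for illustration):

```python
import ipaddress

# Hypothetical host address standing in for the 192.168.10.X placeholder.
cidr = ipaddress.ip_interface("192.168.10.5/24")
dotted = ipaddress.ip_interface("192.168.10.5/255.255.255.0")

print(cidr.network)   # 192.168.10.0/24
print(cidr == dotted) # True: /24 and 255.255.255.0 are the same prefix
```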
Go to Network devices and configure the network setting
Enter IP and hostname
Enter domain name and gateway
Transfer the Avamar package to the /usr/local/avamar/src directory using WinSCP,
then from /usr/local/Avamar/src to data01/Avamar/repo/packages
Check the preconfiguration parameters
Enter the IP address of the Avamar node in a browser
Now type https://<Avamar_node_IP>:8543/avi
Click on Install
Fill in the required information
Enter the credentials to finish the configuration
The Avamar installation will start
Installation completed successfully
Additional Steps :-
Editing the site name for ESRS
- Open a command shell:
a. Log in to the server as admin.
b. Switch user to root by typing:
- Type the following commands to launch the utility and change the site name:
avsetup_connectemc.pl --site_name=site_name
Where site_name is the name of the customer site.
- Disable and then enable ConnectEMC.
Enabling and disabling ConnectEMC provides detailed information.
- Restart MCS
Registering with ESRS :-
- Export the ESRS Gateway certificate:
a. Point a browser at https://esrs_gateway:9443
where esrs_gateway is the hostname or IP address of the local ESRS gateway.
b. Use the browser’s functionality to export the certificate.
For example, in Internet Explorer 11:
i. Click the lock icon in the URL field and select View Certificates.
ii. Click the Details tab.
iii. Click Copy to File and complete the steps in the Certificate Export Wizard.
- Copy the exported certificate to a temporary location on the Avamar server.
- Open a command shell and log in by using one of the following methods:
● For a single-node server, log in to the server as admin.
● For a multi-node server, log in to the utility node as admin.
- Switch user to root by running the following command:
- Back up the keystore by typing the following command on one line:
cp -p /usr/local/avamar/lib/rmi_ssl_keystore /usr/local/avamar/lib/rmi_ssl_keystore.bak
- Import the ESRS server certificate into the keystore by typing the following command on one line:
keytool -importcert -keystore /usr/local/avamar/lib/rmi_ssl_keystore -storepass password -file certfile
Replace password with the keystore password. Replace certfile with the name of the ESRS server certificate, including its
- Restart the MCS by typing the following command:
See also :- DellEMC ESRS
This will be the first of many posts I do about this topic.
For those who haven't heard, Mastodon is a social media platform similar in concept to Twitter (microblogging), but instead of one central authority and one team of administrators, it is a collection of servers called instances, each housing its local users and their data, that communicate with other instances through a shared protocol called ActivityPub. This makes for a very robust and fault-tolerant social network.
Mastodon’s default UI is very similar to Twitter’s TweetDeck, with four panels (Settings/Toot, Home, Notifications, Federated Timeline) where you can compose messages called toots (if you’re coming from Twitter: a tweet), which can be as long as 500 characters on some instances.
I host a general, furry-oriented Mastodon instance over at puppo.space, where I stay mostly active and communicate with other users both on the instance and out on the Internet. So far the instance has run smoothly over the last two months and I’m constantly looking for things to improve at the interface, backend, and infrastructure levels.
Being someone who values privacy and security above all else, I looked at everything I could possibly implement to provide the best security going forward, and this is what that looks like for me:
Cloudflare WAF –
By subscribing to Cloudflare’s Pro tier, your website benefits from Cloudflare’s Web Application Firewall, which protects your site from common attack vectors such as SQL injection by blocking malicious requests in transit, before they even reach your origin servers. Because Cloudflare stands in front of so many sites, it can detect 0-day attacks early and write WAF rules, in many cases faster than waiting for a vulnerability to be patched upstream. This improves security for many websites at once rather than waiting for each individual site to upgrade itself, improving Internet security overall.
Cloudflare is not the only provider offering a Web Application Firewall, a comprehensive list can be found in the links below.
Cloudflare CDN –
I decided to use Cloudflare right from the get-go because my origin servers’ pipes only go up to 100 Mbps. While my hosting provider offers DDoS protection, I had more faith in Cloudflare’s offering, since they were able to mitigate the largest DDoS attack in history, and from prior experience I liked what they had available. Cloudflare also runs the world’s fastest public DNS resolvers (1.1.1.1, 1.0.0.1), so it made sense to take full advantage and have them deliver my DNS entries as well.
DKIM, SPF & DMARC –
Since I manage my own email servers, I had to make sure that emails sent from Puppo Space made it to users’ inboxes instead of spam, so I implemented a standard Postfix mail server and configured it to automatically sign emails sent from puppo.space addresses with the domain’s DKIM key. This proves to other email providers that my server was the one that sent the message. SPF (Sender Policy Framework) let me restrict which servers are authorized to send email on behalf of the domain. DMARC let me tell other email providers what to do should they receive an email from my domain that fails either the DKIM or SPF test (or both): discard the message. I also configured the DMARC record with a reporting address so I can verify that emails sent to users are really coming from my servers.
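As a rough illustration, the three mechanisms end up as DNS TXT records shaped something like the following. The selector, key, and policy values here are illustrative choices, not the records actually published for puppo.space:

```text
; hypothetical zone entries -- selector, key, and policy are illustrative
puppo.space.                    IN TXT "v=spf1 mx -all"
default._domainkey.puppo.space. IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEB..."
_dmarc.puppo.space.             IN TXT "v=DMARC1; p=reject; rua=mailto:postmaster@puppo.space"
```

The SPF record authorizes only the domain's own MX hosts; `p=reject` in DMARC is the "discard the message" policy described above, and `rua` is the reporting address.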
SSL/TLS –
This is a gimme: any modern website should at least attempt to encrypt the communications between the client and the server. Going back to Cloudflare for a second, the connection between you, the user, and Cloudflare’s CDN network supports TLS 1.3, currently the most secure version of TLS available, with no known breaches or security vulnerabilities reported to its name despite active scrutiny and research over the past 8 years. By using Cloudflare I can give users TLS 1.3 between their device and the CDN. From Cloudflare down to the origin server the connection drops back to TLS 1.2, since Cloudflare does not yet offer TLS 1.3 to origin servers; however, strong TLS 1.2 ciphers and SSL certificates on the origin servers cover that last mile from Cloudflare back to the origin.
CAA –
Another security recommendation was to restrict which providers can issue SSL certificates for the domain. The point of this is two-fold. Certificate Authorities are trusted universally wherever they are installed, and because there are many of them out there, it is quite possible for a skilled attacker to break into one and have false certificates issued. By restricting which CAs can issue certificates for the domain, if a rogue authority did issue a certificate, your browser would display the big red insecure page because that CA is not in the domain’s CAA list. While the possibility of an attacker going to such lengths for a site like mine is negligible at best, I sleep easier at night knowing that not even a rogue self-signed root certificate installed by a hacker would be able to issue a working (green) SSL certificate for my domain.
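The record type behind this is the DNS CAA record. A policy of the kind described might look like the snippet below; the CA name and contact address are hypothetical, not the domain's actual policy:

```text
puppo.space.  IN CAA 0 issue "comodoca.com"
puppo.space.  IN CAA 0 issuewild ";"
puppo.space.  IN CAA 0 iodef "mailto:security@puppo.space"
```

`issue` names the only CA allowed to issue certificates, `issuewild ";"` forbids wildcard issuance entirely, and `iodef` tells CAs where to report policy violations.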
HTTP Public Key Pinning works in tandem with CAA: not only am I restricting which CAs can issue certificates for my domain, I am now telling the client to only trust my website if the SSL certificate sent to the user, and its chain, match a predefined list. While Chrome has deprecated this feature as of version 69, I have left it enabled. The big problem with pinning is that it is very easy to break your website should you fail to configure it properly. Since HPKP also has a configurable max-age, it is entirely possible for an administrator to accidentally set a max-age longer than their certificate has left, so you are required to configure a backup key that can be used should such an event occur. In my case, I pinned my dedicated SSL certificate and Cloudflare’s free SNI certificate; since the site sits behind Cloudflare at all times and I can’t upload certificates from Let’s Encrypt without a Business plan, those were the best options.
HTTP Strict Transport Security allows me to tell the browser not only to prefer HTTPS but to require it. This is another security feature not enabled by default due to its potential to break sites. Like HPKP, it too has a configurable max-age: say you told the browser to remember that your site requires HTTPS for 6 months (the TTL recommended by Cloudflare), but after 3 months you need to use plain HTTP on a subdomain covered by the policy. The browser is unfortunately going to break that site, and even if you wipe the setting from the server, users will have to manually remove the entry in their browsers before they can access the site again.
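In nginx terms (that the origin runs nginx is my assumption; the equivalent exists for Apache and others), the six-month policy described above comes down to a single response header, where 15552000 seconds is 180 days:

```nginx
add_header Strict-Transport-Security "max-age=15552000; includeSubDomains" always;
```

The `always` flag makes nginx attach the header to error responses too, not just 2xx/3xx ones.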
While I realize that all this added security raises the chances that I could introduce a change that breaks the site, I think in the grand scheme of things it’s better to be over-prepared than to have something come up months later that could have been easily prevented.
Below I’ve included reports from a few of the scanners I used to bring site security up to where it is, which you may feel free to review for yourself. I also challenge you to plug in other sites that you use daily and see where they stack up. Remember that just because a site doesn’t score 100% doesn’t necessarily mean it is dangerous or bad; different sites have different security needs.
This article is designed to help those who want to migrate their Windows Server 2003 TS License Server from one machine to another. We recommend that you read through the instructions once before beginning the migration.
License server migration is an added feature in Windows Server 2008 R2. In Windows Server 2008 R2, when you right-click on the server name you will see the ‘Manage RDS CALs’ option. After selecting this, the ‘Manage RDS CALs Wizard’ guides you through the migration process. But to migrate a Windows Server 2008 license server, you need to follow the same steps as mentioned in this post.
The migration of your license server involves three stages. First, you must activate the new license server. Next, you deactivate the old server. Lastly, you move all the licenses from the old server to the new server. To do this, you will need to contact the Microsoft Clearinghouse by telephone. You should be prepared with the paperwork for the original TS licenses, as this data needs to be provided to Clearinghouse personnel. If the original paperwork is lost, contact your Microsoft TAM (Technical Account Manager) to obtain copies.
To migrate your license server:
Is it possible to uninstall/remove the TS CALs that are already installed on a TS licensing server? Both the TS CALs and the licensing server are Windows 2008.
You can either contact the Clearinghouse for this, or use the WMI APIs:
uint32 UninstallLicenseKeyPack(
    [in] uint32 ProductVersion,
    [in] uint32 ProductType,
    [in] uint32 LicenseCount);

uint32 RemoveLicensesWithIdCount(
    [in] uint32 KeyPackId,
    [in] uint32 LicenseCount);
Hope that helps!
Does this migration apply to Windows Server 2000 to Windows Server 2003? Is there anything to watch out for, since they are not the same OS?
No, with all the pre-requisites mentioned in the post, just call the clearinghouse. They will guide you through the process.
Hi, from what I have been reading, if you want to migrate the licenses from one server to another, the OS should be the same. Is that right?
Right now I see in the "Terminal Services Licensing" console that we have unlimited licenses on the old server, which runs Windows 2000, and I would like to move them to another server running Windows 2003. Can I do that following the steps in this blog, or do I need to buy new licenses, or what should I do?
Thanks in advance,
Can you clarify what paperwork is being referred to in Step 3, #3b: "Paperwork for the original TS licenses"?
I need to migrate our TS CALs, but the existing CALs were purchased before I joined the company and I haven't had luck tracking down any paperwork.
@Leizer: Yes you can follow the steps mentioned in the post to solve your issue.
You need to buy new CALs only if you are also upgrading your terminal servers. For accessing a Windows Server 2003 terminal server you need Windows Server 2003 TS CALs.
Else you are good to go:).
@Graeme: Here, paperwork refers to all the details of the CAL pack, such as the agreement number, the type of CAL pack, etc.
I cannot locate the original TS CAL paperwork mentioned in step 3.b.
I have installed a new Windows 2003 server that will also run licensing. How do I migrate to TS Cals to this server?
The older licensing server is still working, but we want to migrate to the newer server.
Any help is appreciated.
Do you have the license key pack IDs of the installed licenses?
I'm confused. On step 1 #1, it says "setup the VM guest as a Windows 2003 TS license server".
Should it be "Setup a Windows 2008 R2 as a license server?"
@Jessica: Yes, you're right. It should be: "set up a machine running Windows Server 2008 R2 or Windows Server 2008 as a license server".
Thanks for pointing out. I will get that fixed.
I have a Win2K3 licensing server with 50 2K3 TS CALs installed. We have purchased a Win2K8 server to replace the 2003 one. Do I purchase new 2K8 CALs, or do the steps above apply in this scenario?
If you're planning to replace your TS with Windows Server 2008 as well, then you definitely need to buy Windows Server 2008 TS CALs.
Can a client with a 2008 TS device CAL connect to a W2K3 TS server?
More precisely: if I now have 100 TS CALs for 2003 with Software Assurance, I install a 2008 TS (RD) license server, and I upgrade the 2003 TS CALs to 2008 TS CALs, can my clients connect to both 2003 and 2008 terminal servers? Or would I suddenly need 200 CALs? And within a year, once all 2003 servers are phased out, 100 CALs again?
Maybe, however note that there is a serious inclusivity issue here; hardware tokens are far from free of cost. I think suggesting that one can only contribute to nixpkgs safely if one owns such a token is a bit tone deaf when a contributor is complaining about their lack of phone meaning they cannot contribute.
Also consider the threat model here; the point of 2FA is to require both factors for a login. Assuming you don’t store your passwords as plain text on the device you use to sign in, these are obviously still two factors - a knowledge factor (your encryption password) and an ownership factor (the device used to store your TOTP key).
This is weakened by malware being able to exploit a single device, and hence extracting your knowledge and ownership factor via keyloggers, but as @7c6f434c says, once you have that level of access to a device you also have access to browser session storage and therefore free-reign over account access anyway, so your suggested approach doesn’t add additional protection here.
Having a third device is great, but I don’t think it offers any direct protection against a threat that using TOTP on an end user device doesn’t, specifically for systems that use session cookies such as GitHub.
For other systems hardware tokens are an obvious plus (e.g., client certificates for SSH access), and they are more resistant to malware, but under reasonable assumptions for web access TOTP on a laptop should perform similarly against a dedicated attack, and completely stop password phishing and such.
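For what it's worth, the TOTP "ownership factor" discussed above is a very small algorithm: an HMAC over a time-step counter, truncated to a few digits (RFC 6238, built on RFC 4226's HOTP). A minimal sketch using only the Python standard library:

```python
import hashlib
import hmac
import struct
import time


def totp(secret, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP: HOTP (RFC 4226) applied to a time-based counter."""
    if for_time is None:
        for_time = time.time()
    counter = int(for_time) // step                       # completed time steps since epoch
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation (RFC 4226, section 5.3)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# RFC 6238 test vector: SHA-1, ASCII key "12345678901234567890", T = 59 s
print(totp(b"12345678901234567890", for_time=59, digits=8))  # -> 94287082
```

The point being: the security of TOTP comes entirely from where the shared secret is stored, not from the computation, which is why the storage device (phone, laptop, hardware token) is what the whole debate is about.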
This is false. My phone also happily runs arbitrary applications, at least to the extent my computers do. Honestly it’s easier there than on NixOS, I need to write a whole package almost every time I want to “just run something”. On my phone I just need to tick a box saying “allow apps from this location”.
Besides that, claiming that Apple or Google vetting applications makes their platforms safe is definitely a fallacy. The mobile app stores have been known to distribute malware in many wondrous ways, from taken-over ad delivery networks injecting malware (and executing it, because clearly they're designed for maximum security, not profit) to simply missed viruses in submitted binaries built on malware-ridden Xcode installs. Nor is this confined to rarely used apps: WeChat and Twitter, well among the most used, have both been compromised in the past.
Neither is perfect, of course. I rely on open source contributors and software that I also can't perfectly track, all of which could try to do evil, so arguably it's inherently broken. Nixpkgs is at least stricter about build processes than either Google or Apple, but ultimately trust is a difficult topic.
Personally, I trust this particular instance of the software delivery mess a bit more, especially because my computers have declarative, reproducible configurations that I can manage - I feel that I can know my attack surface here. Not on my phone.
Hello all -
I'm new at this, so please bear with me.
I've been tasked to add a second WLC to a customer's network. They currently have a 4402-50 supporting 47 access points. The customer purchased a 3750G with integrated WLC (50 AP) anticipating the addition of several more APs, but the addition is on hold. In the meantime, the customer would like the new WLC to help balance the AP load. The existing WLC is running 188.8.131.52. The customer has a mix of 1130, 1240, 1140 and 1250 APs and is starting to use 802.11n features.
Both controllers are now on the same subnet 10.1.113.x. I've upgraded the code on the 3750G_WLC to match the other controller. Here are some questions I have now:
1. Am I right to think I need to set these two controllers in the same mobility group?
2. Am I right in thinking I will need to manually configure each AP to point to the controller I want it to be associated with? In other words, is there an AUTOMATIC load balancing feature? (NOTE: The customer just purchased and will have installed WCS)
3. Once we exceed 50 APs, will any other configuration changes need to happen?
This is all a little sketchy for me because we are right at the limit. So.. right now we are N+1 asking to load balance between the two. Once we exceed 50 APs, we're no longer N+1 and will need to designate which APs are high priority in case of failover.
Any tips on installing this second WLC into the network and any recommendations you may have for managing these 47 existing APs is appreciated.
So, when you define the primary and secondary controller, you specify the name of the controller. In 5.2 and above you can also specify the management IP address of that controller. The AP will learn about these controllers through the mobility group defined on the controllers.
About losing the controller: I didn't mean to configure priority after the fact. I meant that if you are trying to define priority (for the event that a controller is lost), you would set the priority on the High Availability tab of the AP configuration. You would want to do this when you install everything to begin with.
If you do not define primary/secondary, the AP should learn about the controllers from each controller, and when the controller sends its discovery replies, it should specify which one to load-balance to. Or something like that.
The bottom line is that if you don't have a primary/secondary defined, then the AP discovery process should get your AP on the least occupied controller. It is primary/secondary definitions that kind of overwrite this, which is what I was calling a "manual" load-balancing...
Does that help any more?
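To make the mobility-group piece from question 1 concrete, it is a couple of lines of WLC CLI on each controller; the primary/secondary preference discussed above is also settable from the CLI. The group name, MAC, IP, controller names, and AP name below are made-up placeholders:

```text
config mobility group domain CUST-MOBILITY
config mobility group member add 00:0b:85:40:a1:c0 10.1.113.11

config ap primary-base WLC-4402 AP-Floor1-01
config ap secondary-base WLC-3750G AP-Floor1-01
```

Each controller must list the other as a mobility group member (its management MAC and IP); the `config ap primary-base`/`secondary-base` lines are the "manual" assignment discussed above, and omitting them leaves APs to the discovery-based load balancing.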
Supporting your critical end-of-life Windows 2003 Application Servers.
Free webinar, Wednesday 7th July from 11:00 to 11:45.
For more information, read on, or join us on the day:
APPtechnology and Droplet will show you how you can safely support your legacy Microsoft Server 2003 and 2008 application server workloads for delivery across multi-platform infrastructures on physical, virtual or public cloud platforms.
Old server operating systems are still very much out there and continue to be used daily, even though they may have passed end of life long ago. This means that security and patching vulnerabilities are a big problem, as the OS is no longer maintained. However, organisations frequently find these systems cannot be upgraded to Windows 2016 or Windows 2019 without breaking their delicate configuration. It might be necessary to maintain access to archive data for legal reasons or business continuity. And upgrades are often expensive, time-consuming, risky, and offer no commercial benefit.
Droplet Computing provides a containerisation solution that delivers secure server containers to platforms such as VMware vSphere, Microsoft Hyper-V, and Linux KVM as well as cloud-based environments such as Amazon AWS and Microsoft Azure.
Working together with Droplet Computing support staff, existing physical and virtual machines are converted into the container format without the need to rewrite or refactor the existing server-based applications.
Droplet Computing server containers do not substantially modify the source server OS. Nor are the server services inside the container modified in any way. Organisations simply deploy the Droplet Server Container Appliance (DSCA) on top of their existing infrastructure.
For example, with the DSCA you can run Windows NT4, Windows 2003, Windows 2008, or Windows 2012 containing any number of required services. Those system services and applications run exactly as your Windows admins are familiar with, and with all the administrative features and functionality they need and expect to see.
Droplet Computing server containers can containerise the application and allow it to run on the Droplet appliance, isolated and securely. For example, you may have an app that is dependent on Windows 2003, which in turn requires an older version of .NET. Containerising that app with Droplet Computing server containers allows you to continue running the app securely.
Once the physical or virtual machine has been converted into a server container, we establish a secure connection from the Droplet client to the Droplet server container. This establishes an encrypted end-to-end connection between the client application and the server services – effectively forming a secure bubble around the entire legacy software stack. In this configuration, the only way in and out of the legacy environment is via the Droplet secured link. This security is seamless to the end-user.
The Droplet Server Container Appliance (DSCA) is compiled into the Open Virtualization Format (OVF) and imported into the existing VMware vSphere or Windows Hyper-V environment. Once deployed, it leverages the existing compute and storage resources of the virtualization cluster and creates a layer of abstraction above the hypervisor which allows for our advanced security controls.
Once deployed, the DSCA is configured for your management network and discrete network for the server containers. This network can either block or allow access to the wider network
dependent on your security posture and application requirements. Additionally, the DSCA can be made aware of your Storage Area Network (SAN), allowing Droplet server containers to be stored centrally and empowering infrastructure managers to leverage their investment in features such as snapshots or replication.
Working with one of our expert engineers, we will plan a conversion program with your infrastructure engineers in which we "lift and shift" existing physical or virtual machines into the Droplet container format. This can be achieved non-disruptively, without impacting the current uptime of the server services.
COmotion is the process by which a container can be moved from one Droplet Server Container Appliance (DSCA) to another. This enables the system administrator to redistribute containers for performance, as well as evacuate the DSCA of all containers ready for a maintenance window.
The DSCA maps its networks by creating a "bridge" to network interfaces that map to port groups on a VMware vSwitch, or to virtual subnets in Microsoft Azure or Amazon AWS. More sophisticated networking can enable native VLAN tagging inside the appliance, as well as support for bonding/teaming when the product is installed on physical hardware.
The DSCA supports a range of storage options including locally attached virtual disks, NFS, SMB/CIFS, and iSCSI. NFS and SMB/CIFS shares can be used to store container images off the appliance and support our COmotion technology. In an on-premises VMware vSphere environment, administrators can present block-based storage to the appliance using VMware's RDM feature, and then on to the Droplet container. In cloud environments, iSCSI connections are generally used where direct access to block-based storage is needed and an RDM feature is not available.
With Droplet Computing containers, apps are delivered completely isolated and secured away from the underlying OS of the device. This means that not only are the containers safe from attack (as tested by global cybersecurity experts NCC Group), they are also portable across device platforms without any change or update, enabling you to run your apps on different devices.
End users are presented with a secure and locked down workspace interface. The workspace allows them to launch only their apps via a tile system, gives no access to the container run times, and files cannot be copied and pasted between the container and the host device.
Together, the Droplet Server Container and Client Container lock together to create a secure “bubble” around the legacy application layer. This essentially makes it invisible on the network and protects it from hackers who want to steal your data.
The Droplet Computing client solution comprises two parts: the container app and the container image file. Being just two files means that build and deployment is quick, simple, and requires no additional infrastructure.
For mass deployment in the enterprise, the apps that get exposed to the end users are controlled by a small .json configuration file. This configuration file defines the apps and icons that are displayed to the end users and can be delivered via group policy. The container app can also be configured centrally using a simple settings configuration file which can also be centrally deployed.
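For illustration only, a tile-definition file of this kind might look something like the snippet below. Droplet's actual schema is not shown in this post, so every key and value here is hypothetical:

```json
{
  "apps": [
    {
      "name": "Legacy ERP Console",
      "icon": "icons/erp.png",
      "path": "C:\\ERP\\console.exe"
    }
  ]
}
```

The idea is simply that a small, centrally deployable text file (for example via group policy, as described above) controls which tiles and icons each end user sees.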
If you want to quickly deploy apps (modern and legacy) to home workers and those that are also using their own device, then you can easily copy the files onto a removable media device and physically distribute.
I think that you're going to be pleasantly surprised in the near future.
.NET really is what Java wanted to be. Java is still 'write once, break anywhere'. .NET isn't at the point where it will 'break anywhere', but Mono is bringing it closer!
Ok - silliness aside, we've seen quite a few major shifts in the MS attitude towards being more open and supporting various open source initiatives. e.g. DNN, Mono, etc.
The problem is that people fundamentally don't understand Microsoft and their attitude. MS has traditionally been hostile to OSS/FOSS/whatever, and is still against GPL licensing. There's a very good reason for it, though. Microsoft was built on the backs of third-party developers who created software for their platform. Offerings for Windows flourished where other platforms were more or less barren.
But why would others develop for Windows? Money, pure and simple. They can make a living doing it, or at least beer money in any event. Microsoft's best interests are served by helping its third-party developers make money. The GPL sinks this, so it's understandable that MS wouldn't cozy up to it. BSD licensing, on the other hand, makes sense: it enables a profit model. MS now has quite a few licenses that steer in this direction, e.g. the MS-PL.
The markets and technologies are getting to the point now that it makes sense for MS to open up more than in the past. .NET is clearly the way of the future in how it works.
The CLS is open for anyone to come up with an implementation (CLI/CLR/.NET). Only Novell has stepped up to the plate in a serious manner. Well, there is Portable.NET, but how far along is it? Mono seems to be the only serious contender.
MS is in no fear of having .NET being usurped for the moment. Looking at the various tools available:
Eclipse with BlackSun, MonoDevelop, X-Develop, etc.
And comparing them to Visual Studio? There's no comparison. VS is light years ahead of all of them.
While a small developer may choose an alternative to VS due to cost, for a company that needs to be productive, VS is the only option available. The alternatives might be ok for compiling or whatever, but productivity in VS is just leagues above anything at the moment.
Businesses are and will be the main focus for a long time. .NET was never marketed to small developers when it came out; MS only targeted enterprises and government. .NET is now feasible for small developers and becoming more and more attractive all the time.
I think Mono is going to be the major force there in pushing .NET (or the CLS) forward for cross-platform development. There really isn't much else. RealBASIC. ANSI C. Java. A few others. But nothing is really coming close to the very rich set of tools that you get with .NET.
Time will tell of course, but with 2 billion dollars of initial investment from MS in the CLS, and all they are pouring into .NET, there's no way MS will ever let .NET fail. Right now they need to address the cross-platform issue and the open source issue in order to remain relevant and expand. It's only good business for them to embrace what's going on right now. And this time, it's not the MS 'embrace and extend' going on, it's MS embracing, and others on the outside 'extending'.
If MS isn't careful then .NET is going to get surpassed by a similar project which isn't so wedded to one platform (Windows).
Another way to look at it is that there is absolutely nothing out there other than the CLS that remotely addresses the issue. Java? Well... Not quite. It's very fragmented. What flavour of Java?
So who's going to drop $2 billion to come up with something relevant? I don't see anyone other than Novell stepping up to the plate there.
This is a good thing for Novell because Netware has really lost its relevance. Novell needs something to keep it in the game in the future. They're moving towards Linux now, but they'll need more than 'just another Linux distro' to do it.
Anyways, let's hope for the best. Better and faster development tools are always a good thing.
The Optimization and Machine Learning group (ORISON) aims at developing real-world solutions based on the latest results of two strategic fields of today’s Computer and Data Science: Numerical Optimization and Machine Learning (i.e. data-based Modelling).
In that respect it seeks to sit at the top of two megatrends strongly driving innovation in the world today and for the foreseeable future: the optimisation of the usage of resources and the use of machine learning/artificial intelligence everywhere it can help improve current processes (industrial, logistics, or other).
Unlike research groups hosted in universities, which typically have academic perspectives, the ORISON research group belongs to an RTO and in that sense has a strongly applied focus: its goal is to leverage the best of state-of-the-art and mature academic concepts to solve real-world problems emanating from LIST partners or societal challenges. The breadth of problems ORISON can address is therefore very large, with the only prerequisite being adequate data or information/know-how about the problem considered to start working from.
Main expertise fields:
- Operations Research/ Numerical Optimization: local/global, non-linear, non-convex, mixed-integer and/or combinatorial optimization.
- Modelling: data-driven and expert-knowledge-based modelling, reinforcement learning
- Decision Support Systems & Simulation
The research field described for the ORISON group is today very much an applied one, and in that sense many of its tools can be developed or improved; there is even more room for furthering theoretical results in that direction (a second direction which is more the focus of university research groups). Such challenges belong to topic areas like:
- Investigating how to hit the best trade-offs between the complexity and accuracy of the models developed, in view of using them to best improve the considered problem/process by leveraging numerical optimization procedures.
- Investigating the profound and powerful links between numerical optimisation and data-driven modelling, machine learning and other branches of artificial intelligence. This pursues the quest towards automated approaches for various challenges such as combinatorial problems, the dynamic behaviour of real-time adaptive algorithms, and the ability to handle unpredictable instance characteristics (exact, stochastic and metaheuristic-based solutions). It seeks to contribute to a deeper understanding of the automated learning of model characteristics from data and of the impact of available knowledge on the quality of the optimized solution.
- Industrial applications or manufacturing (B2C, B2B, services or products for industry 4.0)
- Real-world scheduling, logistics, transportation,
- Cloud load balancing,
- Industrial Internet of Things applications
- Systems control/automation (e.g. robotics, energy, auto-mobility).
- OCTogone (collaborative project),
- SWAM (Bridge Project),
- SUCCESS CCC (EU project),
- Evacuate solution (PhD thesis, Peiman Alipour)
- Goodyear Data Science for Tires project (collaborative project)
- Control Tower (acting now) – COVID project funding
- Sarvari P.A., Ikhelef I.A., Faye S., Khadraoui D. A dynamic data-driven model for optimizing waste collection. 2020 IEEE Symposium Series on Computational Intelligence (SSCI 2020).
- Gutierrez-Gomez L., Petry F., Khadraoui D. A Comparison Framework of Machine Learning Algorithms for Mixed-Type Variables Datasets: A Case Study on Tire-Performances Prediction. IEEE Access.
- Rezgui D., Bouziri H., Aggoune-Mtalaa W., Siala J.C. A comparative study of local search techniques addressing an electric vehicle routing problem with time windows. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics).
- Imeri A., Lamont J., Agoulmine N., Khadraoui D. Model of dynamic smart contract for permissioned blockchains. CEUR Workshop Proceedings.
- Faye S., Melakessou F., Mtalaa W., Gautier P., AlNaffakh N., Khadraoui D. Proceedings of the 1st ACM International Workshop on Technology Enablers and Innovative Applications for Smart Cities and Communities, 2019, pp. 38-45.
- Singh K., Alipour P., Petry F., Khadraoui D. Application of machine learning & deep learning techniques in the context of use cases relevant for the tire industry. VDI Wissensforum, Hannover, October 2019.
- Sarvari P.A., Nozari M., Khadraoui D. The Potential of Data Analytics in Disaster Management. Industrial Engineering in the Big Data Era, Springer, Cham, 2019, pp. 335-348.
|
OPCFW_CODE
|
March 14, 2015 in Classic self-hosted technical help
My box runs cPanel. It has one 1 TB HDD and I am adding a 250 GB SSD.
I need help to set up the box to use the SSD for the system, databases and static files.
I want to make two 500 GB partitions on the HDD: one for logs and backups, and the other one as a mirror of the SSD in case of failure.
Is that possible to achieve what I want in Cpanel?
On the internet I could only find articles about running different cPanel accounts on different drives or partitions. That's not what I want. I want all accounts running from the SSD, only logs and backups on the HDD, and if possible an exact copy of the SSD to use in case of failure.
I think I understand what you want is:
SSD1 - hosts everything and runs everything
SATA1 - mirrors SSD1 for instant boot from this drive in the event the SSD fails
SATA2 - holds logs and backups of SSD1
To achieve what SATA1 does, what you want is called RAID1. In the event of a disk failure, with RAID1, the other disk keeps the system running without a hitch; it won't even go down. BUT!!! Mixing an SSD and a SATA drive in a RAID is a terrible idea and would throw away the performance of the SSD. So, I'd make two recommendations as alternatives.
Option 1 (best option, but $$$)
Get 2x SSD and set them up as RAID1 to run everything. Then you are protected from disk failure and get even higher performance. SATA2 is then for backups and logs.
Option 2 (without modifying physical components and sticking to your plan)
You're going to have to give up the idea of instant booting from another drive or constant running despite disk failure.
Install everything, including cpanel on the SSD.
Make 2 partitions on SATA2 (I'll call them sdc1 and sdc2). Mount /backup on sdc1 and the logs directory (I forget where...) on sdc2.
Install an identical OS on SATA1 with cPanel and everything (again). After installing the OS, leave the default boot as SSD1. On SSD1, mount SATA1 as some new directory. Set up an rsync daemon to clone the SSD's /home to SATA1's /home, so that they're always the same. You will not want to synchronize OS files. You do not want to sync SQL files either; they will get badly corrupted.
Now if the SSD fails, you change the boot record to boot from SATA1 instead. After booting, load the MySQL backup from SATA2 into SATA1's MySQL. Now you have a running server. If cPanel's SQL backup is not frequent enough for you, you can set up your own MySQL backups to make it work.
Option 3 (changing your plan)
Have the two SATA loaded as RAID1. Install OS and cPanel there. Mount /var/lib/mysql on your SSD.
Now your server will still be running if either disk fails, though your sites won't. To recover, you'd unmount the SSD, or after getting a new physical SSD, reinstall MySQL and load from backup.
I am in the process of doing this tonight - all I did was text my hosting guy and make it happen.
We are upgrading the RAM and adding an SSD in the open slot - on that SSD will be SQL and my HOME directory; SATA1 will hold the remaining home directories and the OS. SATA2 is for the backups. I think that's how it will be.
I don't know if my cPanel license will allow me to install the software on sdc1. I will have to
This topic is now archived and is closed to further replies.
|
OPCFW_CODE
|
What should be the word vectors of token <pad>, <unknown>, <go>, <EOS> before sent into RNN?
In word embedding, what should be a good vector representation for the start_tokens _PAD, _UNKNOWN, _GO, _EOS?
Very vague question. Input vectors and target vectors are both derived from a collection of texts. RNNs then learn weights for hidden layers that appear to represent relationships among words and texts.
The input vectors for RNNs are usually either word-document co-occurrence matrix weighted by TF*IDF or a word-word co-occurrence matrix (neighbors).
What should be vectors if you have pretrained word2vec model from google and you don't want to train embeddings again. Should it be just vector of zeros or ones or something different?
Spettekaka's answer works if you are updating your word embedding vectors as well.
Sometimes you will want to use pretrained word vectors that you can't update, though. In this case, you can add a new dimension to your word vectors for each token you want to add and set the vector for each token to 1 in the new dimension and 0 for every other dimension. That way, you won't run into a situation where e.g. "EOS" is closer to the vector embedding of "start" than it is to the vector embedding of "end".
Example for clarification:
import numpy as np
# assume vector_embedding is a dictionary and word embeddings are 3-d before adding tokens
# e.g. vector_embedding['NLP'] = np.array([0.2, 0.3, 0.4])
vector_embedding['<EOS>'] = np.array([0, 0, 0, 1])
vector_embedding['<PAD>'] = np.array([0, 0, 0, 0, 1])
new_vector_length = vector_embedding['<PAD>'].shape[0]  # length of the longest vector
for key, word_vector in vector_embedding.items():
    zero_append_length = new_vector_length - word_vector.shape[0]
    vector_embedding[key] = np.append(word_vector, np.zeros(zero_append_length))
Now your dictionary of word embeddings contains 2 new dimensions for your tokens and all of your words have been updated.
This is an interesting idea. Do you know if this is how it is done in most popular embeddings?
I don't know about the popularity of this, but this method has worked for me in the past when creating a seq-2-seq model.
Did you come up with this idea yourself?
As far as I understand you can represent these tokens by any vector.
Here's why:
When inputting a sequence of words to your model, you first convert each word to an ID and then look up which vector in your embedding matrix corresponds to that ID. That vector is what you train your model on. But the embedding matrix itself also contains trainable weights, which are adjusted during training. The vector representations from your pretrained vectors just serve as a good starting point for reaching good results.
Thus, it doesn't matter that much what your special tokens are represented by in the beginning as their representation will change during training.
It is common to use word2vec features as frozen embeddings that don't need further training to minimize training time. You can, of course, but you often don't when the embeddings are just part of a much larger whole.
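As a concrete illustration of the frozen ID-to-vector lookup described above (all numbers and tokens here are toy values, not pretrained weights):

```python
import numpy as np

# A frozen pretrained embedding is just a fixed lookup table: training updates
# the model's other weights, never these rows.
vocab = {'<PAD>': 0, '<EOS>': 1, 'hello': 2}
embedding_matrix = np.array([
    [0.0, 0.0, 0.0],   # <PAD>: an arbitrary fixed vector; zeros are common
    [0.0, 0.1, 0.9],   # <EOS>
    [0.2, 0.3, 0.4],   # 'hello' (pretend this row came from word2vec)
])

def embed(tokens):
    ids = [vocab[t] for t in tokens]   # token -> ID -> row lookup
    return embedding_matrix[ids]

vectors = embed(['hello', '<EOS>'])
print(vectors.shape)  # (2, 3): one 3-d vector per input token
```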
|
STACK_EXCHANGE
|
@mw12554 Maarten can you comment on this (above):
With all respect for the work done: committing code that breaks is bad practice.
Both single channel gateway implementations below give errors during compilation in Arduino IDE v1.8.4.
(And even more issues in PlatformIO, but the latter is probably due to my limited PlatformIO experience.)
I used a custom pin configuration because I wanted to add an SSD1306 OLED (I2C) display for showing status messages. That did not work out yet, however, because I ran out of available pins/ports.
Hallard’s board connects RFM95W’s DIO0, DIO1 and DIO2 all to a single ESP8266 GPIO port (with diodes, using a separate diode per DIOx port). I don’t have Hallard’s board but I may give that a try myself. I read somewhere (can’t remember) that this is not an ideal situation however.
Andreas Spiess has a version of the software that already includes support for the SSD1306 display, but it is based on the much older things4u v1.0 version: https://github.com/SensorsIot/ESP-1ch-Gateway
- NodeMCU v1.0
- RFM95W on the following adapter board: https://webshop.ideetron.nl/ADAPT_RFM95W
(note: pinout shown on the webpage is not from the RFM95W adapter)
- A regular breadboard
(I did solder the RFM95W module onto the adapter board manually.)
I succeeded in getting both version 3 and 4 of the things4u ESP single channel gateway working with PlatformIO.
Including the ‘Hallard’ pinout: connecting DIO0 and DIO1 via diodes to a single GPIO port, which left D1 and D2 free for the display.
While trying to build a SC gateway for development purposes only, I encountered an issue when connecting an ESP8266 to an RFM95W with the following software: https://github.com/things4u/ESP-1ch-Gateway-v4.0
I followed the readme and chose a “COMRESULT” pcb layout.
My connections are the following:
RFM95W ------> ESP8266
DIO 0/1/2 from RFM to D1/D2/D3 esp
G from RFM to G esp
SCK from RFM to D5 esp
miso from RFM to D6 esp
mosi from RFM to D7 esp
NSS from RFM to D8 esp
3.3V from 3.3V
The problem is that it can receive packets (and forward them to TTN), but it is unable to send anything back (so OTAA will fail).
Can anybody spot my flaw? Maybe the RFM95W is different to other LoRa-modules available?
In order to have the gateway send downlink messages on the pre-set spreading factor and on the default frequency, you have to set the _STRICT_1CH parameter to 1. Note that when it is not set to 1, the gateway will respond to downlink requests with the frequency and spreading factor set by the backend server. At the moment, TTN responds to downlink messages for SF9-SF12 in the RX2 timeslot, with frequency 869.525 MHz and on SF12 (per the LoRaWAN specification for sending in the RX2 timeslot).
…but for LMiC, disabling uplink frequencies might not affect the downlink frequencies, in which case you need to ensure the frequencies as commanded by TTN are used by the node, just as usual in LoRaWAN.
I can follow your thoughts but unfortunately it doesn’t make a difference with my gateway. Still searching for the problem
8 posts were merged into an existing topic: Node not showing up in Applications
I was experimenting with TTN last year with an RPi-based single channel gateway and RFM95- and RN2483-based nodes. The RPi and MCUs were later re-used for other projects. Not long after, the TTN environment was updated/changed.
I decided to wait until the TTN gateways that were planned for my neighbourhood become available before picking up again. However, availability of the ‘The Things Network Gateway’ devices (and their roll-out) has been seriously delayed and there still is no TTN coverage in my area. I decided to setup a single channel gateway again instead and continue from there.
I currently have a Things4U ESP single channel gateway v5 running (hardware: ESP-12E, RFM95W and SSD1306 OLED display). I am not yet able to test it due to a lack of nodes. The next step is setting up some nodes that will work with this gateway.
I have the following questions, hope you can help:
Where can I find the most actual ‘standard reference’ code for a TTN node (RFM95/SX1276 based) for AVR (e.g. Pro Mini) and ESP8266?
How to properly configure the nodes / adapt the code so the nodes will work successfully with the Things4U single channel gateway?
Both ABP and OTAA. I’m looking for some clear step-by-step instructions and/or code examples.
I have previously used Thomas Telkamp’s single channel packet forwarder for Raspberry Pi software but that is no longer maintained (“this repository is deprecated, and the code is not up-to-date to use on The Things Network”).
What is the best alternative/most up to date software for a Raspberry Pi + RFM95W/SX1276 based single channel gateway?
I have been playing for some weeks with a BME280 and an Arduino Pro Mini with the code from lex (LoRaWAN_TTN_Env_Node). Everything was fine. For some days now (I also tried the Cayenne LPP integration) I haven't received traffic, either in the traffic tab of my two gateways within the TTN console or in the data tab of my device. The gateways are a Dragino RPi HAT with the single_channel_paket_forwarder and the Dragino LG01-P. Both gateways are connected to TTN.
I also switched to the "relax frame counter" option, and I can see the data packets sent by the RPi gateway in the RPi terminal. I also get EV_TXCOMPLETE in the console. I use ABP, btw.
Any ideas what’s wrong ?
Thanks for your help !
I have a single channel gateway (https://github.com/things4u/ESP-1ch-Gateway-v5.0) running.
Have no nodes running at the moment but the gateway picks up a few messages per day.
In TTN Console / Gateway / Gateway Overview I can see that the messages are received by the backend but no traffic is showing up in Gateway Traffic.
The gateway is only running since a few days. I have not seen any traffic showing up in ‘Gateway Traffic’ yet.
This appears similar to your issue.
I have a problem with my Single Channel Gateway and a sensor node.
My gateway is a Raspi + Dragino-LoRa-GPS-Shield. The sensor node is an ArduinoUNO with a Dragino-LoRa-Hat.
The code I use for the node is this:
The right keys from the console are inserted in MSB order.
I can receive frames from the node in the gateway traffic window but in the application data and device data there are no frames at all and the device status is “never seen”.
The device is activated over ABP.
I tried resetting the Frame Counter Checks without success. Deactivating Frame Counter Checks doesn't help either.
I hope you can help me. There are a few threads with the same or nearly the same problem, but the answers don't help in my case.
Not an answer to your question but I’m curious which single channel gateway software you are running on the RPi. Do you have a link?
Have you noticed the following?:
*** IMPORTANT ***
Please note this repository is deprecated, and the code is not up-to-date to use on The Things Network.
This repository will not be further maintained. Please find another repository if you want to deploy a single channel gateway.
Did you make any changes to make it ‘up-to-date’ (I’m not sure what parts need an update, if any)?
I found this guide:
It is based on this repository and when you follow the instructions, there will be some changes in the code…
the changes depend on your location of course…
I don’t see any code changes there, only configuration.
I’m not sure if any changes are needed, but “deprecated” and “not up-to-date” suggested to me that the code may have to be adapted to changes made on the TTN network.
I’m looking for an up-to-date version of a single channel gateway for RPi myself. No response on my question yet though (Single Channel Gateway part 2).
I’ve been trying to run a single channel gateway with downlink capability but I could not get it to work. I’m trying the C++ version now, without success. I could make it work only with the Lua version.
Does anyone could run the version in C++ with downlink?
Additionally, I could not run the new 5.0 version. It causes the ESP to restart in a loop.
Does anyone have the same error? And could solve it?
Could your problem be related to incorrect wiring/configuration?
I have Things4U esp-1ch-gway v5.0 running on ESP8266 (ESP-12F) and get feedback on serial monitor and OLED display, no restarts here.
But it does not see my nodes yet. I tried with both _STRICT_1CH enabled and disabled on the gateway and tried nodes with both ABP and OTAA but the gateway does not pick up any traffic from my nodes.
I’m going to recheck my wiring.
|
OPCFW_CODE
|
ProHacktive has created Sherlock, an automated, plug-and-play cybersecurity auditing box. The solution protects you from attacks by giving you a complete, real-time inventory of all devices on your network and their security vulnerabilities, and, for each of them, simple patching proposals for your fleet administrator to carry out.
A successful attack is an infection on a workstation that spreads from device to device until it takes over your entire network. Sherlock's promise is as simple as it sounds: when vulnerabilities are patched, the infection stays local. No spread to the network, no widespread cyberattack.
How it works
- Every hour, the ProHacktive servers update themselves from the known vulnerability databases to discover new vulnerabilities.
- Every hour, our customers' boxes fetch the newly discovered vulnerabilities.
- Every night, our customers' boxes fetch the new features our developers have prepared for you.
The discovery of the environment
- At startup, the Sherlock box detects its network configuration and immediately discovers its neighbours.
- As soon as the Sherlock box sees an address it doesn't know yet, it tries to find machines in this new subnet as well as in the neighbouring subnets.
- For each detected machine, the Sherlock probe tries to find out which services are running on that machine.
- For each detected service, the Sherlock probe then retrieves the list of known vulnerabilities from its internal database.
- If a detected vulnerability matches a validating scan module, the Sherlock probe uses that module to verify the relevance of the discovery.
- When validating a vulnerability, the probe generates a random subdomain for each test.
- The probe then uses this random subdomain to launch a simulated attack.
- The Sherlock probe then monitors this domain name for a few minutes.
- If the server reports that this domain name has been queried, the probe can deduce that the attack is possible, without having touched anything on the production server.
- At the end of each audit, the Sherlock probe summarizes the results found and calculates the final score.
- If the rating has degraded (a new vulnerability has emerged, for example) or it has been a long time since the end customer received a report, a new audit report is sent.
- Whatever happens, a detailed audit report is generated locally and can be retrieved at any time via the GUI.
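The random-subdomain validation in the steps above can be sketched as follows. Names here are invented for illustration, not ProHacktive's implementation: each test embeds a unique random subdomain in a benign probe; if that subdomain later appears in the authoritative DNS logs, the target must have processed the probe, which confirms the vulnerability without changing any state on the production server.

```python
import secrets

def make_probe_domain(base='callback.example.com'):
    # One unique, unguessable token per test so hits in the DNS logs can be
    # attributed to exactly one probe.
    token = secrets.token_hex(8)   # 16 hex characters
    return f'{token}.{base}', token

domain, token = make_probe_domain()
# was_queried = dns_logs_contain(token)  # hypothetical check of the DNS logs
print(domain.endswith('.callback.example.com'))  # True
```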
Exchanges with ProHacktive
- Every minute, Sherlock probes communicate with the ProHacktive infrastructure to report their health status.
- The ProHacktive teams, as well as partners, can therefore monitor the health of your box and, above all, validate that its configuration is correct.
- In case of problems, and at the request of the ProHacktive teams, your Sherlock probe can connect to our infrastructure so that internal teams can troubleshoot it.
|
OPCFW_CODE
|
Java 21 makes me like Java again
22 projects | news.ycombinator.com | 16 Sep 2023
and also FOSS (Apache 2): https://github.com/JetBrains/intellij-community (as well as PyCharm found in the "python" subdirectory)
Predictive Debugging: A Game-Changing Look into the Future
5 projects | news.ycombinator.com | 1 Aug 2023
Interesting that you didn't even mention their open-source Community Edition: https://github.com/JetBrains/intellij-community which powers plenty of other open source IDEs
Nice FUD ;)
5 projects | news.ycombinator.com | 1 Aug 2023
Ask HN: Small scripts, hacks and automations you're proud of?
49 projects | news.ycombinator.com | 12 Mar 2023
> I can make the batch file chimeric so it can run in a linux shell as well. Maybe that'll to be my lazy Sunday afternoon?
You may want to draw inspiration from the latest incarnation of JetBrains build bootstrapping script:
Gnome Builder as a lean IDE for Rust ?
2 projects | /r/rust | 23 Feb 2023
IntelliJ IDEA Community Ed. is open source: https://github.com/JetBrains/intellij-community
2 projects | /r/learnpython | 7 Feb 2023
"Gorshkov" launched a "Tsirkon" in the Atlantic. But there is a nuance..
2 projects | /r/liberta | 26 Jan 2023
Kotlin is tightly linked to IntelliJ and that's a risk
4 projects | /r/Kotlin | 22 Jan 2023
Remember too that IntelliJ Community Edition is Apache-licensed open source, as is Kotlin itself and all the Kotlin tooling. There's nothing proprietary.
SLT – A Common Lisp Language Plugin for Jetbrains IDE Lineup
2 projects | news.ycombinator.com | 15 Jan 2023
w.r.t. "THIS PLUGIN IS EXPERIMENTAL and can crash at any time! Please report all bugs!", there are quite a few existing plugins that capture exceptions from their plugin and send those error reports in some automated way (email, Sentry, I think one even had a GitHub issues action using an api token): https://plugins.jetbrains.com/intellij-platform-explorer/ext...
I mention this because (a) automated error reports are able to carry context with them that less experienced bug reporters might not know to send (b) it's a pet peeve of mine when an app asks me to gather up version and platform info that it already knows
This is the gory details: https://github.com/JetBrains/intellij-community/blob/idea/22...
You will never avoid rabbit holes
7 projects | /r/ProgrammerHumor | 25 Dec 2022
Nope, the community Edition is completely open source: https://github.com/JetBrains/intellij-community
What are some alternatives?
pylance-release - Documentation and issues for Pylance
oh-my-posh - The most customisable and low-latency cross platform/shell prompt renderer
vscode-kotlin - Kotlin language support for VS Code
kotlin-vim - Kotlin plugin for Vim. Featuring: syntax highlighting, basic indentation, Syntastic support
Apache NetBeans - Apache NetBeans
theia - Eclipse Theia is a cloud & desktop IDE framework implemented in TypeScript.
Code-Server - VS Code in the browser
KotlinLanguageServer - Kotlin code completion, linting and more for any editor/IDE using the Language Server Protocol
kotlin-sublime-package - Sublime Text 2 Package for Kotlin Programming Language
vscodium - binary releases of VS Code without MS branding/telemetry/licensing
Light Table - The Light Table IDE ⛺
rd - Container Management and Kubernetes on the Desktop
|
OPCFW_CODE
|
54 Open Source Cni Software Projects
Free and open source cni code projects including engines, APIs, generators, and tools.
Kube Ovn 1134 ⭐
A Kubernetes Network Fabric for Enterprises that is Rich in Functions and Easy in Operations
Kube Spawn 443 ⭐
A tool for creating multi-node Kubernetes clusters on a Linux machine using kubeadm & systemd-nspawn. Brought to you by the Kinvolk team.
Cni Genie 449 ⭐
CNI-Genie for choosing pod network of your choice during deployment time. Supported pod networks - Calico, Flannel, Romana, Weave
Kubernetes The Ansible Way 89 ⭐
Bootstrap Kubernetes the Ansible way on Everything (here: Vagrant). Inspired by Kelsey Hightower's kubernetes-the-hard-way, but refactored to Infrastructure-as-Code.
K Vswitch 80 ⭐
k-vswitch is an easy-to-operate, performant and secure Kubernetes networking plugin based on Open vSwitch
Agorakube 78 ⭐
Agorakube is a Certified Kubernetes Distribution built on top of CNCF ecosystem that provides an enterprise grade solution following best practices to manage a conformant Kubernetes cluster for on-premise and public cloud providers.
Kubernetes For Windows 44 ⭐
Ansible playbooks and Packer templates for creating hybrid Windows/Linux Kubernetes 1.10+ cluster with experimental Flannel pod network (host-gw backend)
Bond Cni 44 ⭐
Bond-cni is for fail-over and high availability of networking in cloudnative orchestration
Ansible Kubeadm Contiv 27 ⭐
Ansible script to provision kubernetes cluster with contiv network CNI, also deploys sample applications with micro segmentation and istio service mesh.
Midonet Cni 26 ⭐
A CNI plugin written in Go which makes MidoNet talk to Kubernetes, with support for multiple namespaces.
K8s With Nsx T 2.4.x 16 ⭐
This repository contains a series of posts which outlines the steps to integrate NSX-T 2.4.x with K8S
K8s With Nsx T 2.5.x 14 ⭐
This repository outlines and explains the steps to integrate NSX-T 2.5.x with K8S
Cni Migration 42 ⭐
A CLI to migrate the CNI on a Kubernetes cluster from Canal (Calico + Flannel) to Cilium, live with no downtime.
Kilo 1207 ⭐
Kilo is a multi-cloud network overlay built on WireGuard and designed for Kubernetes (k8s + wg = kg)
Alibaba Hybridnet 86 ⭐
A container networking solution aimed at hybrid clouds, providing a hybrid (overlay and underlay) network at node level.
Vsphere Kubernetes Drivers Operator 19 ⭐
vSphere Kubernetes Driver Operator to simplify and automate the lifecycle management of CSI and CPI for Kubernetes cluster running on vSphere
Flintlock 19 ⭐
Lock, Stock, and Two Smoking MicroVMs. Create and manage the lifecycle of MicroVMs backed by containerd.
Slirp Cni Plugin 12 ⭐
A user-mode network ("slirp") CNI plugin - container networking for unprivileged users
|
OPCFW_CODE
|
Add helper features for manipulating outputs and sources
In working on #60 and #62 it has become clear that developers will be changing csv records into something useful for addressing in memory and the other way around.
We should make it much easier for them to do so rather than having to write code like this over and over again:
def counts_to_output(counts):
return [
{ 'parent_id': key, 'sibling_count': value }
for key, value in counts.items()
]
def source_to_counts(rows):
return {
row['parent_id']: row['sibling_count'] for row in rows
}
Our design doc had the notion of adding to the row_step annotation the ability to specify lookups from sources and index them.
Lisa's comment with an alternate suggestion (https://github.com/lisad/phaser/pull/92#discussion_r1544840384):
Maybe there are enough sources and outputs that are dictionaries for mapping, rather than rows, that we should just support that as a IO type?
Loading in "State capitals" as mapping data to use to enrich - would be able to load it in as a dictionary from the beginning
Outputting "Employee count by manager" as an extra output would save it out as a dictionary
and in this example of generating mapping data in one phase and using it in another, it would stay in that format throughout
The format to save mapping data as a dict can still be a CSV with a index column as the first column, or when we support JSON that has a native dict format.
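The CSV-with-index-column format described above loads straight into a plain dict; a minimal sketch (column names are illustrative, not part of phaser's API):

```python
import csv
import io

# A "lookup" saved as CSV: the first column is the index, the second the value.
raw = "state,capital\nOH,Columbus\nTX,Austin\n"
reader = csv.DictReader(io.StringIO(raw))
lookup = {row['state']: row['capital'] for row in reader}
print(lookup['TX'])  # Austin
```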
I think your instinct to add more functional data structures is correct. I'm using Python the way I'm used to, which is as the end-user of libraries: keeping things loose so I can change them later (and it was always easy to quickly ship a bug fix). But I think it's appropriate to move toward defined, limited data structures and types in a library, because we can't ship bug fixes to people quickly and may not even hear about bugs... so more safety is warranted.
As a developer, I want to write steps and phases with extra outputs and sources like this:
@row_step(outputs=['counts'])
def increment_counts(row, outputs):
parent_id = row['parent']
outputs.counts[parent_id] += 1
return row
class EnrichmentPhase(Phase):
extra_outputs = [ 'counts' ]
steps = [ increment_counts ]
I think we want a class that encapsulates what type of data the output will be (a lookup-style dictionary or a list of records). That would be specified in the phase since multiple steps may want the same output and should not define it each time.
class EnrichmentPhase(Phase):
extra_outputs = [ Lookup('counts') ]
steps = [ increment_counts ]
Then the phase can manage the variable to collect the data and add it to the context as an extra output when all of its steps are completed.
A Lookup would be saved as a CSV with an index column or as JSON. The step would treat a Lookup as a dict. There could be an analogous Records class that would be saved as regular CSV. When a step gets a Records object, it would treat it as a list.
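A hypothetical sketch of that Lookup idea (class and method names are assumptions, not phaser's actual API): the object knows how to move between the row-oriented form that gets saved as CSV and the dict form a step wants to index into.

```python
class Lookup:
    def __init__(self, name, key='id'):
        self.name = name
        self.key = key
        self.data = {}

    def load(self, rows):
        # rows: a list of dicts, e.g. parsed from a CSV with an index column
        self.data = {row[self.key]: row for row in rows}

    def dump(self):
        # back to a list of dicts so the phase can save it as a regular CSV
        return list(self.data.values())

departments = Lookup('departments', key='dept_id')
departments.load([{'dept_id': 7, 'name': 'Engineering'}])
print(departments.data[7]['name'])  # Engineering
```

An analogous Records class would skip the dict step and hand the step a plain list.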
Something similar would work for extra sources:
@row_step(sources=['departments'])
def assign_department(row, sources):
dept_id = row['dept_id']
row['dept'] = sources.departments[dept_id]
return row
class EnrichmentPhase(Phase):
extra_sources = [ Lookup('departments') ]
steps = [ assign_department ]
It feels weird to use row_step for that though, and when I get back to batch_step it seems pretty straightforward. I like the formatting of extra outputs and sources though; riffing on that and adding a save format:
@batch_step
def increment_counts(batch, context):
    child_counts = context.outputs['counts']
    for row in batch:
        child_counts[row['parent_id']] += 1
    return batch
class CalculationsPhase(Phase):
    extra_outputs = [ Lookup('counts', format='json') ]
    steps = [increment_counts]
Two other alternative ways to specify and receive outputs and sources:
The sources and departments are passed in as arguments directly instead of being accessed through another object:
@row_step(sources=['departments'], outputs=['counts'])
def do_step_stuff(row, departments, counts, context):
pass
The sources and outputs are annotated on the parameters to the function rather than in the decorator (I do not know if this is possible in Python, but it may be my favorite way to do this, ergonomically).
@row_step
def do_step_stuff(row, @output('counts') count_dict, @source('departments') depts):
pass
It feels weird to use row_step for that though
I can see the annotations and extra parameters used for any kind of step, not just @row_step.
What I like about having parameters for sources and outputs is that it will make writing tests for steps more straightforward and clear. Passing in a context object that has the right things set in it is a little more of an opaque operation. I've seen APIs like this get hard to manage as the test suite increases or as dependencies change.
These are excellent points; it should look good and be readable when folks test their steps, and I agree we should find some way of passing inputs and outputs other than on the context. I wonder if we should also find ways to return the results? Not just the batch but the counts? It wouldn't work well on row_step though, so probably not a good idea...
class MyPipeline(Pipeline):
counts = IOObject(style='lookup', format='json')
valid_parents = IOObject(style='list', format='csv')
phases = [CalculationsPhase]
@batch_step(extra_inputs=[MyPipeline.valid_parents], extra_outputs=[MyPipeline.counts])
def increment_counts(batch, valid_parents, counts):
    for row in batch:
        if row['parent_id'] in valid_parents:
            counts[row['parent_id']] += 1
    return batch, counts
class CalculationsPhase(Phase):
extra_inputs = [MyPipeline.valid_parents]
extra_outputs = [ MyPipeline.counts]
steps = [increment_counts]
I believe this is possible - the @batch_step wrapper would look at the parameters of the function it's wrapping and supply the extra input which is also declared on the phase and the pipeline. The wrapper would also grab the return results of the step and return the batch the normal way and assign the other output to the pipeline's collections of data.
|
GITHUB_ARCHIVE
|
Using default nameservers vs. hosting nameservers: what is the difference between a nameserver and DNS? For example, I have a domain, nomadapp.in, registered at GoDaddy; for temporary hosting I was using DreamHost and had changed the nameservers in GoDaddy to DreamHost's. The name server my own machine uses is 192.168.1.1.

A name server is the server component of the Domain Name System (DNS), one of the two principal namespaces of the Internet. Its most important function is the translation (resolution) of human-memorable domain names (example.com) and hostnames into the corresponding numeric Internet Protocol (IP) addresses (e.g. 184.108.40.206), the second principal namespace. (Although many people think "DNS" stands for "Domain Name Server," it really stands for "Domain Name System.") You can think of DNS as the directory of the Internet: it is searched for the IP address associated with a given domain name so that web-based services can be delivered.

DNS records are what contain the actual information that browsers or other services need to interact with a domain, like your server's IP address. There are many types of DNS records: an "A" record points to an IPv4 address, AAAA to an IPv6 address, MX is for mail, NS names the authoritative name servers, and so on. A Name Server (NS) record is therefore just one part of the whole DNS. Nameservers, in turn, store and organize those individual DNS records: if a DNS record is like a phone number in a phonebook, the nameserver is the phonebook. Your hosting provider will typically give you two or more nameserver records specific to the server your website is currently on. As an analogy, if you upload a video to YouTube.com, you are the domain and YouTube is your host. Counting all name servers is a bit tricky, because you can't simply enumerate all the domains; where the translations are hosted rather than cached, the name server is often called a DNS server.

Resolving superuser.com (or example.com) in a non-cached way works like this: the DNS system is checked for the name server responsible for example.com; that name server is asked for the IP address ("A record") of example.com; the browser then has the IP address of example.com's web server and makes an HTTP request to it, asking for the website "example.com". A local resolver such as unbound will have the authoritative name server translate domaine.tld to an IP and hand the result back; unbound then hands it to the client. One advantage of unbound is that it is compatible with DNSSEC. (The two DNS tools unbound and dnsmasq are mutually exclusive; you can use only one of them.)

As for the Cisco IOS commands: ip name-server configures which DNS servers the router itself uses to resolve names, while dns-server (in DHCP pool setup) tells DHCP clients which DNS servers they should use.
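For clarity, here is an illustrative IOS configuration showing the two commands side by side (the addresses are documentation-range placeholders):

```
! Resolver used by the router itself (e.g. when you ping a hostname):
ip name-server 192.0.2.53
!
! Resolver handed out to DHCP clients from this pool:
ip dhcp pool LAN
 dns-server 192.0.2.53
```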
|
OPCFW_CODE
|
Changes to Configuring Retail Pro Prism Logging in Versions 1.14.7 and Above
In previous Retail Pro Prism versions, the logging was configured by either manually editing the Prism service INI file or by using the Prism TechToolKit.exe.
For Retail Pro Prism versions 1.14.7 and above, logging will be configured using the Web Tech Toolkit.
PrismLogging.ini is used for all log settings - How it Works
All log settings are located in PrismLogging.ini. This file is created in server\conf during the Retail Pro Prism installation for version 1.14.7 or later. Any existing LOG sections in services' config INIs are ignored. The installer does not remove these deprecated LOG settings, but they will be removed when the INI files are updated using Web Tech Toolkit.
PrismLogging.ini has a [global] section containing all log keys. These are used by all services and apps using the Logger unless a different setting has been entered in the file for the service.
The section name for an app or service in the PrismLogging.ini is the base file name of the binary that is using the Logger. So, if a log setting in a service's section differs from the global setting, the service will use the specific setting entered for the service.
When first installed, PrismLogging.ini will have only a [global] section with default log settings. The global values can be edited, or specific service settings can be added & edited, manually or by using Web TTK. These settings cannot be accurately viewed or updated using the native Desktop Prism Tech Toolkit. Native TTK will be removed from Retail Pro Prism installations with the Retail Pro Prism 2.0 release.
Updating PrismLogging.ini with Prism Web Tech Toolkit
- Updates to the settings for a service are made in the Services tool. The LOG settings are displayed grouped with the service's other INI settings, as they were in the native Desktop TTK, even though the LOG settings are maintained not in the service's INI but in a separate PrismLogging.ini. This grouping provides consistency with past UI displays and adds ease of use, rather than separating LOG settings in a separate web form.
- When a change is made to a service's log setting, if that change is different from the existing global setting, Web TTK will create a section for the service in PrismLogging.ini if it does not exist and write the Key / Value pair to the section.
- If the change being made is the same as the existing global setting, the Key / Value pair in the service's section will be removed, since the global value will automatically apply without a setting for the service. This prevents duplicative clutter in PrismLogging.ini, which would otherwise accumulate a full set of logging keys for numerous services and apps. That is unnecessary, makes it difficult to examine the file when analyzing or debugging possible logging issues, and requires more time for services to load and read settings.
- As such, in general, PrismLogging.ini will only contain the global settings and the services' exceptions when using Web TTK for updates. There are some instances in which there could be duplication of a service setting with the global value.
Updates to Global Log Settings
- Global settings can be viewed and edited in the Services tool in the "Prism Stack" section.
- Changes to global settings only update the global settings and do not overwrite an existing setting for any service. At this time there is no feature to set all services to the same log setting using a 'global update' process. The global setting will apply to any service or app that does not have the same setting specified in its section, as explained above.
- Changing a global setting to the same value as may be specified for a service will not remove the service's Key / Value pair as would a similar update to the service's setting. This bit of cleanup may be added in a future release.
The following logging levels are supported.
• llNone = 0
No log entries will be written to the log. The log file will not exist.
• llMinimal = 1
Logs only ltError
• llNormal = 2
Logs ltError, ltWarn
• llVerbose = 3
Logs ltError, ltWarn, ltInfo, ltFlow, ltEntry, ltExit
• llDebug = 4
Logs ltError, ltWarn, ltInfo, ltFlow, ltEntry, ltExit, ltDebug
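Putting this together, a PrismLogging.ini might look like the hypothetical example below. The key name LOGLEVEL and the service section name are illustrative assumptions, not confirmed Prism key names; only the layout (a [global] section plus per-binary exception sections) follows the description above.

```ini
; Hypothetical example - key and section names are illustrative.
[global]
LOGLEVEL=2            ; llNormal: logs ltError and ltWarn

; Section name = base file name of the service binary using the Logger.
[PrismMGMTConsole]
LOGLEVEL=4            ; llDebug for this service only; others use [global]
```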
|
OPCFW_CODE
|
How can I make the return-type of a function taking a sequence as input dependent on that sequences types?
I am working on a library able to perform SNMP set operations. There is a function that has a signature like the following:
def multiset(ip: str, community: str, oids: List[str], values: List[SnmpValue]) -> List[SnmpValue]:
...
This is slightly altered from the real signature to illustrate the typing issue a bit better. In normal operation, the function returns the values that were set, as a list of the same types. In this example that looks redundant, but in the real code there is a use-case for this (error detection); explaining it would make this typing question a bit too convoluted.
The core is that I have a function that takes a sequence of values and returns a sequence of the same types. The types are only really visible on a line that calls that function.
Another way to illustrate this is the code example below:
from typing import Generic, TypeVar, Union, List

TSupportedTypes = Union[int, str]
T = TypeVar('T', bound=TSupportedTypes)

class Wrapper(Generic[T]):
    def __init__(self, value: T) -> None:
        self.value = value

def process(data: List[Wrapper[TSupportedTypes]]) -> List[TSupportedTypes]:
    output = []
    for item in data:
        output.append(item.value)
    return output

processed = process([Wrapper(1), Wrapper('foo')])

# Is it possible to make this work without an "isinstance" check?
processed[1].startswith('f')
In the last line, the type-checker only knows that each value in the list is either an int or a str. In that case, the types are only known at the time process is called.
In the above case, the type-checker complains that int has no attribute startswith, but the code would work nonetheless.
Is there a way to tell the type-checker that whatever returns from the process function is the same as that what went into it?
Currently I use a healthy dose of Any hints, but that defeats a good part of the type-checking in this code and I would be really curious to find out if this works.
Why not group all the possible OIDs and their types into a TypedDict(total=False) or a dataclass and pass the dict/object around? That way all the OID values are strictly type-checked when assigned.
That's a really good idea, but I am currently restricted to Python 3.5. With luck that will change in the coming 4-6 months.
TypedDict is available to the older versions of Python via the typing-extensions module.
oooh... nice... it even supports protocols <3 I will definitely give that a go. In any case, I think the TypedDict solution will work for my use-case. If you reformat this as an answer I will accept it.
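A sketch of the TypedDict suggestion. The OID field names here are invented for illustration, and on Python versions before 3.8 TypedDict comes from the typing-extensions package:

```python
# Sketch of the TypedDict approach; field names are made up for illustration.
try:
    from typing import TypedDict  # Python 3.8+
except ImportError:
    from typing_extensions import TypedDict  # older Pythons

class OidValues(TypedDict, total=False):
    # total=False: each OID is optional, but strictly typed when present.
    sysName: str
    sysUpTime: int

def multiset(values: OidValues) -> OidValues:
    # Simplified stand-in: set each OID on the device, return what was written.
    return values

result = multiset({'sysName': 'router1', 'sysUpTime': 42})
# The checker now knows result['sysName'] is a str and result['sysUpTime']
# is an int - no isinstance check or Any needed.
```

The trade-off is that every supported OID must be declared up front, but in exchange each value keeps a precise type across the call.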
The problem isn't process or even Wrapper. The problem is that the list type is considered a homogeneous container type.
List[X] means every element of the list is an instance of the same X.
Even this simplified version of your example doesn't type check:
from typing import Union, List
TSupportedTypes = Union[int, str]
processed: List[TSupportedTypes] = [1, 'foo']
# Is it possible to make this work without an "isinstance" check?
processed[1].startswith('f')
You can see that for the particular value in this program, processed[1].startswith is safe but if you look at only the types - which is what mypy does - then there is no way to know this. According to the types processed[1] could be either a str or an int.
Thanks for replying to this after 2 years. And with 2 more years of typing experience, I now have gathered enough experience to know that I can accept your answer ;)
|
STACK_EXCHANGE
|
HAXE DOCUMENTATION PDF
The Haxe Standard Library provides common purpose tools without trying to . Introduction to the Haxe Toolkit. The Haxe Toolkit is a powerful open source. Building Haxe from source. Obtaining the source. The Haxe compiler sources .
Editors and IDEs
Map type supporting enum value keys haxe. However, the git submodules are not included, so you will have to manually place the source code of submodules into appropriate sub-folders.
The Haxe programming language is a high level strictly typed programming language which is used by the Haxe compiler to produce cross-platform native code. For our exemplary usage we chose a very simple Haxe library called “random”. All libraries are free Every month, more than a thousand developers use Haxelib to find, share, and reuse code — and assemble it in powerful new ways. Work with HTTP requests and responses neko.
The dependencies can be easily installed by Homebrew. The Haxe Programming Language. Work with HTTP requests and responses php.
Feel free to contact us for any inquiry regarding Haxe usage. Tools for interacting with networks and running servers cpp. Contribute to this page.
That’s why we continued improving the documentation tool. A unit-test framework haxe. Stack data structure which is optimized on static targets haxe. Introduction to the Haxe Language: The Haxe programming language is a very modern, high-level programming language.
Read and modify directories; obtain information on files and directories sys. Lib Basic interactions with the Flash platform flash. The following examples demonstrate what you can tweak using the default theme settings. If you want to start hacking the Haxe compiler, it is better to clone manually and use the Makefile. Enjoy Haxe; it is great! Shortcuts for alert, eval and debugger js. Thread API, debugger, profiler etc.
The Haxe Compiler is very efficient and can compile thousands of classes in seconds. Byte operations on native representations haxe.
Dox 1.1 released, our documentation tool
Execute native commands; interact with stdin, stdout and stderr; various other native operations sys. The Haxe Foundation offers several support tiers to help with your organization’s technical challenges. Developers can publish native apps and games to every major platform without hassle.
With Haxeyou can easily build cross-platform tools targeting all the mainstream platforms natively. If you are looking for Support or technical partnership, the Haxe Foundation provides such services.
Responsive Support Get support directly from the Haxe team. We will learn more about the haxelib command in Using Haxelib.
Editors and IDEs – Haxe – The Cross-platform Toolkit
But we want a custom theme Yes, we wanted that too! The Haxe Foundation After years of open source development, the Haxe Foundation was created to fund long term Haxe development and provide support to companies using Haxe.
As such, the language easily adapts the native behaviours of the different platforms you have targeted in your development project. The std directory itself contains a few top-level classes such as Array, Map or String which can be used on all targets. As always, feel free to open a ticket on the GitHub for suggestions or bugs. Each Haxe target has a distinct sub-directory containing target-specific APIs.
The official haxe API documentation uses a custom theme too. Various extensions to String Type:
|
OPCFW_CODE
|
Add IEEE754 files to Visual Studio
Also adds a check for _MSC_VER to disable the FENV logic for versions prior to 2013. Older versions don't have the fenv.h header file that is required for this to work properly.
This should fix the issue seen in #946.
Thanks for taking care of this :)
Coverage remained the same at 99.909% when pulling 9500d53f6c7e9e6a7a14bcf006ec8911d24268a3 on Andne:vs2005 into fc9317df1afb0094a44e4bf216bbfbb60a0d57e9 on cpputest:master.
@arstrube The build looks good now, but for some reason VS2015 (v140) has a couple failures. Any thoughts on this? Should we try to fix them or just disable the plugin completely on Visual C++ for now?
I've also noticed that since we run the tests as part of the build step normally, it's causing the failures to show as build failures instead of failed tests. Should maybe look at what it would take to cleanup this behavior as well. Not really specific to this problem, just a general build cleanup issue.
Hi Andy, this looks indeed good (except for the one point noted above). The failures are due to the fact that compilers don't seem to implement some of the checks consistently. FE_INVALID is one of those. It doesn't mean that it's not working. We have several options here:
Remove FE_INVALID from all tests.
Use #ifdef in some way to exempt MS from these tests.
Set CPPUTEST_FENV_IS_WORKING_PROPERLY to 0 for MS (no tests will be run).
@basvodde what do you think?
Coverage remained the same at 99.909% when pulling bf58b42bc1ec75ad0a3cac037219dc42a7277c1f on Andne:vs2005 into fc9317df1afb0094a44e4bf216bbfbb60a0d57e9 on cpputest:master.
I think I have IEEE754PluginTest_c.c cleaned up, any thoughts on how to handle VS2013 and VS2015? They can pass some of the tests but a couple of them still fail.
Coverage remained the same at 99.909% when pulling 7d99131ca06c96f799ef296629d3b31ffeb3bc83 on Andne:vs2005 into fc9317df1afb0094a44e4bf216bbfbb60a0d57e9 on cpputest:master.
I think both failures are due to the invalid flag. I see the following options:
disable the tests altogether (not a good idea since most pass)
remove tests that involve invalid (get rid of math.h as well)
#ifdef the invalid tests away for Msvc
I slightly prefer the second option over the third. Invalid will still work on those systems that pick it up, but it won't be documented as working by any tests. I would like to see sqrt() and math.h gone.
oooh. But just realized if we act on it and don't test it we will decrease coverage :-((
|
GITHUB_ARCHIVE
|
Violations of Assumptions (11.9)
1. random samples, interval or ratio scores
You can violate the random samples assumption. If you replicate the study, you are
more likely to be able to generalize the findings. The data must come from
independent samples (there are other methods for dependent samples) and must be
an interval or ratio variable. You cannot use ANOVA with nominal or ordinal data.
2. normal distribution
ANOVA is robust to violations of this assumption. It is important that each sample is
from the same shape distribution but they can all be skewed or rectangular, etc.
3. homogeneity of variance
ANOVA is robust to this assumption but only if the sample sizes are the same. Also,
the largest variance should be no more than 4 times the smallest.
Levene’s test of Homogeneity of Variance
Levene’s test, seen in Chapter 7 (and covered in lab), takes the deviations from the
group mean and tests whether those groups of deviations are significantly
different. This test can be extended to more than 2 groups, and the same procedure applies.
Alternatives to Levene
Box (1954) indicates that the degrees of freedom can be adjusted to deal with
violations. The most conservative test would be to compare Fobt to Fcrit(1, n-1). If you
still have a significant result, then violations of assumptions are irrelevant. However, this
is very conservative.
The Welch Procedure
The Welch procedure has the advantage of power (lost with Box) and protection against
Type I error. It should be used whenever a Levene test indicates heterogeneity of
variance and especially if you have unequal sample sizes.
An alternative way to deal with violations of assumptions is to transform data.
Transform data to a form that yields homogeneous variance.
Logarithmic Transformation
Useful when the standard deviation is proportional to the mean and when the data are
positively skewed. XNew = log X or XNew = ln X
The logarithmic transformation makes smaller numbers transform less and larger
numbers more (positive skewed data affected more by the transformation). If SD is
proportional (smaller means have smaller SD) then the transformation will reduce the
SD of the larger means more than the smaller means (making them more equal). You
can use log base 10 or log e. It doesn’t matter. If you have data points that are
negative or near zero you can add a constant to make them positive before doing the transformation.
Square Root Transformation
When the mean is proportional to the variance and not the SD, you can use a square
root transformation to stabilize the variance. XNew = √X
Reciprocal Transformation
When you have very large outliers in the positive tail, a reciprocal transformation can
reduce the influence of these. XNew = 1/X
To decrease the effect of large tails in a distribution (rectangular), a sample may be
trimmed by removing 5% of the extreme scores in each tail.
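A quick numeric sketch (illustrative data, plain Python) of why the log transform equalizes spread when the SD is proportional to the mean:

```python
import math

# Illustrative data: the second group has 10x the mean and 10x the SD.
group_small = [1, 2, 3, 4]
group_large = [10, 20, 30, 40]

def sd(xs):
    # Sample standard deviation (n - 1 denominator).
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

# The log transform shrinks large values more than small ones,
# pulling the two groups' spreads toward each other.
log_small = [math.log10(x) for x in group_small]
log_large = [math.log10(x) for x in group_large]

ratio_raw = sd(group_large) / sd(group_small)   # about 10: heterogeneous
ratio_log = sd(log_large) / sd(log_small)       # about 1: homogeneous
```

After the transformation the two groups have essentially equal standard deviations, which is the homogeneity of variance ANOVA wants.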
Magnitude of Experimental Effect (11.12)
Eta Squared (η²)
Eta Squared is related to r²: η² = SStreat / SStotal
Eta Squared is subject to sample bias.
Omega Squared (ω²) is a less biased alternative:
ω² = (SStreat - (k - 1)MSerror) / (SStotal + MSerror)
That’s it. I am not going to cover Power for ANOVA. In your future research
careers, if you need to calculate power for ANOVA, hire a statistical consultant!
|
OPCFW_CODE
|
lots of browser-related stuff today from the Chrome team. The two posts I've included about TablesNG and RenderingNG are a really accessible look into the improvements being shipped.
I'm not a browser engineer, but one of the great benefits of being part of the CSSWG has been to listen to browser engineers explaining why something is difficult to implement. Many of the annoying inconsistencies, or the lack of support of some obviously useful feature, come down to them just being incredibly hard to fix or implement. Big projects such as RenderingNG may appear to slow progress for a while, as things get held up waiting for the big change to ship. Now it is all landing I think we are going to see a whole bunch of really annoying things fixed. Also, these changes allow future work on features we all want to see (such as Container Queries) to happen.
Do give these pieces a read, if nothing else I have found that an appreciation of this stuff makes the web platform somewhat less frustrating.
I mentioned TablesNG last week, with a link to a doc describing the changes. In this post you can read more about them, and see some examples.
TablesNG is just one part of the huge effort to build a new rendering architecture for Chrome. I'm very excited about this, as it enables all kinds of things to be fixed, as well as paving the way for future exciting features. This post introduces the various parts of RenderingNG, and there will be future posts diving into these in more depth.
On the subject of browser engineering, read this thread, where Ian Kilpatrick explains more about why it was easier to land the work on TablesNG as a big change, rather than small ones.
I had an enjoyable chat with Miriam Suzanne and Brian Kardell in this episode of Igalia Chats. We talked about the different people who work on standards.
Create a tooltip using the mask-image property. Useful if you want to make a tooltip, but the same technique could be used for other shapes.
How best to serve high quality images is something I get asked about, and never have a really good answer for. This post contains all the details, with examples.
I've just found this site, via this article. The rest of the posts are worth a look too. I'm a big fan of taking cool examples that people have built and explaining how they work. With a few notable exceptions, the people who come up with the most amazing and creative ideas, may not be the best at explaining them. So these types of articles are great, as they make doing cool stuff more accessible to folk who would like to have a go themselves.
This is just incredibly cool, useful, and fun to play around with. Generate themes for your site, the Curves editor lets you edit color scales across channels for HSL, HSV, LAB, RGB, and LCH.
CSS Layout News Newsletter
|
OPCFW_CODE
|
Use of is Keyword in C# to declare variable inline
I am currently working on a C# project using v4.6.2 and Visual Studio just suggested a code change to the code below.
My question is, I have never seen the is keyword used to create a new variable instance in this manner.
The original code was:
var authInfo = inputs.FirstOrDefault(i =>
typeof(SoapAuthenticationBase).IsAssignableFrom(i.GetType()));
if (authInfo is SoapAuthenticationBase)
Visual Studio 2017 suggested this:
if (inputs.FirstOrDefault(i =>
typeof(SoapAuthBase).IsAssignableFrom(i.GetType()))
is SoapAuthBase authenticationContract)
I checked Microsoft's documentation on the 'is' keyword and found nothing that explained this syntax.
What version of C# was 'is' added in this way?
It's Pattern matching with is which is C#7
The original code also declares a variable using the is keyword, is that a typo? Nevermind, already edited.
see here : https://blogs.msdn.microsoft.com/dotnet/2017/03/09/new-features-in-c-7-0/
@TimSchmelter: "I don't understand the use case of it" - it saves a line under the assumption that you actually want to do something with the variable cast to the type it was checked for.
C# 7.0
Your question is tagged C# 4.0, the Visual Studio 2017 suggestion is a C# 7.0 feature. If you cannot use C# 7.0 then ignore the suggestion.
@TimSchmelter Yes except we are using C# 4.6.2, so why would Visual Studio let a C# 7 syntax work in C# 6?
.NET 4.6.2 != C# 4.6.2. .NET 4.6.2 is the Framework version, i.e. what is available in the Framework like System.IO. C# 7.0 is the language version and you need the right compiler. .NET versions and C# versions do not have to match. For example, in one project I have .NET 4.5.1 but compile with C# 7.2
@BenHoffman if you use VS 2017 for new projects the default language version is "latest major version" which is C# 7, not C# 6
My question is why would Visual Studio suggest checking against a different type (SoapAuthenticationBase becomes SoapAuthBase) and also why it would refactor the variable to a different name (authInfo becomes authenticationContract)
This feature is called pattern matching and it was introduced in the C# language in version 7. In your example it's not very clear, but consider the following canonical example of Equals overriding:
public override bool Equals(object obj)
{
    if (obj is Foo)
    {
        return Equals((Foo)obj);
    }
    return false;
}
This is essentially wasteful because you are checking the type twice: once to see if it's in fact a Foo, and then again when performing the cast. It seems unnecessarily verbose.
Pattern matching allows a much more concise syntax:
public override bool Equals(object obj)
{
    if (obj is Foo foo)
    {
        return Equals(foo);
    }
    return false;
}
You can read more on this feature here.
The first case is slightly contrived, in practice you would use as and check for null instead, but the pattern matching is an improvement either way.
@richzilla and what if Foo is a value type?
In your first snippet, the is keyword checks whether the thing on the left is an instance of the type on the right. is returns a boolean; the FirstOrDefault call is returning either null or an instance of SoapAuthenticationBase, which is being assigned to your variable.
As @Ashley Medway pointed out, the second code snippet is actually an example of C# pattern matching. authenticationContract is an instance of SoapAuthBase that will only have a value if the thing on the left is an instance of it. If not, the entire statement will return false.
That said, personally i find your original code more readable. I would be inclined to ignore VS, and let the compiler sort it all out later.
This isn't strictly correct. You are describing how an older is check works, like if (variable is type), but the way it's being used in the question is pattern matching.
"The second code snippet is cutting out the intermediate variable assignment" Then what is authenticationContract in the second code snippet if not a new variable?
It isn't so much that it isn't a new variable so much as how it is scoped. When pattern matching like this, authenticationContract is scoped to the if(). Also, in cases where it is false, it won't create the variable whereas the prior code will create a null instance of authInfo.
While the suggestion is creating a new variable, it is scoped to the if (i.e. much narrower).
var authInfo = inputs.FirstOrDefault(i =>
    typeof(SoapAuthenticationBase).IsAssignableFrom(i.GetType()));
if (authInfo is SoapAuthenticationBase)
{
    // authInfo exists
}
// authInfo exists
It's basically suggesting you drop the existing authInfo instance you're declaring.
if (inputs.FirstOrDefault(i =>
        typeof(SoapAuthBase).IsAssignableFrom(i.GetType()))
    is SoapAuthBase authenticationContract)
{
    // authenticationContract exists
}
// authenticationContract does not exist
"It isn't creating a new variable" Yes, it is creating a new variable. It allows you to drop another variable, so the number of variables before and after the change is the same, because you're moving the variable declaration to a different line, but it's still there.
Okay, I should be clearer. It technically is creating the variable authenticationContract, but this var is scoped to the if whereas the prior authInfo variable is scoped to the method (guessing on this one since it's a small snippet).
|
STACK_EXCHANGE
|
There are many benefits to software testing, but the primary purpose is to identify bugs, defects, or unsatisfied requirements before a customer does. In addition to helping your team produce a better product, catching these bugs can also eliminate costly and time-intensive fixes that could reduce customer trust and hurt your bottom line.
To formalize the software testing process, established best practices call for test runs made up of test cases, which are structured actions performed on software to verify it is working as defined.
Although your team likely has plenty of experience writing test cases, they may not know that there are two main ways of writing them: positive and negative.
What is the difference between positive and negative testing in software testing? And how can each approach benefit your software development?
This article will explore both types, provide some examples, and demonstrate how the right test management tool can help your team leverage both successfully.
What is positive software testing?
Positive software testing is a way of structuring test cases that assumes the expected result—such as a function, output, or data—will be produced.
For example, positive test cases could be written like this:
If the “view reports” button is selected, all existing reports are displayed.
If a field in the tool requires a certain type of attachment, the tester will provide an acceptable file.
If the user provides the correct username and password, they are granted access to the system.
In other words, positive software testing is a quick and straightforward way to confirm that the original requirements defined during the design stage are represented in the developed product. This approach is another way to take the perspective of a potential user, providing the system with inputs and actions that it would expect to receive.
What is negative software testing?
Negative software testing is another way to draft test cases to purposefully check the system for unexpected conditions or results, such as:
Providing incorrect values.
Performing incorrect steps to see how the system responds.
Moving through a workflow without supplying the correct or required information.
Negative test cases could be written like this:
If a field is only meant to accept numeric values—such as a phone number or dollar amount—the tester will provide letters to see if it accepts the entry.
When incorrect credentials are provided, the system does not allow access.
Most software developers and quality assurance (QA) testers know how software is supposed to work, so negative testing intends to push the limits of the software. Negative tests can also replicate what a new user might experience, confirming that the checks in the workflow work as expected.
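As a sketch, the two styles could look like this in Python test functions, using a hypothetical login check as the system under test (the function and its credentials are invented for illustration):

```python
# Hypothetical system under test: a login check.
def login(username: str, password: str) -> bool:
    return username == "admin" and password == "s3cret"

# Positive test: expected inputs produce the expected result.
def test_login_accepts_valid_credentials():
    assert login("admin", "s3cret") is True

# Negative tests: unexpected or invalid inputs are rejected.
def test_login_rejects_wrong_password():
    assert login("admin", "wrong") is False

def test_login_rejects_empty_input():
    assert login("", "") is False
```

The positive case confirms the requirement; the two negative cases probe how the system behaves when given input it should not accept.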
Also, testers can run both positive and negative test cases to provide an extra layer of validation to software performance. To introduce more efficiency and lower the administrative effort of this advanced blend of software testing, teams can use a modern test management tool like TestMonitor to set up a test plan, easily store pre-established test cases, and track and record the results.
|
OPCFW_CODE
|
How is Eclipse Help view implemented? This view is a rich collection of interlinked documents that provide the usual functionality of embedding images and navigating between different pages. In addition, it supports live help actions – hyperlinks that can call Eclipse actions (Java code). How is this implemented?
In a usual setup, the HTML content is stored in the file system. If the requested HTML page contains embedded images, these are stored as separate files, and requested separately by the browser. However, the main contents of Eclipse Help view come from one single file – the org.eclipse.platform.doc.user.jar in the plugins folder. This jar contains around one thousand files (in Eclipse 3.4), including HTML and PNG images. How do these get displayed in the Eclipse Help view?
A straightforward approach would be to “explode” the contents of this jar at runtime (the first time the Help view is activated) and then link directly to the files in the filesystem. This is simple to implement, but would require some bookkeeping to clean the files on closing the workspace. Also, you’re going to pay the performance penalty the first time the files need to be extracted from the archive and written to the disk. Instead, Eclipse 3.4 uses an embedded Jetty HTTP server with custom servlets to serve the content of the Help view.
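The same idea, serving files straight out of an archive over a loopback HTTP server with an auto-selected port, can be sketched in a few lines of Python (an illustration of the approach, not the actual Eclipse/Jetty code):

```python
# Sketch: serve files directly from an in-memory archive over a loopback
# HTTP server, letting the OS auto-select a free port by binding to port 0.
import io
import zipfile
from http.server import BaseHTTPRequestHandler, HTTPServer

def make_archive() -> bytes:
    # Stand-in for org.eclipse.platform.doc.user.jar.
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("help/index.html", "<html><body>Help</body></html>")
    return buf.getvalue()

class ArchiveHandler(BaseHTTPRequestHandler):
    archive = zipfile.ZipFile(io.BytesIO(make_archive()))

    def do_GET(self):
        name = self.path.lstrip("/")
        try:
            data = self.archive.read(name)  # no files written to disk
        except KeyError:
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def log_message(self, fmt, *args):
        pass  # keep request logging quiet

# Port 0 asks the OS for any free port, like Jetty's auto-selection below.
server = HTTPServer(("127.0.0.1", 0), ArchiveHandler)
port = server.server_address[1]
```

Nothing ever touches the filesystem: each request is answered by reading the entry out of the archive on demand, which is exactly what avoids the "explode the jar" bookkeeping described above.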
While the implementation of the Help view lives deep inside the internal packages of the org.eclipse.help.webapp plugin, this functionality can be recreated using public extension points (thus ensuring upgradability to future Eclipse versions). The steps below describe the general setup of an embedded Help HTTP server.
First, you need to specify three extension points in your plugin.xml. For the complete example see the plugin.xml of the org.eclipse.help.webapp plugin. The extension points are:
Jumping ahead a little (the complete structure will be explained later), the URLs requested from the HTTP server will look like this: http://127.0.0.1:12345/<primaryID>/<secondaryID>/path/to/resource
The primaryID is the HTTP context ID specified in the first extension point above; it is also used in the second and third extension points. The secondaryID allows mapping your content via different servlets (see below). In the simplest example (where all content comes from the same archive), you will have only one servlet, specified in the third extension point. The last identification string is specified in the second extension point – it is the other.info filter on the service selector. This string must be the same as the one set during the initialization of the Jetty server (see below).
Your MANIFEST.MF will need two changes. The first is in the Import-Package section, which should have the following entries:
The second is in the Require-Bundle section, which should have the following entries:
These sections will make sure that you will be able to use the relevant classes in your custom Jetty server and servlets.
The next step is the class that controls the lifecycle of the embedded Jetty server. This is the 127.0.0.1:12345 part in the URL above – a local HTTP server listening on port 12345. Since this specific port may be taken by another application, we are going to ask Jetty to auto-select an available port. The complete implementation of the Jetty configurator can be found in the org.eclipse.help.internal.server.WebappServer class, and the main steps are:
Make sure that you’re running only one instance of the HTTP server (instead of creating a new instance for each HTTP request).
The http.port parameter should be set to 0 to allow Jetty to auto-select an available port.
The context.path parameter should be set to the HTTP context ID (primaryID in the example above).
The other.info parameter should be set to the same value as the service selector filter in the second extension point in the plugin.xml.
INFO / DEBUG messages of Jetty should be suppressed.
To check that Jetty has successfully started, get the org.eclipse.equinox.http.registry bundle and check that its state is RESOLVED.
To get the Jetty port (for creating the URLs), get the service reference for org.osgi.service.http.HttpService class and (other.info=yourServiceSelectorFilter) filter. Then, get the http.port property and cast it to Integer.
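The port-selection step above can be sketched generically: binding to port 0 asks the OS for any free port, and you then query the server for the port actually chosen. This is a Python illustration of the same trick, not the Eclipse/Jetty OSGi code itself (the handler and path are made up for the example).

```python
# Sketch of the "port 0 = auto-select" trick used by the embedded server.
import http.server
import threading
import urllib.request

class EchoHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Echo the requested path back, standing in for real content.
        body = self.path.encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Suppress INFO-style request logging, as the article suggests.
        pass

# Port 0 mirrors Jetty's http.port=0: the OS picks a free port.
server = http.server.HTTPServer(("127.0.0.1", 0), EchoHandler)
port = server.server_address[1]  # the port that was actually chosen
threading.Thread(target=server.serve_forever, daemon=True).start()

reply = urllib.request.urlopen(f"http://127.0.0.1:{port}/help/topic").read()
server.shutdown()
print(port > 0, reply == b"/help/topic")  # → True True
```

The single shared server instance (rather than one per request) corresponds to the first step in the list above.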
The next step is to create a custom servlet that will intercept the relevant HTTP requests and load the content from your archive. The complete (and very simple) example can be found in the org.eclipse.help.internal.webapp.servlet.ContentServlet class (registered with the third extension point above). In its init() method it creates a custom connector instance (more info below) and uses it in the doGet() and doPost() methods.
The last piece is the connector itself. It analyzes the incoming request, maps it to the corresponding resource and then transfers the resource contents to the response output stream. The beauty of this connector is that the content can come from anywhere. It can be a local file, a file in an archive, or it can be dynamically generated (corresponding to the requested resource, of course).
While EclipseConnector looks at a variety of sources to get the content and provides a custom error page implementation, the logic is very simple. In a simple example where all the content is coming from one archive, you create a URLClassLoader pointing to that jar (this should be done in the constructor to make the subsequent requests faster) and use the getResourceAsStream passing the trailing portion of the URL (stripping away the host, port, primary ID and secondary ID parts). If the returned InputStream is null, you can return a custom error page.
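The archive-backed connector idea can be sketched in Python, with `zipfile` standing in for Java's `URLClassLoader` over a jar: resolve the trailing part of the URL to an archive entry, and fall back to a custom error page when the entry is missing. The archive entry names here are made up for the example.

```python
# Sketch of the connector: serve resources from a single archive, with a
# custom "error page" when a resource is missing. A jar is just a zip file.
import io
import zipfile

# Build a tiny archive in memory for the example.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("help/intro.html", "<html>Welcome</html>")

# Open the archive once (like creating the URLClassLoader in the
# constructor) so subsequent requests are fast.
archive = zipfile.ZipFile(buf)

def get_resource(url_path: str) -> bytes:
    # Strip the primary ID and secondary ID; keep only the trailing path.
    name = url_path.split("/", 3)[-1] if url_path.startswith("/") else url_path
    try:
        return archive.read(name)
    except KeyError:
        # Equivalent of getResourceAsStream returning null.
        return b"<html>404: custom error page</html>"

print(get_resource("/ctx/svc/help/intro.html"))  # → b'<html>Welcome</html>'
```

Because the lookup is behind one function, the content could just as easily come from the filesystem or be generated dynamically, which is exactly the flexibility the connector provides.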
While the above may sound like overkill, it is quite useful and much more flexible than shipping a huge collection of separate files. With a custom HTTP server and a servlet you can control the contents of the error page, fetch content from multiple locations, or even generate content dynamically from a database or another source.
Hi guys, I got a PYNQ-Z1 recently and I'm trying to familiarise myself with the platform. I'm making a very simple blinky project: blink an LED and send out some strings on UART0. My block diagram contains just a single PS block, as below (Vivado 2021.1).
I then created an application in Vitis and copied and pasted the same piece of code I used to bring up another Zynq board.
#include "xparameters.h"
#include "xgpiops.h"
#include "xil_printf.h"
#include "sleep.h"
static XGpioPs psGpioInstancePtr;
int main(void) {
    xil_printf("Lab1 application started.... \r\n");
    XGpioPs_Config *GpioConfigPtr = XGpioPs_LookupConfig(XPAR_PS7_GPIO_0_DEVICE_ID);
    if (GpioConfigPtr == NULL) return XST_FAILURE;
    int xStatus = XGpioPs_CfgInitialize(&psGpioInstancePtr, GpioConfigPtr, GpioConfigPtr->BaseAddr);
    if (xStatus != XST_SUCCESS) xil_printf("PS GPIO INIT FAILED\r\n");
    XGpioPs_SetDirectionPin(&psGpioInstancePtr, 54, 1);   /* EMIO pin 54 = LED, as output */
    XGpioPs_SetOutputEnablePin(&psGpioInstancePtr, 54, 1);
    while (1) {
        XGpioPs_WritePin(&psGpioInstancePtr, 54, 1);
        sleep(1);
        XGpioPs_WritePin(&psGpioInstancePtr, 54, 0);
        sleep(1);
    }
}
The board was programmed successfully and I could see the LED blinking. However, UART is not working: I'm just seeing garbage strings on the terminal. There is a delay between the string outputs, as expected.
There must be some configuration that I’m missing in the firmware. Any idea? I have tried using the “print” function but it behaved the same.
And just to confirm when I booted the platform from the SD card with the standard Pynq image, UART and Linux worked fine.
It's not really a PYNQ-related question; asking in a related forum might get you a proper quick answer. Though, just sharing a thought: if you are receiving something on the UART console, you might try changing the UART settings, like the baud rate or other parameters.
Hi, I will try the Xilinx forum too. But I feel like it may be something related to the PS CLK source of Pynq. Pynq uses 50MHz while my other board uses 33MHz.
The baud rate I used in my console was 115200, but I have tried all other baud rate options too, no luck.
What voltage level did you set in the Zynq bank?
What UART USB adapter did you use?
Any additional settings on the UART0 MIO 14:15 pins?
I’m using LVCMOS 3.3V for the banks and UART.
I'm using the 2 UART0 pins (pins 14 & 15), which I presume are connected to the on-board FT2232 adapter. Unfortunately, details of the connection are hidden from the schematics. So basically I'm trying to get UART output from the micro-USB port on the board.
Remember, the PYNQ-Z1's USB power might not be enough if the board is not powered from the DC jack.
This is something users commonly encounter on many low-end FPGA boards.
5V 500mA, minus cable losses and DC-DC converter losses.
Try DC jack power first, and plug in the USB cable later.
Then hot-reset via the button and see if this is the case.
It is also worth mentioning that the SD card could be a bad brand.
Try lowering the SD card SDIO speed to 20 MHz or less and see.
So I changed the jumper and powered the board with an external 12V brick supply. Still doesn't work.
I'm programming the Zynq through Vitis, so I'm not using the SD card here. I'm running out of ideas, so please let me know if there's something else I could try. Thanks.
Can you simply use the HelloWorld example project inside Vitis?
This will not require you to modify any code; simply build and run it via JTAG.
If you have no idea how to do so, then you should redirect your question to the Xilinx forum.
There is a chance the board is faulty (shipping damage or some other unexpected case), but the odds are low.
I am helping out here because it's a PYNQ-Z1 board, but as @mizan noted, the content is outside the scope of PYNQ's Python environment.
Sure. Thanks for the help. I will put my question to the Xilinx forum. Sorry I didn’t realise this forum was mainly for Python and the software environment.
Openpyxl not showing second graph
EDIT: Solved, solution in answer below.
I have a graph created with openpyxl that has two y axes sharing a DateAxis. Although the first selection of data is showing on the graph, the second isn't. There's also a strange gray line at the bottom of the graph that wasn't there before. I think it's just a small error I'm missing somewhere, but I can't see where, especially since I believe my range of cells is defined correctly. What could I be doing wrong?
import openpyxl
from openpyxl import Workbook, chart
from openpyxl.chart import LineChart, Reference, Series
from openpyxl.chart.axis import DateAxis
from datetime import date, datetime, timedelta, time
ws2 = wb['sheet2']
dates = chart.Reference(ws2, min_col=1, min_row=2, max_row=sheet.max_row)
vBat = chart.Reference(ws2, min_col=2, min_row=1, max_col=2, max_row=sheet.max_row)
qBat = chart.Reference(ws2, min_col=3, min_row=1, max_col=3)
c1 = chart.LineChart()
c1.title = "SLA Discharge - 5.5A: V_BAT"
c1.style = 12
c1.x_axis.majorTimeUnit = "days"
c1.x_axis = chart.axis.DateAxis()
c1.x_axis.title = "Time"
c1.x_axis.crosses = "min"
c1.x_axis.majorTickMark = "out"
c1.x_axis.number_format = 'd-HH-MM-SS'
c1.add_data(vBat, titles_from_data=True)
c1.set_categories(dates)
c1.y_axis.title = "Battery Voltage"
c1.y_axis.crossAx = 500
c1.y_axis.majorGridlines = None
c2 = chart.LineChart()
c2.x_axis.axId = 500 # same as c1
c2.add_data(qBat, titles_from_data=True, from_rows=True)
c2.set_categories(dates)
c2.y_axis.axId = 200
c2.y_axis.title = "Qbat Percentage"
c2.y_axis.crossAx = 500
c1.y_axis.crosses = "max"
c1 += c2
s1 = c1.series[0]
s1.graphicalProperties.line.solidFill = "BE4B48"
s1.graphicalProperties.line.width = 25000 # width in EMUs.
s1.smooth = True # Make the line smooth
s2 = c2.series[0]
s2.graphicalProperties.line.solidFill = "48BBBE"
s2.graphicalProperties.line.width = 25000 # width in EMUs.
s2.smooth = True # Make the line smooth
ws2.add_chart(c1, "D5")
Interestingly enough,
vBat = chart.Reference(ws2, min_col=2, min_row=1, max_col=2, max_row=sheet.max_row)
is fine. However, doing the same thing to qBat with:
qBat = chart.Reference(ws2, min_col=3, min_row=1, max_col=3, max_row=sheet.max_row)
"corrupts" the workbook and displays an error message upon opening and doesn't print any chart. Removing max_row=sheet.max_row from both lines produces an incorrect DateAxis where there are only two points and they're both the first two values in the time column.
Does this work without using a DateAxis? Combined charts are tricky because they rely on the trick of sharing axes.
@CharlieClark I tried creating the chart without a DateAxis and had issues with the axes not displaying correctly. I understand that DateAxis doesn't provide much real benefit, but it's been the best shot I've had so far at properly creating the graph.
First, in c2.add_data(qBat, titles_from_data=True, from_rows=True), remove from_rows=True.
Then, change qBat to:
qBat = chart.Reference(ws2, min_col=3, min_row=1, max_col=3, max_row=sheet.max_row)
Running a 32-bit JVM is not supported in this platform in Intellij
While compiling my project, I'm getting the error 'Running a 32-bit JVM is not supported on this platform'.
I get this error when I change my JDK home path to the 64-bit JDK under Project Structure -> Platform Settings -> SDKs.
Current Solution :
Now if I change my SDK back to the 32-bit JDK, it works fine. But when I have my SDK as the 64-bit JDK, I get the error 'Running a 32-bit JVM is not supported on this platform'.
My Question
Instead of changing my JDK, how can I get out of this problem?
More than that, my question is about "Running a 32-bit JVM is not supported". I'm not actually running any JVM here in IntelliJ; I'm just compiling my Java source to get .class files. What does the word "running" mean here?
Missing an important piece of information - is your operating system 32- or 64-bit?
@gknicker Mine is Mac OSX, 64-bit
Thank you. If I understand correctly, you have two distinct JDKs installed - a 64-bit JDK and a 32-bit JDK. Is that right?
@gknicker ya, I have 2 distinct JDKs installed
Please type this on a command line and tell me the results: java -d64 -version. Also, please read http://stackoverflow.com/questions/15827430/running-a-64-bit-jvm-is-not-supported-on-this-platform-with-java-d64-option-o and http://stackoverflow.com/questions/9512603/installing-32-bit-jvm-on-64-bit-linux so we can determine whether those posts help your issue.
@gknicker oh, I'm getting "Running a 64-bit JVM is not supported on this platform." with the 32-bit SDK (java -d64 -version) and "Running a 32-bit JVM is not supported on this platform." with the 64-bit SDK (java -d32 -version)...
@gknicker Thanks for your time. I already read those posts before posting this question. Anyway for now, I'll use 32-bit JDK itself. Let see whether can I get any insights on this. Thanks for your time
We have determined that your operating system is MAC OSX 64-bit, and that you have two JDKs installed, both the 32-bit and the 64-bit version.
Other threads on this topic indicate problematic behavior when these two JDKs are installed on the same machine. When you run java command line with -d32 you get the error "Running 64-bit JVM is not supported on this platform" and when you run java command line with -d64 you get the error "Running 32-bit JVM is not supported on this platform".
Therefore, I highly recommend you uninstall the 32-bit JDK, then re-install the 64-bit JDK.
"I highly recommend you uninstall the 32-bit JDK, then re-install the 64-bit JDK." -- No, I need the 32-bit JDK for DCEVM purposes on Mac OS.
I am an Associate Professor in the Department of Computer
Science at the University of Auckland in New Zealand.
I graduated from Essex University and returned to take
an M.Sc. in Intelligent Knowledge Based Systems in the Department
of Computer Science. After obtaining my M.Sc. I went to live in
New York for a year and then returned to the UK to study for a Ph.D.
in the Department
of Computer Science at Liverpool University. In collaboration
with the Knowledge Engineering Group at Unilever
Research I developed a knowledge analysis methodology.
After my PhD I started work at the University
of Salford and was a Lecturer, Senior Lecturer,
and briefly promoted to Reader in Computer Science. Whilst there I developed an
expert system called EMMY to predict the cost of housing maintenance
for Housing Associations. This system has subsequently gone on to be
sold commercially by Engineering
Technology Ltd. I was also involved in the development of the
Client Centred Approach. This is a development methodology for
expert systems that supports rapid prototyping within the Waterfall model.
From my first-hand experience of the difficulty of developing
knowledge-based expert systems I became interested in Case-Based
Reasoning (CBR) and am now one of the world's most active researchers in
this discipline running AI-CBR
the Internet site for CBR. This site is now mothballed and has been replaced by the community curated CBR Wiki.
I became interested in AI on the web and worked with an Australian firm to implement a CBR system on
the web. A paper describing this project won the "Distinguished
Paper Award" at IJCAI-99. I also worked for the UK
government's Cabinet Office on the development of the INFOSHOP
a web-based information support system for local government. On moving to New Zealand I collaborated with Prof. Emilia Mendes
on a study of software estimation techniques and in particular the
application of CBR to effort estimation. I am currently
researching memory-based approaches to Game AI in real time strategy
games and Texas Hold'em poker. More information on this research
can be found at the Game AI website.
I spend a considerable amount of time publicising computer science to
the general public through the writing of popular science books and
other activities. I am the CS department's blogger and get between 300 to 600 page views per day (this is a mirror of my personal blog).
I founded and moderate a LinkedIn group for Nao Robot users that has
over 700 members and founded and moderate a LinkedIn group for people
with an interest in Alan Turing that has 400 plus members and growing.
By writing my book, The Universal Machine, I have developed a keen interest in the history of computing. I have been invited by Event Communications, a London-based exhibition design group, to be an historical advisor to the redevelopment of the Communications Gallery of the Russian Polytechnic Science Museum in Moscow. This is part of a $250 million (USD) refurbishment of one of Moscow's cultural landmarks.
Dr. Avijit Pal Assistant Professor
Operator theory on function space
Cowen-Douglas class operator
Reproducing Kernel Hilbert space
Multivariable Operator theory
Non-commutative function theory
Ph.D.: Indian Institute of Science, Bangalore, August 2008 - November 2014
M.Sc.: The University of Burdwan, August 2005 - July 2007
B.Sc.: Burdwan Raj College, August 2002 - July 2005
Winter 2020: Multi-Variable Calculus, IIT Bhilai
Winter 2020: Operator theory II, IIT Bhilai
Monsoon 2019: Linear algebra I, IIT Bhilai
Monsoon 2019: Real analysis, IIT Bhilai
Monsoon 2019: Operator theory, IIT Bhilai
Winter 2019: Calculus I, IIT Bhilai
Monsoon 2018: Linear algebra, IIT Bhilai
Monsoon 2018: Probability and Statistics, IIT Bhilai
Autumn 2017: Elementary number theory, IISER Kolkata
Autumn 2016: Elementary number theory, IISER Kolkata
Spring 2016: Curves and Surfaces, IISER Kolkata
Assistant Professor: Indian Institute of Technology Bhilai, Raipur, June 2018 - present
Postdoc Fellow: NBHM Postdoctoral Fellowship (August 2015 - 27th June 2018)
Postdoc Fellow: Research Associate-I, Donor: DST (15th April 2015 - 14th July 2015)
Postdoc Fellow: MODULI IRSES project (December 2014 - March 2015)
Awards and Accolades
Matrix project, Donor: DST (19th October 2020 - 18th October 2023)
SERB-funded ECR project, Donor: DST (14th March 2019 - 13th March 2022)
Selected for NBHM Postdoctoral Fellowship (2015-2018)
Selected for MODULI IRSES project Postdoctoral Fellowship (December 2014 - March 2015)
IFCAM Fellowship (2013-2014)
UGC Fellowship (2010-2013)
G. Misra and A. Pal, Curvature inequalities for operators in the Cowen-Douglas class and localization of the Wallach set, Journal d'Analyse Mathematique, 136, pp. 31-54 (2018).
A. Pal and D. Yakubovich, Infinite-dimensional features of matrices and pseudospectra, Journal of Mathematical Analysis and Applications, Volume 447, Issue 1, 1 March 2017, pp. 109-127.
G. Misra, A. Pal and C. Varughese, Contractivity and complete contractivity for finite dimensional Banach spaces, Journal of Operator Theory, Volume 82, Issue 1, Summer 2019, pp. 23-47.
The 2023 edition of our GameMaker Update video has been released, detailing all the major updates that are heading to the GameMaker software in 2023.
Let’s run through the headlines!
New Runtime (+ Beta)
The new runtime for GameMaker reworks the compiler and runtime toolchain, providing faster performance, easier debugging and improved coding. This new toolchain compiles to each platform natively, meaning you no longer have to choose between VM and YYC.
The Desktop and Web components of GameMaker’s new runtime will soon be available to a small number of closed beta participants.
Over the course of the beta period, we’ll be focusing on providing a compatibility layer between the current GMS2 runtime and the new runtime. This will allow existing games to use the new runtime with minimal changes.
When the beta phase has concluded, the new runtime will be available for everyone on the GX.games target, and on other platforms for subscribed users.
This new runtime marks the next evolution of GameMaker, and as such, you'll need a subscription to access it for other platforms.
We’ll be recruiting for the closed beta in the next few weeks.
Modding is coming to GameMaker! We’re adding an extension that will allow you to connect your games to mod.io. This is a popular website that supports a range of game mods, including simple DLC options and more sophisticated user-created levels and data.
Modding has been one of the most popular requests from users, and while we currently offer limited support for modding, we're looking to expand that support over the coming months.
New Code Editor
A beta release of the redesigned Code Editor is scheduled for release in the autumn of this year. Initially, you’ll need to enable the new Code Editor within the IDE to access it.
Here’s what you can expect:
- The UI has been redesigned and the Code Editor is now hosted within a full screen window, allowing access to objects, events, and functions within the code file.
- New Objects and New Events can now be created from within the Code Editor itself, so there’s little need to move between the Workspace and the Code Editor.
- All Syntax Highlighting and Intellisense support is being moved to a Language Server, which allows us to support more languages within the Code Editor, such as Shader Languages (GLSL, HLSL etc) with Intellisense, JSON, and XML support.
- We’re also supporting Markdown within Notes and introducing side-by-side preview support to make it easier to see how changes affect the formatting.
We’ll be Open Sourcing our Language Server for GML, and since Visual Studio Code uses the same protocol, you’ll be able to use this within Visual Studio Code as well.
Any Language Server that uses the standard protocol will be usable within GameMaker, and we’ll offer configurations that allow GameMaker to use them while you’re editing.
In last year’s update, we introduced Prefabs - GameMaker projects that contain any number of GameMaker resources, and can have defined, editable parameters.
This year, the Prefab Library will be added to GameMaker. The Prefab Library will be a new window within GameMaker that displays built-in and user-created Prefabs.
You’ll be able to drag them from the Library into Rooms or Sequences without adding the Prefab contents to your project. The compiler will then pull the required components using resource references when testing or exporting the game.
This keeps the Prefab separate from your project, and allows you to update the Prefab without having to change your project.
We’re working on moving all the IDE code into plugins, keeping a minimal Core for the IDE that maintains the file formats and serialisation, and orchestrates the plugins (that do all the real work).
Language Servers will live within the Runtime rather than the IDE, and will have a different lifecycle to the IDE.
Our Language Translations for the IDE will be released as plugins and Open Sourced on GitHub. These will be open for community contributions, and we’ll be looking to recruit volunteers to ensure they’re translated accurately. This will include both the manual and IDE for all languages.
We want to ensure that we’ve covered the majority of use cases before we can allow users to create their own plugins. We expect to have this work complete by the end of the year with a closed Beta period for invited users to create plugins.
Hot off the heels of Opera’s recent collaboration with OpenAI, we’ve been experimenting with systems that allow AI queries and results to be incorporated directly into your projects.
We're still in the very early stages, but we're excited by the support AI can offer in generating code, creating graphical placeholders, and even allowing image in-painting (regenerating or editing selected regions within an image) and out-painting (extending an image beyond its original borders).
The improvements taking place within the AI sphere are breathtaking, and we’re aiming to provide these opportunities within GM over the coming months.
We want to be clear that as GameMaker explores the potential of AI, we are 100% committed to avoiding any uses or integrations that quash individual expression or that draw on copyrighted materials. We're interested in AI that complements and simplifies your work, not AI that replaces or steals it.
In collaboration with Opera’s Cloud Gaming team, we’re also in the process of investigating a system for hosting a new Marketplace. We’re still in the very early stages of this process, but we hope to have more details to share later in 2023!
If you’d like to learn more about our plans for 2023, check out the full GameMaker Update 2023 video that’s available on our YouTube channel.
We’d like to offer a huge thank you to our passionate community for helping inform the future plans of GameMaker. We can’t wait to share these updates with you in 2023 and beyond.
import { WrapModeMask } from '../../../cocos/core/geometry';
import { ExtrapolationMode, RealCurve, RealInterpolationMode, TangentWeightMode } from '../../../cocos/core/curves';
import { AnimationCurve, Keyframe } from '../../../cocos/core/geometry/curve';
describe('geometry.AnimationCurve', () => {
describe('Constructor', () => {
test('new AnimationCurve()', () => {
const curve = new AnimationCurve();
expect(curve.keyFrames).toStrictEqual([
createLegacyKeyframe({ time: 0.0, value: 1.0 }),
createLegacyKeyframe({ time: 1.0, value: 1.0 }),
] as Keyframe[]);
});
test('new AnimationCurve(keyframes)', () => {
const curve = new AnimationCurve([
createLegacyKeyframe({ time: 2.0, value: 8.0, inTangent: -3.3, outTangent: 1.75 }),
createLegacyKeyframe({ time: 3.0, value: 9.0, inTangent: 4.2, outTangent: -7.1 }),
]);
expect(curve.keyFrames).toStrictEqual([
createLegacyKeyframe({ time: 2.0, value: 8.0, inTangent: -3.3, outTangent: 1.75 }),
createLegacyKeyframe({ time: 3.0, value: 9.0, inTangent: 4.2, outTangent: -7.1 }),
] as Keyframe[]);
});
test('new AnimationCurve(realCurve)(INTERNAL)', () => {
const realCurve = new RealCurve();
realCurve.assignSorted([
// Non weighted tangent
[0.1, ({
interpolationMode: RealInterpolationMode.CUBIC,
value: 0.1,
leftTangent: 0.2,
rightTangent: 0.3,
})],
// Non cubic keyframe
[0.2, ({
interpolationMode: RealInterpolationMode.LINEAR,
value: 0.1,
})],
// Weighted tangent
[0.3, ({
interpolationMode: RealInterpolationMode.CUBIC,
value: 0.1,
leftTangent: 0.2,
rightTangent: 0.3,
tangentWeightMode: TangentWeightMode.RIGHT,
leftTangentWeight: 0.4,
rightTangentWeight: 0.5,
})],
]);
const geometryCurve = new AnimationCurve(realCurve);
expect(geometryCurve.keyFrames).toStrictEqual([
createLegacyKeyframe({ time: 0.1, value: 0.1, inTangent: 0.2, outTangent: 0.3 }),
createLegacyKeyframe({ time: 0.2, value: 0.1, inTangent: 0.0, outTangent: 0.0 }),
createLegacyKeyframe({ time: 0.3, value: 0.1, inTangent: 0.2, outTangent: 0.3 }),
] as Keyframe[]);
});
test.each([
{ extrapolationMode: ExtrapolationMode.LOOP, expected: WrapModeMask.Loop },
{ extrapolationMode: ExtrapolationMode.PING_PONG, expected: WrapModeMask.PingPong },
{ extrapolationMode: ExtrapolationMode.CLAMP, expected: WrapModeMask.Clamp },
{ extrapolationMode: ExtrapolationMode.LINEAR, expected: WrapModeMask.Clamp },
])(`new AnimationCurve(realCurve)(INTERNAL): conversion of extrapolation mode $extrapolationMode`, ({ extrapolationMode, expected }) => {
const realCurve = new RealCurve();
realCurve.preExtrapolation = extrapolationMode;
realCurve.postExtrapolation = extrapolationMode;
const geometryCurve = new AnimationCurve(realCurve);
expect(geometryCurve.preWrapMode).toStrictEqual(expected);
expect(geometryCurve.postWrapMode).toStrictEqual(expected);
});
});
test.each([
{ wrapMode: WrapModeMask.Clamp, extrapolationMode: ExtrapolationMode.CLAMP, },
{ wrapMode: WrapModeMask.Loop, extrapolationMode: ExtrapolationMode.LOOP, },
{ wrapMode: WrapModeMask.PingPong, extrapolationMode: ExtrapolationMode.PING_PONG, },
])(`Wrap mode $wrapMode`, ({ wrapMode, extrapolationMode }) => {
const curve = new AnimationCurve();
curve.preWrapMode = wrapMode;
expect(curve.preWrapMode).toStrictEqual(wrapMode);
expect(curve._internalCurve.preExtrapolation).toStrictEqual(extrapolationMode);
curve.postWrapMode = wrapMode;
expect(curve.postWrapMode).toStrictEqual(wrapMode);
expect(curve._internalCurve.postExtrapolation).toStrictEqual(extrapolationMode);
});
test(`Add key`, () => {
const curve = new AnimationCurve();
// Clear
curve.addKey(null);
expect(curve.keyFrames).toStrictEqual([]);
curve.addKey(createLegacyKeyframe({
time: 0.1,
value: 0.2,
inTangent: 0.3,
outTangent: 0.4,
}));
expect(curve.keyFrames).toStrictEqual([createLegacyKeyframe({
time: 0.1,
value: 0.2,
inTangent: 0.3,
outTangent: 0.4,
})]);
// Clear again
curve.addKey(null);
expect(curve.keyFrames).toStrictEqual([]);
});
test('Keyframes', () => {
const curve = new AnimationCurve();
curve.keyFrames = [createLegacyKeyframe({
time: 0.1,
value: 0.2,
inTangent: 0.3,
outTangent: 0.4,
}), createLegacyKeyframe({
time: 0.5,
value: 0.6,
inTangent: 0.7,
outTangent: 0.8,
})];
expect(curve.keyFrames).toStrictEqual([createLegacyKeyframe({
time: 0.1,
value: 0.2,
inTangent: 0.3,
outTangent: 0.4,
}), createLegacyKeyframe({
time: 0.5,
value: 0.6,
inTangent: 0.7,
outTangent: 0.8,
})]);
curve._internalCurve.clear();
expect(curve.keyFrames).toStrictEqual([]);
curve._internalCurve.assignSorted([
// Non weighted tangent
[0.1, ({
interpolationMode: RealInterpolationMode.CUBIC,
value: 0.1,
leftTangent: 0.2,
rightTangent: 0.3,
})],
// Non cubic keyframe
[0.2, ({
interpolationMode: RealInterpolationMode.LINEAR,
value: 0.1,
})],
// Weighted tangent
[0.3, ({
interpolationMode: RealInterpolationMode.CUBIC,
value: 0.1,
leftTangent: 0.2,
rightTangent: 0.3,
tangentWeightMode: TangentWeightMode.RIGHT,
leftTangentWeight: 0.4,
rightTangentWeight: 0.5,
})],
]);
expect(curve.keyFrames).toStrictEqual([
createLegacyKeyframe({ time: 0.1, value: 0.1, inTangent: 0.2, outTangent: 0.3 }),
createLegacyKeyframe({ time: 0.2, value: 0.1, inTangent: 0.0, outTangent: 0.0 }),
createLegacyKeyframe({ time: 0.3, value: 0.1, inTangent: 0.2, outTangent: 0.3 }),
] as Keyframe[]);
});
});
function createLegacyKeyframe ({
time,
value,
inTangent = 0.0,
outTangent = 0.0,
}: {
time: number,
value: number;
inTangent?: number;
outTangent?: number;
}) {
const keyFrame = new Keyframe();
keyFrame.time = time;
keyFrame.value = value;
keyFrame.inTangent = inTangent;
keyFrame.outTangent = outTangent;
return keyFrame;
}
WARNING: The following mod contains extreme amounts of blood and gore and is not recommended for players under 18 years of age.
Here's the latest version of Mr. Ifafudafi's CoD2SP Blood Mod, now Beta 7. I don't believe there's much explanation required for this, so just check out the list of updates below, see the screenshots, and give it a download if it's your thing. ;)
-----------------------------------------------------------
Bloodlust
Call of Duty 2 Single-Player Mod (Beta 7)
Copyright © Mr. Ifafudafi (http://ifafudaficod2.freephpnuke.org)
Official Readme-type-thing
-----------------------------------------------------------

/////////////// Contents! \\\\\\\\

-Contents
-Updates
-Installation
-Legal junk
-Contact info
-Credits

OFFICIAL DISCLAIMER: This mod has what some would consider mature material (blowing people's heads off and watching a bunch of blood and guts come out), and neither Mr. Ifafudafi nor any member of http://ifafudaficod2.freephpnuke.org can be held responsible for any mental or physical alterations, damages, or other negative effects that come as a result of the downloading, using, or witnessing of the content in this mod.

////////////// Updates! \\\\\\\

New in beta 7:
-Decapitated models! When you blow someone's head off, there won't be an empty hole in space anymore. You'll actually see the remnants of their neck. A huge, and I mean HUGE, thanks to WCP for letting me use his decapitated models; Merciless' didn't work right
-Some more weapon tweaks
-Fixed distortion effect
-Decapitating/exploding scripts changed to be damage-based. We don't want a Colt blowing someone's head off
-Some more blood effects, including a squirt or two for powerful attacks, and a good bit of chunks for body explosions
-Yet more general glitch fixes
-Some more blood tweaks and modifications
-Some comments added in the script files for those of you who like messing around with that kind of stuff

New in beta 6 updates & fixes:
-BIG FAT NOTICE: I've moved the mod to http://ifafudaficod2.freephpnuke.org, so make sure you go there instead.
-Small normal-weighted notice: Many of y'all have asked about the lack of a Shotgun, Grease Gun, or Sten. I've taken these out due to conflicts with the mod or just general glitches, so you probably won't see them for a while.
-Obligatory blood tweaks
-Replaced old blood decals with the ones from Dark Messiah of Might & Magic (definitely a huge improvement)
-Added a distortion effect to the muzzle flash of rifles (only shows in DX9, though)
-General glitch fixes
-General weapon tweaks

From beta 6:
-In-game deactivation. If you turn blood off in the Options menu, you won't see any more blood until you turn it back on. Handy for nosy parents
-Weapons tweaked slightly, most notably a decreased BAR/Bren zoom, and less sway and time to recenter sights
-Blood effects tweaked; barf is more visible, more animations actually play, body explosions actually create blood splatters, among many, many other tweaks
-Grenades do a small amount of damage if thrown on somebody
-Weapon tweaks and blood effects separated into two different IWDs
-Various glitches fixed

From beta 5:
-PRECACHING! This fixes Du Hoc Assault and other levels in which some effects wouldn't play or would screw up
-Weapon sway a bunch more
-Damage tables updated
-Melee damage drastically decreased, as I'm pretty sure you can't kill someone in two whacks... >_>
-On many missions, you'll randomly spawn with either a pistol or a submachine gun, instead of always one of those
-Grenade cooking
-Brain effects fixed so that they actually show up now

From beta 4:
-Location-based blood pooling fine-tuned
-Body explosions and head pops tweaked to be more realistic

From beta 3:
-Shotgun issues fixed, still need to aim for the most part though
-Full body explosion when killed by explosive weapons
-"Bloody Barf" fx when shot in the chest

From beta 2:
-Blood fx tweaked to be more intense
-Fixed some coding issues

From beta 1:
-Head popping
-Blood splatters
-Blood squirting from decapped heads. Thanks to Team Merciless from http://www.mercilessmod.com for these.

Hopeful updates in beta 8:
-Damaged body and head models
-Dead body shooting/exploding

////////////////// Installation! \\\\\\\\\

Alright, this has changed a slight bit to help y'all not die from Multiplayer. Stick the IfafudafiCallofDuty2SP *folder* inside your Call of Duty 2 directory. Not your Call of Duty 2/main directory, just your Call of Duty 2 one. Then, copy the shortcut that you use from the desktop into this folder, and add the following line inside the "Target" box under the "Shortcut" tab:

+set fs_game "IfafudafiCallofDuty2SP"

Hit OK, double-click on the shortcut, and there you go! If for some reason this doesn't work, just stick the IWDs inside your Call of Duty 2/main directory. The end!

///////////////// Legal junk! \\\\\\\\\

I've been having some issues with usage of the mod's content, so I'll make it a bit clearer. If you want to use anything in this mod that's not owned by Merciless or anyone else, ask me for permission first. If you want to use anything in this mod that is owned by Merciless or anyone else, ask them. All gore models and a few images are made/taken by Team Merciless from www.mercilessmod.com, and most of the scripts in the FX folder are created by them as well. Decals are taken from Dark Messiah of Might and Magic (© Ubisoft & Arkane Studios).

/////////////////// Contact info! \\\\\\\\\\

Simply contact me at phazonflakesAThotmailDOTcom. Of course, make sure to replace the AT and DOT with the proper symbols. Or, head to http://ifafudaficod2.freephpnuke.org and sign up for the forums to discuss the mod there.

//////////// Credits! \\\\\\

A big, fat, whoppin' thanks to WCP for letting me use his decap models. A very special thanks to Bloodlust of http://www.bloodlustmods.com for kicking off the mod and giving me a start on what it is today. Thanks to Team Merciless from http://www.mercilessmod.com for inspiration and the material to build upon it, CoDFiles for secondary hosting, all the members at http://ifafudaficod2.freephpnuke.org for their support, and thanks to you for checking out the mod.
Leopard Gecko Handling Guide
Handling your leopard gecko is pretty much a given; after all, it's ultimately the best part about having one.
Whether you just got your leopard gecko or are hoping to raise a hatchling to be accustomed to handling, there are a few things you should educate yourself on before trying to handle your leopard gecko.
Leopard gecko handling isn't much of an issue, since these guys are generally docile creatures. However, they are also easily stressed, and when that happens, their natural reaction is to drop their tail in defense.
If you don’t want your lizard to lose its tail out of fear, there are some things you want to do to ensure that they will only get more and more comfortable with you handling them.
When it comes to figuring out when the best time to start handling your leopard gecko is, you need to observe and consider their age.
If you bring home a baby leopard gecko, they will generally need a few weeks, maybe even up to 4 weeks, to get used to their new surroundings and you as well.
Feeding them daily will help them see you as their caregiver rather than a very large predator. Once they stop acting skittish and are hiding less, you can start putting your hand in their enclosure or slowly introduce tong feeding a few times so that they can get used to seeing your hand near them.
Don’t rush your leopard gecko and let them take their own time to get used to you, otherwise, you can stress them out and will have to start the whole process over again.
A stressed-out gecko will hide every time you come near or they might even make vocal noises.
Make sure you keep an eye out for signs of stress when you try to handle them. As long as they are eating and having normal bowel movements, you should be fine.
How to Correctly Handle a Leopard Gecko
When you know your gecko is healthy and has grown used to you, you can start attempting to handle your leopard gecko. In order to avoid them dropping their tail, start by slowly introducing your hand to them by letting it in the tank now and then.
It is recommended that you start by trying to gently introduce them to the idea of getting on your hand. Allow them to have full control of the situation while you remain calm and set your hand out in front of them like a step they can get onto.
You must never pick them up by the tail! This will definitely make them feel uncomfortable and might scare them enough to make them lose it.
If your gecko seems comfortable enough to get onto your hand in their cage, you can start trying to pick it up by its mid-body, which is the safest place to handle them.
You can then move on to taking them out of their cage: remove their hide from the cage with them in it and set it down in a large open space. Make sure there isn't anywhere they could run off and fall as you practice handling them comfortably.
Do not continue to try to handle them if they act stressed out. This is important as you want your pet to be comfortable around you.
Always give them breaks and take it steady. Be gentle and patient with the process.
Check out GoHerping’s YouTube video on handling your leopard gecko here:
When Not to Handle Your Gecko
If your leopard gecko is showing obvious signs of discomfort while you try to handle them, this is when you stop until they are more used to you. Give them another week of non-handling and being in their presence.
In younger geckos, you might hear them hiss or screech at you. They will act skittishly and try to remove themselves from you.
In most cases, they also might just stop eating completely or might show irregular bowel movements. They will also try to hide from you for longer periods of time than usual.
If you are not handling them or even attempting to and they are showing signs of stress, make sure you rule out any environmental, diet, or health issues that might be stressing them out before you do attempt to handle them at all.
How to Avoid Getting Bitten
As we mentioned many times before, be gentle. Go slow and take it step by step.
Leopard geckos are prone to getting stressed out and we want them to be comfortable at a young age. It will be a process, so do not rush your leopard gecko.
Happy and healthy leopard geckos will be calm and easy-going since that is their nature. They aren’t usually ones to bite, which is why they are great pets for beginner reptile-owners.
If your leopard gecko is showing signs of stress or is biting and you're not sure why, check out our article on how to tame your leopard gecko here:
Can I Handle My Leopard Gecko Often?
Once your cute little lizard friend is finally used to you and is allowing you to handle them, you might be tempted to have them in your hands all day, every day.
As cute as it would be, there is also such a thing as too much handling, which can also stress your leopard gecko out.
These creatures are timid and independent, making them calm beings that are easily stressed out.
We recommend that you limit handling to only every other day, giving them a break in between and only for about 20 minutes a day. They are still trying to get used to their environment and their new life with you.
With some time, it is possible to increase the amount of handling in order to bond with your leopard gecko, but it is still recommended to keep it under 30 minutes.
We understand that you love your leopard gecko and want to carry them with you everywhere, but these independent lizards need their space and don’t enjoy handling for too long.
Make sure that you are not stressing your leopard gecko out with too much handling, while also allowing the two of you to bond with the handling you do end up doing.
Now that you know how to safely start getting your leopard gecko accustomed to handling, time to put it into practice. Remember to take it day by day and don’t expect progress right away or instant gratification.
Educating yourself on the safest possible way to do it may save your relationship with your pet, and possibly their whole tail!
More Leopard Gecko Stuff
Care & Overviews
- Leopard Gecko Care Sheet
- Best foods for leopard geckos
- Best treats for leopard geckos
- Different types of leopard gecko morphs
- Crested gecko vs leopard gecko
- How to breed leopard geckos
- All about leopard gecko eyes
- How Much Do Leopard Geckos Cost?
The number of computers in use has increased rapidly over the last two decades, not only in schools and companies but also in homes, and this trend shows no sign of slowing down. This is because the internet has been a revelation for families, business firms and consumers, giving access to each other and to information in ways that were not possible before.
Computers are always improving and one of the most powerful new devices is the personal computer. Every year, new features and functions are added to these machines, making them more powerful. These additions have made it possible for people to manage their own computers, to play games on them, and to download and upload large amounts of data.
Besides playing games, there are several other main functions that computers perform which have made them so popular. These include word processing, emailing, spreadsheets, printing, and the like. All these activities can be done effectively from a single machine, and they make computers an indispensable part of our lives.
However, for all these functions to work optimally, the computers will need to be updated with software. Now, there are many different types of computers, each with a different set of features which will need different types of software.
You can buy specific software which is specifically designed for a certain purpose.
For example, there is software designed for the security of your computer, for gaming, for sharing documents, for the cleaning up of a hard drive, and for the technical support you will require from the company you buy the software from.
However, the problem is the sheer variety of software available for computers. Some programs may be sold separately and some may be available as one package. This means that you will have to decide what you want before buying a piece of software, to make sure that it suits your computer.
Multiple operating systems
It is easy to keep track of your computers with the help of multiple programs. In order to get these programs, you have to use the internet and search for companies that will sell you the programs you need.
When you buy software, you will get a license to use the software. Each program will have a number of sub-programs, which will control specific parts of your computer.
There are also specific tasks that your software will perform for you. You may for example use it to stop your computer from crashing or you can use it to shut down the programs that are causing problems.
You also get access to user-interface options such as keyboard layouts, language translation, and different email applications. Some programs will let you play games and listen to music, while others will only run on certain types of operating systems.
Basically, there are a lot of different uses for your computers and depending on what you intend to do with them, you will need a specific set of software to help you. To find a good set of software that suits your needs, you will have to do some research.
import re
class DottedDict:
"""
String subscripts are interpreted as keys to be used at successive
layers of subscripting through the dictionaries and lists of a
JSON-like record.
Given a DottedDict with the following structure:
dd = DottedDict({"first": {"second": [{}, {}, {"third": "bingo"}]}})
the value returned by the expression
dd['first.second[2].third']
should be the string 'bingo'.
"""
def __init__(self, d):
self._d = d
    def _apply_key(self, o, k):
        try:
            return o[k]
        except TypeError:
            # e.g. a string field name applied to a list
            raise KeyError("Non-integer list subscript {!r}".format(k))
        except IndexError:
            raise KeyError(
                "Invalid list index {} (path parsed up to character {})".format(k, self.pos)
            )
        except KeyError:
            raise KeyError(
                "Unrecognised field name {!r} (path parsed up to character {})".format(
                    k, self.pos
                )
            )
def __getitem__(self, key):
"""
Returns the result of walking into the nested
data structure using key as path specifier.
"""
o = self._d
for k in self._parse_path_key_spec(key):
o = self._apply_key(o, k)
return o
def __setitem__(self, key, value):
"""
        Set the nested element located at the specified path key.
        Currently handles only dicts.
        Does not recursively create missing structures.
"""
v = self._d
fs = self._parse_path_key_spec(key)
k = next(fs)
for nk in fs:
v = v[k]
k = nk
v[k] = value
def __delitem__(self, key):
"""
Delete the nested element located at the path key.
"""
v = self._d
fs = self._parse_path_key_spec(key)
k = next(fs)
for nk in fs:
v = v[k]
k = nk
        del v[k]
def _parse_path_key_spec(self, key):
parser = KeySpecParser()
for position, fragment in parser.parse(key):
self.pos = position
yield fragment
class KeySpecParser:
IDENTIFIER_PATTERN = r"(?P<name>[_A-Za-z][_A-Za-z0-9]*)"
    SUBSCRIPT_PATTERN = r"\[(?P<index>-?\d+)\]"
HEAD_PATTERN = re.compile(rf"{IDENTIFIER_PATTERN}|{SUBSCRIPT_PATTERN}")
TAIL_PATTERN = re.compile(rf"\.{IDENTIFIER_PATTERN}|{SUBSCRIPT_PATTERN}")
def parse(self, key):
self._initialise_parser()
end = len(key)
while self.current_position < end:
token = self._next_token_match(key)
yield self.current_position, token
def _initialise_parser(self):
self.current_position = 0
self.current_pattern = KeySpecParser.HEAD_PATTERN
def _next_token_match(self, key):
pattern_match = self.current_pattern.match(key, self.current_position)
self._raise_error_if_syntax_error(key, pattern_match)
self.current_position = pattern_match.end()
self.current_pattern = KeySpecParser.TAIL_PATTERN
return self._convert_to_token(pattern_match)
def _convert_to_token(self, pattern_match):
string, integer = pattern_match.groups()
if string:
return string
return int(integer)
def _raise_error_if_syntax_error(self, key, match):
if match is None:
raise KeyError(
"Cannot find name or list subscript at start of {!r}".format(
key[self.current_position:]
)
)
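The class above can be exercised end-to-end with the docstring's own record. The snippet below is a self-contained sketch of the core path-walking idea; `FRAGMENT` and `resolve` are illustrative names, not part of the class, and unlike `KeySpecParser` this sketch does no syntax checking or error reporting:

```python
import re

# Split a spec like "first.second[2].third" into name and index
# fragments, then subscript successively into the nested structure.
FRAGMENT = re.compile(r"\.?([_A-Za-z][_A-Za-z0-9]*)|\[(-?\d+)\]")

def resolve(obj, path):
    """Walk obj using a dotted-path key spec."""
    for match in FRAGMENT.finditer(path):
        name, index = match.groups()
        # A matched name indexes a dict; a matched index subscripts a list.
        obj = obj[name] if name is not None else obj[int(index)]
    return obj

record = {"first": {"second": [{}, {}, {"third": "bingo"}]}}
print(resolve(record, "first.second[2].third"))  # -> bingo
```

The class above provides the same lookup via `DottedDict(record)['first.second[2].third']`, plus error reporting and item assignment/deletion through the same path syntax.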
One of the nicer laptops I've owned, with a big screen. Solid construction; owned since July of 2008. Newer Intel graphics driver (04-09) displays terrific quality, too. Dual screen w/VGA adapter is fine, too.
The DVD drive is noisy and a little clunky, and I wish I had a gigabit Ethernet port. My wireless connection is faster than the installed 100Mb port, so I use that. And, as usual, an obnoxious bundle of useless software comes pre-installed. Just have to uninstall it, clean up the registry, and you're good to go.
Purchased for 11 y.o. daughter and seems to be doing the job very well. Integrated camera works well and processor is certainly fast enough for what she needs it for. Value for dollar is very good and would purchase again if needed.
I work from home and needed a portable computer to accomplish multiple business tasks; this Compaq was exactly what I was looking for. It's lightweight, loads applications fast, and the screen illuminates nicely.
The only things I really did not care for are the built-in web-cam and the right "shift" key, which is very small and took getting used to.
I've had this laptop for a little over a year and it's amazing. It's incredibly beautiful, sturdy (I should know - it's been dropped), and it has a great screen. Watching movies on it is like being in a theater, although you have to use external speakers or headphones for the sound quality to be any good. The keyboard is excellent, quietish and fun to use. I haven't had any issues with Vista. Then again, I don't do any programming or gaming or anything. I haven't even used a mouse at all - the touchpad is pretty good. In fact the only bad things I can think of is that sometimes (maybe once/twice a month) the screen and mouse freeze for a few minutes. I don't mind it in general, but it freaks out anybody else using my computer. I'd say this is a fabulous computer for students, or for people that don't need super-speedy stuff (not that it's slow). In all, I was very pleased with my purchase, especially since I got it refurbished for half the price.
Could someone who has purchased the Compaq Presario CQ70-120US respond to a question I have: is this computer exactly as shown in the illustration on this page? Specifically, does it have a black keyboard and surrounding area and a chrome strip with a chrome touchpad in front? I ask because I purchased and returned an HP G70-250US (the HP version of this computer) and the top surface, as well as the keyboard, were all chrome. Before I order I would like to somehow verify what is in the box. You can leave a comment as a review on this page. I would greatly appreciate it. Thank you.
// =================================================================
//
// Copyright (C) 2020 Spatial Current, Inc. - All Rights Reserved
// Released as open source under the MIT License. See LICENSE file.
//
// =================================================================
package grw
import (
"fmt"
"io"
pkgalg "github.com/spatialcurrent/go-reader-writer/pkg/alg"
"github.com/spatialcurrent/go-reader-writer/pkg/bufio"
"github.com/spatialcurrent/go-reader-writer/pkg/compress/flate"
"github.com/spatialcurrent/go-reader-writer/pkg/compress/gzip"
"github.com/spatialcurrent/go-reader-writer/pkg/compress/snappy"
"github.com/spatialcurrent/go-reader-writer/pkg/compress/zlib"
)
// WrapWriter wraps the given writer with a buffer and the given compression.
// alg is the algorithm. dict is the initial dictionary (if the algorithm uses one).
//
// - https://pkg.go.dev/archive/zip
// - https://pkg.go.dev/compress/bzip2
// - https://pkg.go.dev/compress/flate
// - https://pkg.go.dev/compress/gzip
// - https://pkg.go.dev/compress/zlib
// - https://pkg.go.dev/github.com/golang/snappy
// - https://pkg.go.dev/github.com/spatialcurrent/go-reader-writer/pkg/bufio
//
func WrapWriter(w io.WriteCloser, alg string, dict []byte, bufferSize int) (io.WriteCloser, error) {
if bufferSize < 0 {
return nil, fmt.Errorf("error wrapping writer: invalid buffer size of %d", bufferSize)
}
switch alg {
case pkgalg.AlgorithmBzip2:
return nil, &ErrWriterNotImplemented{Algorithm: alg}
case pkgalg.AlgorithmFlate:
if len(dict) > 0 {
fw, err := flate.NewWriterDict(bufio.NewWriter(w), flate.DefaultCompression, dict)
if err != nil {
return nil, fmt.Errorf("error wrapping writer using compression %q with dictionary %q: %w", alg, string(dict), err)
}
return fw, nil
}
fw, err := flate.NewWriter(bufio.NewWriter(w))
if err != nil {
return nil, fmt.Errorf("error wrapping writer using compression %q: %w", alg, err)
}
return fw, nil
case pkgalg.AlgorithmGzip:
return gzip.NewWriter(bufio.NewWriter(w)), nil
case pkgalg.AlgorithmSnappy:
return snappy.NewBufferedWriter(bufio.NewWriter(w)), nil
case pkgalg.AlgorithmZip:
return nil, &ErrWriterNotImplemented{Algorithm: alg}
case pkgalg.AlgorithmZlib:
if len(dict) > 0 {
zw, err := zlib.NewWriterDict(bufio.NewWriter(w), dict)
if err != nil {
return nil, fmt.Errorf("error wrapping writer using compression %q with dictionary %q: %w", alg, string(dict), err)
}
return zw, nil
}
return zlib.NewWriter(bufio.NewWriter(w)), nil
case pkgalg.AlgorithmNone:
if bufferSize > 0 {
return bufio.NewWriter(w), nil
}
return w, nil
}
return nil, &pkgalg.ErrUnknownAlgorithm{Algorithm: alg}
}
Clean hard drive
Choosing the right hard drive
cheep hard drives
Clean HD (reformat) using XP
clean install leaving gig on my hard disk
Clean out hard drive?
Cleaning hard drive D.
cleaning of hard drive
Cleaning out the harddrive.
Cleaning Hard Drive the proper way
Cleaning off the hard drive
cleaning up after partition merge error
Cleaning hard drive thoroughly
Clearing the Harddrive
Clicking noise > hard drive dying? Need backup advice etc.
clear hard drive
Clicking external hard drive
click noise in hard disk WD
Clearing an old internal HD
Clearing and formatting a used primary drive
clicking sound from hard disk- does not boot properly
clean install with sata drives
clean my hard drive
click ahrd drive goes to search
Clearing Space on Primary Partition
Clicking Hard Drive
Clicking sound in hard drive
clone a harddrive
Clone old HDD settings and files to new HDD to boot from new
clone a harddisk
Clearing out my computer/hard drive
Cloning a drive
cloning a computer
Cloning a Dell Computer
Cloning Hard Drive
Clone hard drive with ME
Cloning Hard Drive issue
Cloned hard drive
clone/image to external hardrive
Clone HD in Ghost
Cloned Hard Drive - Rebooted with HD plugged in. :(
cloning a hard drive
Cloned Hard Drive Bad Blocks
Clone/Snapshot Vista partition to new HDD
Clone Laptop Internal to External HD with Ghost
Cloning vs Imaging=difference?
Cloning my main drive.
Cloning a HDD
Cloning a partitioned HDD
Clone Machine Hard Disk
cloning hard drives
Cloning or copying HDD
clonning the PC
Clone old drive to brand new drive
Cloned/Imaged PC Issues
Clone HD with Ghost on it
Cloning the second PC drive (D:\)
Clone or backup on my hardware backup program
Cloning Laptop Hard Drive
Clone from dying HD to new HD
Clone Hard Drive -- first time doing this
Clearing hard drive
cloning SATA hard drive with Windows XP to USB hard drive
Cloning Slow - can I stop it and start over?
Clone a laptop hard drive from an external case to an internal hard drive
Clone new HDD steps
Clicking noise which i suspect is harddrive plese help
Clicking Noise? Hard Drive or Fan?
Cloning of Full Hard Drive to include Computer Id
Color problems after replacing hard drive
Cold Hard Drive Problems
Comment on my HDD procedure please
Comp reboots after installing second hard drive
Compact Laptop Hard Drive Failure
compaq 5108 new hard drive problem
Compaq and large HDD
COMP Cant find Hard drive!
Compaq N600C laptop hard drive file cale for file recovery
Compaq Laptop New Hard Drive
Compaq hard drives
Compaq Presario 17XL360 - Need to know largest laptop HDD I can buy?
Compaq Presario HD Installation
Compaq Presario HDD replacement ?s
Compaq laptop not recognizing hard drive
compare hard drives
compaq hard drive replacement
Compaq with lost hard drive
Compaq w/ 10 gig hard drive
Compatible Hard Disk
Compaq hard drive
Compaq Presario Can't Find Hard Drive
Compaq Presario Hard Drive replacement
compaq presario 800 harddisk upgrade
Complete Hardware erase (Help
Compatible external harddrive
Complete HDD Format
Compatible Notebook Hard drive
completely format hardrive "XP"
Completed hard drive format
compatable hard drive question
Completely dead harddrive (like a rock) WD5000AAKS-00TMA00
completely separate two harddrives on one computer
completely erasing a hard drive
Completely reformat HD?
Compressing Hard Drive
Compatibility of dell inspiron 600m with WD Elements 250GB [external hard drive]
Computer asks me to format slave hard drive
Complete format of XP.now SLOW
Computer can't read from hard drive
Computer can't recognize iomega external HD
Computer can't see second hard drive.
complete wipe of SATA laptop drive
Computer continually accessing hard drive?
Computer always running
Computer boots to second drive
Computer Constantly Running for No Reason
Computer continuously running
Computer Corrupting Harddrives?
Computer completely dead. Data recovery possible?
Computer doesnt detect my Slave Hard Drive
Computer does not read larger hard drive properly
Computer does not recognize IDE devices during bootup
Computer does not recognize hard drive
Computer doesn't reconize Hard Disk Drives
Computer Doesn't Recognize Extra Hard Drives
Computer doesn't recognize hard drive
Computer Freezes Harddrive Problems?
Computer freezes when writing into the secondary hard drive
Computer getting slower by the second
Computer Drive Is Always Blinking
Computer has died and won't detect hard drive
Computer Hard Drive down to minimal space
Computer Game to Hard Drive
Computer isnt turning on when I attach a second hd
Computer Incredibly slow. Hard drive failing?
computer locks up until it accesses my external HD (there is no reason it should)
Computer killing hard drives.
Computer makes clicking noises and screen is blue
Computer Not Recognizing Hard Drives Capacity
Computer not recognizing all space on hard drive
Computer memory andexternal hard drive help?
Computer makes a 'click' and then everything freezes. please help!
Computer No Longer Sees My Second Hard Drive? HELP!
Computer not recognizing secondary hard drive
Computer not detecting hard drive suddenly? RAID/Array error?s?
Computer Problem (windows wont install even after format)
Computer locks up on defrag and chkdsk; suspected hard drive failure
© Copyright 2017 newpast.net. All rights reserved.
import sys
import xml.etree.ElementTree as ET
from datetime import datetime
import math
def into_float(value):
try:
return float(value)
except ValueError:
sys.exit("Unexpected input format, exiting...")
def into_int(value):
try:
return int(value)
except ValueError:
sys.exit("Unexpected input format, exiting...")
def get_date_format(value):
date = value.split('T')[0].split('-')
time = value.split('T')[1].split('Z')[0].split(':')
head, _, _ = time[2].partition('.')
time[2] = head
return datetime(into_int(date[0]), into_int(date[1]), into_int(date[2]), into_int(time[0]), into_int(time[1]), into_int(time[2]))
def seconds_into_time(seconds):
seconds = seconds % (24 * 3600)
hour = seconds // 3600
seconds %= 3600
minutes = seconds // 60
seconds %= 60
return "{:.0f}:{:02.0f}:{:02.0f}".format(hour, minutes, seconds)
def meters_into_kilometers(meters):
return math.floor((meters / 1000.0) * 10 ** 3) / 10 ** 3
class Lap:
def __init__(self, id, time, distance = 1.0):
self.id = id
self.time = time
self.distance = distance
class Activity:
def __init__(self, activity_type='Running'):
self.id = ''
self.activity_type = activity_type
self.total_time_seconds = 0
self.total_distance_meters = 0
self.total_completed_laps = 0
self.track = None
self.laps = []
self.device_name = ''
self.device_version = ''
def set_total_laps(self):
self.total_completed_laps = int(self.total_distance_meters // 1000)
def render_info(self):
print(self.id)
print(self.device_name)
print(self.device_version)
print(self.total_time_seconds)
print(self.total_distance_meters)
print(self.total_completed_laps)
def get_lap_time(self, start_datetime, finish_datetime):
return (finish_datetime - start_datetime)
def get_rest_distance(self, distance_meters):
return math.floor(((self.total_distance_meters / 1000.0) - distance_meters) * 10 ** 3) / 10 ** 3
def get_rest_distance_approx(self, prev_laps_count):
return math.floor(((self.total_distance_meters / 1000.0) - prev_laps_count) * 10 ** 3) / 10 ** 3
def set_laps(self):
lap_number = 1
if self.total_distance_meters < 1000:
self.laps.append(Lap("{}.".format(lap_number), seconds_into_time(self.total_time_seconds), meters_into_kilometers(self.total_distance_meters)))
else:
if self.track == None:
sys.exit("No valid input data (missing track in .tcx file), exiting...")
else:
                start_datetime = get_date_format(self.id)
distance_meters = 0.0
#finish_distance = distance_meters
trackpoint_number = 1
trackpoint_count = len(self.track)
for trackpoint in self.track:
distance_meters = into_float(trackpoint[1].text)
if distance_meters // 1000 >= lap_number:
finish_datetime = get_date_format(trackpoint[0].text)
#finish_distance = distance_meters / 1000.0
self.laps.append(Lap("{}.".format(lap_number), self.get_lap_time(start_datetime, finish_datetime)))
start_datetime = finish_datetime
lap_number = lap_number + 1
elif trackpoint_number == trackpoint_count:
finish_datetime = get_date_format(trackpoint[0].text)
#self.laps.append(Lap("{}.".format(lap_number), self.get_lap_time(start_datetime, finish_datetime), self.get_rest_distance(finish_distance)))
self.laps.append(Lap("{}.".format(lap_number), self.get_lap_time(start_datetime, finish_datetime), self.get_rest_distance_approx(lap_number - 1)))
trackpoint_number = trackpoint_number + 1
def render_into_table(self):
print('\n')
laps = []
times = []
distances = []
laps.append("{} v{} ({})".format(self.device_name, self.device_version, self.activity_type))
times.append("{} (total)".format(seconds_into_time(self.total_time_seconds)))
distances.append("{:.03f} (total)".format(meters_into_kilometers(self.total_distance_meters)))
for lap in self.laps:
laps.append(lap.id)
times.append(lap.time)
distances.append("{:.03f}".format(lap.distance))
titles = ['Laps', 'Times [h:m:s]', 'Distances [km]']
data = [titles] + list(zip(laps, times, distances))
for i, d in enumerate(data):
line = '|'.join(str(x).ljust(30) for x in d)
print(line)
if i == 0:
print('-' * len(line))
if __name__ == '__main__':
file_path = ""
if len(sys.argv) > 1:
file_path = sys.argv[1]
else:
file_path = input("tcx file path: ")
tree = ET.parse(file_path)
root = tree.getroot()
activity = Activity(root[0][0].attrib['Sport'])
activity.id = root[0][0][0].text
activity.device_name = root[0][0][2][0].text
activity.device_version = "{}.{}.{}.{}".format(root[0][0][2][3][0].text, root[0][0][2][3][1].text, root[0][0][2][3][2].text,root[0][0][2][3][3].text)
activity.total_time_seconds = into_float(root[0][0][1][0].text)
activity.total_distance_meters = into_float(root[0][0][1][1].text)
activity.set_total_laps()
activity.track = root[0][0][1][6]
activity.set_laps()
activity.render_into_table()
input("\n\nPress enter to exit\n")
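For quick sanity checks, the two unit helpers can be run on their own; the block below restates `seconds_into_time` and `meters_into_kilometers` verbatim from the script above, so their formatting and floor-based rounding are easy to verify in isolation with sample values:

```python
import math

def seconds_into_time(seconds):
    seconds = seconds % (24 * 3600)   # wrap at one day
    hour = seconds // 3600
    seconds %= 3600
    minutes = seconds // 60
    seconds %= 60
    return "{:.0f}:{:02.0f}:{:02.0f}".format(hour, minutes, seconds)

def meters_into_kilometers(meters):
    # truncate (not round) to three decimal places
    return math.floor((meters / 1000.0) * 10 ** 3) / 10 ** 3

print(seconds_into_time(3725))          # -> 1:02:05
print(meters_into_kilometers(10549.9))  # -> 10.549
```

Note the kilometre helper floors rather than rounds, so a lap of 10549.9 m reports as 10.549 km, never 10.550.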
R v Ratt, 2020 SKCA 19: Whatcha gonna do when they come for you
R v Ratt, 2020 SKCA 19 (CanLII)
Everyone in Canada has “the right not to be arbitrarily detained” under section 9 of the Charter or, according to the Court of Appeal for Saskatchewan in the recent decision of R v Ratt, 2020 SKCA 19, everyone in Canada has “the right not to be arbitrarily detained” but only if that individual first submits or acquiesces to the (possibly arbitrary) detention.
The broad conclusion in Ratt, which seems to find support in the jurisprudence (see R v Atkins, 2013 ONCA 586 at ¶10), should encourage practitioners to seriously consider abandoning altogether seeking relief under the Charter in those cases - such as assault police and obstruct police - where the Crown must prove beyond a reasonable doubt that the police officer was acting “in the execution of his[/her] duty”. As noted by Justice Code in Zargar, this is often “tactically wise” given that the higher burden will then fall on the Crown.
And while the facts in Ratt are decidedly not ideal for arguing otherwise - Mr. Ratt “was already running away from the scene” when the police got out of their vehicle and issued their commands to “stop” and that Mr. Ratt was “under arrest” (italics added, see ¶37) - the bare, descriptive assertion in Ratt that “an attempted detention is not a detention for the purposes of s. 9” (see ¶38) is overly broad and runs counter to a generous and purposive approach to interpreting the Charter - including one that seeks to prevent breaches before they occur (see R v Côté, 2011 SCC 46 at ¶84, in the context of s. 8 jurisprudence).
Without much analysis, the Court of Appeal for Ontario in R v Dunkley implicitly embraces the approach which I advocate here: that there should be no requirement for submission or acquiescence, even if “extremely brief” (see Ratt, at ¶36), in order for a “detention” under s. 9 to coalesce. In that case Mr. Dunkley had exited a gas station kiosk when the police approached him (for a particularized criminal investigation) and asked for his identification; in response, Mr. Dunkley “backed away and then ran from the scene” (see ¶10). On appeal, Mr. Dunkley apparently argued - and quite curiously so - that he was not detained (see ¶25), a claim which Hourigan J.A. briefly rejected at ¶28:
…I see no error in the trial judge’s conclusion that the appellant was detained when the officers approached him outside the gas station. The officers confronted the appellant for the purposes of their investigation and identified themselves. In reaction to that confrontation, the appellant chose to flee. Although the detention was only momentary, it was a detention nonetheless.
It seems to me that in cases where the police are clearly attempting to detain or arrest an individual - such as by shouting “stop, you’re under arrest!” (as distinguished from what occurred in Nesbeth, where the police language and conduct were not at all suggestive of an intention to detain) - it is simply wrong to deny the protection of s. 9 of the Charter merely because the individual immediately flees (to be sure, how do we really measure “momentary” in Dunkley?). After all, the fact of flight “may well be some evidence that [the individual] believed that he had no choice but to comply, and instead of complying, decided to escape” (see Atkins, at ¶10). Indeed, other than submission or acquiescence, what could be better evidence - than immediately running away - that the person did not feel that they had the choice to just walk away (see R v Grant, 2009 SCC 32 at ¶30, 41-42, R v Le, 2019 SCC 34 at ¶25-26 & 72).
The purported requirement of submission or acquiescence prior to flight - even if “extremely brief” - seems to unduly constrain the “realistic appraisal of the entire interaction” demanded by Grant. More importantly, such a requirement tends to denude the purpose underlying section 9: “to protect individual liberty from unjustified state interference” (and here, I would argue, before they occur), including by guarding “against incursions…without adequate justification” (see Grant at ¶ 20).
Individuals ought to have the right not to be arbitrarily detained without first submitting to that arbitrary detention. They should not have to acquiesce to an imminent violation of their rights in order for that right to be later vindicated. That said, I will not be advising my clients to flee from the police, not least because it is inherently dangerous, it is illegal in the face of a valid demand, it might supply further grounds and it could be used as after-the-fact conduct. But if they happen to run away from an impending (possibly arbitrary) detention, be ready to argue section 9 or, if applicable, Zargar.
|
OPCFW_CODE
|
Login incorrect after installing nfs-utils
I am attempting to install nfs-utils (and thus all its dependencies) in a stateless RHEL6.5 KVM VM. The VM is configured to have a read-only root via the /etc/sysconfig/readonly-root file, Linux magic that I didn't implement, and the "Readonly" option in the VM settings GUI. The installation is done using virt-customize -a image.img --run install_script.sh. The script uses a here-document to build the .repo file (which goes to the CentOS vault for 6.5), then I use yum install -y nfs-utils to do the actual installation. I've taken this approach because it appears easier, cleaner, and less error-prone than getting the VM to boot in read/write mode, installing, cleaning up, then shutting down. Also, I can't figure out how to get it to boot into read/write.
The output of the virt-customize showed that the packages were installed successfully. The only failure was the removal of the .repo due to a typo.
After this installation, I booted up the VM and attempted to log in as usual. My attempts are now rejected with "Login incorrect".
I checked with virt-cat to make sure that the login shell for the user was set correctly, and the encrypted password in /etc/shadow looks the same as the original.
My original approach was to create an ISO image containing all the dependencies and nfs-utils, and attach that to the VM and have a script handle the installation. The same issue occurred then. I narrowed the issue's source down to the installation of one of the NFS packages: either nfs-utils or nfs-utils-lib (One of the two, I can't remember), and the rpcbind package. When one of these packages is installed, and then the VM is booted, login breaks. I am guessing that this is the same issue occurring now.
The image I'm using for the VM is cp-ed from the original VM's image as this is a test to figure out how to do the installation correctly.
Yes, I am entering the password correctly. I've tried to change the user's password (via libguestfs tools), but I still can't log in, so I'm not sure if the change failed (the return code for the commands indicates success) or the same problem is in effect.
Question:
Can the login issue be debugged and fixed? If so, how?
How do I do the installation correctly so that login doesn't get borked? Clearly this way doesn't work.
The core issue here was SELinux. There are multiple ways to install software into a VM, but they boil down to the VM being booted, or not being booted. The latter is what screws everything up, and it's how I did the install in the post.
As best as I can figure, when the VM is shut down and software is installed (e.g. via the command in the question), SELinux sees that something changed but doesn't know what, so when the VM boots after the install it locks everything down. Allowing SELinux to relabel fixes the issue, but broke a lot of other stuff for me. There is, effectively, no way to script an installation while the system is offline without probably running into this. Note that this issue could be specific to RHEL6.5 VMs and their version of SELinux (we are version-locked to RHEL6.5).
The proper way of doing an install for a VM like this is to boot it into read/write mode and carry out the installation either via mounting an ISO with the required RPMs, or using a package manager. While booted, SELinux will be aware of the installation, and everything will be fine and dandy.
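That said, if you do have to stay with the offline approach, virt-customize's own --selinux-relabel flag tells libguestfs to relabel the guest filesystem (or schedule a relabel on first boot) as part of the customization, which avoids the lockout described above. The sketch below is mine, not from the answer; it only assembles the command line, using the image and script names from the question:

```python
import shlex

def virt_customize_cmd(image, script, selinux_relabel=True):
    """Assemble an argv list for an offline install that won't trip SELinux.

    --selinux-relabel asks virt-customize to relabel the filesystem,
    avoiding the "Login incorrect" lockout caused by files written
    without labels while the VM is down.
    """
    cmd = ["virt-customize", "-a", image, "--run", script]
    if selinux_relabel:
        cmd.append("--selinux-relabel")
    return cmd

print(" ".join(shlex.quote(c) for c in virt_customize_cmd("image.img", "install_script.sh")))
# → virt-customize -a image.img --run install_script.sh --selinux-relabel
```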
|
STACK_EXCHANGE
|
How to Know if the CPU is Compatible With the Motherboard?
While upgrading your PC, it is essential to make sure that the parts you are putting in are compatible. One of the most critical parts to check is the CPU because, if it is not compatible with the motherboard, your PC will not work.
This article will show you how to check whether a CPU is compatible with the motherboard.
Step 1. Turn on the PC and press the "F2" key to enter the BIOS.
Step 2. Type "Intel" into the search box and press the Enter key to show all Intel CPU-related options.
Step 3. Go to the "CPU" tab and click on the CPU model you want to check. The number next to it is the CPU model number you are looking up.
Step 4. Click on the "Compare" box, and then click on the CPU model you want to compare with. The next screen will show a list of CPU models and their compatible versions.
Step 5. Match the CPU model number against its compatible version to see whether it is compatible or not.
Check the CPU socket type:
The essential thing to remember is the socket. That is the physical requirement the motherboard imposes on which CPUs will fit. This constraint comes up mostly with Intel CPUs, as sockets differ between generations and are not interchangeable across chipsets.
Common socket types include Socket 478, Socket A, and Socket 775 (also known as LGA 775). Whether a CPU fits the motherboard (the socket type) affects its capability and performance, since the performance and capacity of a CPU depend heavily on the motherboard and its socket type. Consequently, you should be very careful while selecting a motherboard for your PC.
Check for a compatible CPU:
Hardware compatibility is fundamental when buying a motherboard. It is wise to check whether the CPU you want to buy fits into the motherboard you intend to purchase. For example, if you buy a Socket A motherboard, you need an AMD Athlon-class processor; for an Intel Pentium 4 or Core 2 Duo processor, you need a Socket 775 (LGA 775) motherboard.
If the CPU you buy does not fit into the motherboard, you will be up the creek without a paddle. It will not matter what new features the motherboard has, because the CPU itself is incompatible with it. The CPU you need depends on the motherboard and the system you are going to buy, so make sure to check the motherboard's compatibility with the CPU you are planning to buy.
Check the Processor Compatibility Directly on the Motherboard Manufacturer's Website
Motherboard manufacturers maintain their own websites, where you can find CPU support lists detailing which processors are compatible with the board you want to buy.
|
OPCFW_CODE
|
Hi folks, in this fourth post of the new Mule Agent blog post series I will introduce the Mule Agent architecture and the main advantages it provides. If you missed the previous posts in this series, check them out below:
- Introducing the new Mule agent »
- Mule Enterprise Management and the new Mule agent »
- How to: New Mule Agent »
During its development, we discussed the best ways to make the agent extensible and configurable, allowing users to customize it depending on their main use cases and to adapt it to their current development cycle and practices. So come in and take a look at the architecture we defined and implemented for those purposes.
The Mule Agent is a plugin that exposes an API through REST and WebSockets. It is highly configurable and extensible, providing access to the main Mule ESB management and monitoring capabilities. An overview of the architecture is shown in the diagram below:
External Handlers: Handle any external message and call the corresponding services to perform the requested operation.
Agent Services: Include the logic to perform the operation, injecting the corresponding Mule services to interact with the ESB management and monitoring API. These services are the only components that call the ESB API directly.
Internal Handlers: Injected into the Agent Services, they give support to call external services (e.g. Monitoring Systems, Logging platforms) and send messages produced by Agent Services (e.g. Mule Event Tracking notifications, JMX metrics).
Example: How Deployment Works
So now that we have a better understanding of the agent architecture, we will take a common feature - deploying applications - to illustrate how each component interacts in the process.
The first step to deploy an application is calling the API (we will use REST) sending the application information. The request should look like this:
PUT http://localhost:9999/mule/applications/testingapp HTTP/1.1
This request will be handled by the REST API at the Transport Layer which will dispatch it to the Deployment External Handler.
Once the external handler gets the message, it decides which services should be called to perform the operation - in this case the Mule Agent Application Service, which receives the request and calls the Mule ESB Deployment Service to perform the operation.
Here is where the internal handlers come into play. The Mule Agent Application Service injects every handler able to manage application deployment notifications (this means that you can provide your own handler implementation just by adding your libraries to the classpath; in future posts we will dig into how to extend the agent). After the service sends the deployment request, it processes every notification from the Mule ESB Deployment Service and calls all the injected internal handlers (you can have more than one per message type) to dispatch the notifications.
Now the process generates the message back to the client that called the API. The response originates in the Mule Agent Service and is enhanced by the Deployment External Handler.
You should get a response like:
{ "status": "Deployment attempt started" }
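For illustration, the same deployment request can be built from a small script. This is a sketch of my own against the endpoint shown above; it only constructs the request, since actually sending it requires a running agent (and, presumably, the application zip as the request body, which the post does not detail):

```python
import urllib.request

def build_deploy_request(app_name, host="localhost", port=9999):
    """Build (but do not send) the PUT request the agent's REST API expects."""
    url = "http://{}:{}/mule/applications/{}".format(host, port, app_name)
    # method="PUT" overrides urllib's default GET/POST behaviour
    return urllib.request.Request(url, method="PUT")

req = build_deploy_request("testingapp")
print(req.get_method(), req.full_url)
# → PUT http://localhost:9999/mule/applications/testingapp
```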
You can find more about the new Mule Agent at the MuleSoft documentation portal (http://www.mulesoft.org/documentation/display/current/The+Mule+Agent). Please stay tuned for new Mule Agent posts and share your comments or questions - we will be more than happy to hear from you!
See you soon!
|
OPCFW_CODE
|
Binelli G., Montaigne W., Sabatier Daniel, Scotti-Saintagne C., Scotti I. (2020). Discrepancies between genetic and ecological divergence patterns suggest a complex biogeographic history in a Neotropical genus. Ecology and Evolution, 10 (11), 4726-4738. ISSN 2045-7758.
Document title
Discrepancies between genetic and ecological divergence patterns suggest a complex biogeographic history in a Neotropical genus
Authors
Binelli G., Montaigne W., Sabatier Daniel, Scotti-Saintagne C., Scotti I.
Publication
Ecology and Evolution, 2020, 10 (11), 4726-4738. ISSN 2045-7758
Phylogenetic patterns and the underlying speciation processes can be deduced from morphological, functional, and ecological patterns of species similarity and divergence. In some cases, though, species retain multiple similarities and remain almost indistinguishable; in other cases, evolutionary convergence can make such patterns misleading; very often in such cases, the "true" picture only emerges from carefully built molecular phylogenies, which may come with major surprises. In addition, closely related species may experience gene flow after divergence, thus potentially blurring species delimitation. By means of advanced inferential methods, we studied molecular divergence between species of the Virola genus (Myristicaceae): widespread Virola michelii and recently described, endemic V. kwatae, using widespread V. surinamensis as a more distantly related outgroup with different ecology and morphology-although with overlapping range. Contrary to expectations, we found that the latter, and not V. michelii, was sister to V. kwatae. Therefore, V. kwatae probably diverged from V. surinamensis through a recent morphological and ecological shift, which brought it close to distantly related V. michelii. Through the modeling of the divergence process, we inferred that gene flow between V. surinamensis and V. kwatae stopped soon after their divergence and resumed later, in a classical secondary contact event which did not erase their ecological and morphological differences. While we cannot exclude that initial divergence occurred in allopatry, current species distribution and the absence of geographical barriers make complete isolation during speciation unlikely. We tentatively conclude that (a) it is possible that divergence occurred in allopatry/parapatry and (b) secondary contact did not suppress divergence.
Classification
Plant sciences
FRENCH GUIANA; AMAZONIA
Fonds IRD [F B010078917]
|
OPCFW_CODE
|
June 12, 2011, 8:02 p.m.
posted by lambda
Motion Capture and Video Conferencing Fun
Keep an eye on the world with your webcam.
Some years back, it was the height of geek cred to have a webcam. At that point in history, the average webcam was a hulking device that looked more like a CCTV camera and cost an inordinate amount of money. Many of these bulky units also needed an expensive video card to squeeze the huge amounts of data through weedy 486 processors. Since those early days, the success of the webcam has skyrocketed and virtually everyone has picked one up for peanuts.
With this explosion of webcams and the rapid growth in broadband speed, videoconferencing has become something of a reality. [Hack #63] covered how to use the GnomeMeeting application to make phone calls over the Internet. In this hack, I cover the video conferencing side of GnomeMeeting as well as explore how to enable motion capture so that you can use it to form a security system in your home/office.
Setting Up a Webcam
Before you get started using GnomeMeeting and motion capture, the first step is to ensure that you have a working webcam configured. With more and more people using Project Utopia [Hack #93], device configuration is becoming less of an issue, but it probably needs a brief discussion.
First, you should find out which driver your webcam needs. A number of online hardware databases and some sensible Google searching can help you with this. Then you can find out if that driver is included in your kernel version or if you need to upgrade to a later kernel [Hack #89] that does support your webcam. If the driver is not included in the latest kernel version or you need a newer version of the driver than the one that's included in the kernel tree, you will probably need to patch the kernel source to get the driver you need.
In addition to using a driver for your webcam, you should also ensure that you compile Video 4 Linux support into your kernel. Video 4 Linux provides a standardized method for the kernel to handle video devices. Support for this is in the main kernel tree. It is recommended that this be compiled as a module that can be loaded when you access your webcam.
Most webcams are USB-powered, so you need to ensure that your USB system is configured correctly [Hack #93]. When you plug in a camera, it should load the Video 4 Linux module. Check that it does with this command:
user@host:~$ lsmod
In the output you should see videodev listed. If it is not listed, you should insert it with insmod:
user@host:~$ insmod videodev
Once Video 4 Linux is loaded, it creates one or more video entries in /dev. Check this with:
user@host:~$ ls -al /dev/video*
When you run this command, you should see at least one entry appear. If this is not the case, your camera is not working with Linux. You should double-check your previous work to make sure you did everything necessary.
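The same check can be scripted. Below is a small sketch of my own (not part of the original hack) that lists whatever Video 4 Linux device nodes exist:

```python
import glob

def find_video_devices():
    """Return the Video 4 Linux device nodes created under /dev."""
    return sorted(glob.glob("/dev/video*"))

devices = find_video_devices()
if devices:
    print("Webcam device(s) found:", ", ".join(devices))
else:
    print("No /dev/video* entries - the camera is not working with Linux yet")
```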
When you first run GnomeMeeting, you are taken through a configuration druid that helps you set up and configure the program. Included in this setup routine are some features for ensuring your webcam is working properly. At the end of this process, you can click the webcam icon and see the video from your camera in the window.
If you see a corrupted picture when viewing video in GnomeMeeting, the webcam driver might have bugs that require an update to a newer driver version; this has been a problem with the OV511 chip-based range of devices. You should check your camera with a range of software such as xawtv or Camorama. If you can get video working in other tools, it might be a problem with how GnomeMeeting is accessing the device. If this is the case, you should contact the GnomeMeeting developers at http://www.gnomemeeting.org.
Creating a Motion Capture Camera
The concept of motion capture is fairly simple. You set up a camera in a particular location and the camera registers when a particular threshold of pixels changes. As an example, you could have a camera focused on a room, and if someone walks past the camera the recording software is triggered by the motion.
This hack covers a tool called motion that is incredibly flexible in dealing with a variety of motion capture needs. What is particularly interesting about motion is the range of responses that can be triggered when motion occurs. The software can send you an email, update a database, save a picture, record a video clip, play a sound, and more. motion is also flexible in how it is configured and used.
To get started, first you should install motion using your distribution package manager, or from the official web site at http://www.lavrsen.dk/twiki/bin/view/Motion/DownloadFiles. You also need to download the software dependencies if you want to save images and movies when movement occurs. Details about these requirements are on the motion web site.
Running motion is simple; just run it from the command line:
user@host:~$ motion
motion reads a central configuration file called motion.conf, which is normally stored in /etc. This easy-to-configure file contains settings for all features within motion. The first section that you should concentrate on is called Capture Device Options. Here you should set videodevice to the device in /dev that you are using (this is usually /dev/video0). You should also adjust the frame rate, as this affects the accuracy of the webcam. The next important section to complete is Motion Detection Settings, where you should set the threshold setting to something that is suitable. This setting specifies how sensitive the motion capture is. To test this, run motion, move in front of the camera, and see how the software reacts. A good test is to look at the camera, stand still, and move your eyes, mouth, and other parts of your face to see if the movement triggers the camera.
The rest of the file contains settings that can be used to send you an email when motion occurs, store information in a database, and store images and video. If you want to store images and video, you should ensure that you set the target_dir setting to a directory where you want to store the images/video.
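Putting the settings above together, a minimal motion.conf might look like the following sketch; the values are illustrative guesses, though the option names (videodevice, threshold, target_dir) are the ones discussed above:

```
# Capture Device Options
videodevice /dev/video0
framerate 5

# Motion Detection Settings: higher threshold = less sensitive
threshold 1500

# Where images and video clips are written when motion is detected
target_dir /home/user/motion-captures
```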
motion also includes a comprehensive set of command-line options that negate the need for a configuration file in some cases. These command-line options are useful if you want to use motion in a very specific way, possibly in a script or cronjob. In addition to this flexibility, motion includes a special execute option with which you can specify a script or command that can be executed when motion is detected.
|
OPCFW_CODE
|
Do not forget to mention that an ALAC file is converted on the fly (as when listening to the file by way of iTunes) with no loss in sound quality, i.e. the result is the same as playing the WAV or AIFF file; with a FLAC file converted on the fly (as when listening to the file through foobar2000) there can usually be a loss in sound quality, i.e. the result is NOT the same as playing the WAV or AIFF file.
Nevertheless, an AIFF file is about five times the size of an MP3 file recorded at 256 kbps, and ten times larger if recorded at 128 kbps. That means a 5-minute song can take about 50 MB of disk space. In terms of storage, AIFF audio files can take more than 10 MB per minute of audio. AIFF is similar to a WAV file in that it uses the same sampling rate and sample size.
Before deciding which one to go for, I compared the sound quality of the WAV files converted by these programs. In the end, I chose AuI ConverteR, simply because I find the sound quality of the files it converts better than that of the other programs I have tried. In addition, it has been a pleasant experience to feed my ideas back to the AuI developer for improvements and bug fixes; he always responds quickly and in a friendly manner. No, I always convert my audio to AIFF or WAV first.
Before going into the full command-line description, a few other things help to sort it out: 1) flac encodes by default, so you must use -d to decode; 2) the options -0 through -8 (or --fast and --best) that control the compression level really are just synonyms for different groups of specific encoding options (described later), and you can get the same effect by using those options directly; 3) flac behaves similarly to gzip in the way it handles input and output files.
For example, there is a compressed version called AIFF-C and another variant called Apple Loops, which is used by GarageBand and Logic Audio - and they all use the same AIFF extension. Also much like WAV files, AIFF files can contain several kinds of audio. AIFF stands for Audio Interchange File Format. Much like how Microsoft and IBM developed WAV for Windows, AIFF is a format that was developed by Apple for Mac systems back in 1988.
Technical Editor Hugh Robjohns replies: The first AIFF is, in theory, a bit-accurate copy of the CD audio and should sound identical to the CD. In practice, the accuracy of the rip depends on the cleanliness and quality of the CD itself, and the capability of the disc player and ripping software. The second AIFF is a 'capture' of the output from the AAC file decoder. There is no 'up-conversion' as such: lossy data codecs such as AAC throw data away and it cannot be retrieved. Lossless audio data compression options, such as FLAC or ALAC (as selected here in iTunes), may not reduce the file size as much as lossy codecs like MP3 or AAC, but neither do they sacrifice any actual audio data, so the quality remains every bit as good as the CD, WAV or AIFF original from which the compressed file was made.
I talked to the senior audio software engineer in charge of Switch and asked him why you should pay for conversion software. He told me, "Reliability, stability and quality." He pointed out that NCH Software has continually updated and improved Switch for more than 20 years, and every time a new version is released, it passes through a range of intensive internal testing procedures. If you are serious about the quality of your music collection and other audio files, it is worth spending a few dollars to make sure the software does not impart unwanted artifacts or noise during the conversion process.
Most software supports Ogg (see chart), but everything supports AAC, so you may want to consider it over MP3 when you convert music down from lossless formats. AAC and Ogg Vorbis files weigh in just slightly larger than MP3s, albeit by a negligible amount. It is a subtle difference, but once you know it is there, it is a bit annoying that MP3 became the ubiquitous format rather than one of the others.
The volume adjuster amplifies the volume of audio that is too quiet, or of parts of it such as speech and voices. You can enjoy a higher class of converted sound thanks to the built-in volume adjuster and audio effects. Audio effects include fade-in, fade-out, and trimming of silence at the beginning and end of tracks. Enabling audio effects ensures a flawless transition between songs.
Next, click Configure Encoder to change the settings for the LAME MP3 encoder. By default, it may be set to Standard, Fast, which does not give you a very high-quality MP3 file. Some music file types, including WAV (.wav), AIFF (.aiff), and RA (.ra), cannot be uploaded to your library using Music Manager or Google Play Music for Chrome. An uncompressed PCM audio file is about 10 times bigger than a CD-quality MP3 file. Most likely you will be using a compressed or uncompressed lossless format like PCM audio, WAV, AIFF, FLAC, ALAC, or APE.
|
OPCFW_CODE
|
Presentation on theme: "Overview: Introduction to IPv6; IPv4 and IPv6 Comparison; Current issues in IPv4; IPv6 solutions for IPv4 issues; New issues of the new protocol"— Presentation transcript:
Overview
- Introduction to IPv6
- IPv4 and IPv6 Comparison
- Current issues in IPv4
- IPv6 solutions for IPv4 issues
- New issues of the new protocol
The Problem
The problem is that the current Internet addressing system, IPv4, only has room for about 4 billion addresses -- not nearly enough for the world's people, let alone the devices that are online today and those that will be in the future: computers, phones, TVs, watches, fridges, cars, and so on. More than 4 billion devices already share addresses. As IPv4 runs out of free addresses, everyone will need to share.
How are we making space to grow? Clearly the internet needs more IP addresses. How many more, exactly? Well, how about 340 trillion trillion trillion (or, 340,000,000,000,000,000,000,000,000,000,000,000,000)? That's how many addresses the internet's new "piping," IPv6, can handle. That's a number big enough to give everyone on Earth their own list of billions of IP addresses. Big enough, in other words, to offer the Internet virtually infinite room to grow, from now into the foreseeable future.
IPv6 Adoption Measuring the availability of IPv6 connectivity among Google users. The graph shows the percentage of users that access Google over IPv6.
When is the transition happening?
At Google, it is believed that IPv6 is essential to the continued health and growth of the Internet, and that by allowing all devices to talk to each other directly, IPv6 enables new innovative services. Replacing the Internet's plumbing will take some time, but the transition has begun. World IPv6 Launch on June 6, 2012, marks the start of a coordinated rollout by major websites and Internet service and equipment providers. You do not need to do anything to prepare, but you can learn more and support IPv6 if you are interested.
Introduction to IPv6
Why IPv6? Important IPv6 features:
- Large address space
- Simplified header
- Faster packet processing
- Enhanced QoS
- Improved mobility and security (Mobile IP, IPsec)
- Greater protocol flexibility
- Dual-stack approach (6to4 tunneling)
[IPv4 header diagram: Version, IHL, Service Type, Total Length; Identifier, Flags, Fragment Offset; Time to Live, Protocol, Header Checksum; 32-bit Source Address; 32-bit Destination Address; Options and Padding]
[IPv6 header diagram: Version, Class, Flow Label; Payload Length, Next Header, Hop Limit; 128-bit Source Address; 128-bit Destination Address]
IPv6 addressing rules are covered by multiple RFCs; the architecture is defined by RFC 2373. The address types are: Unicast (one to one), Anycast (one to nearest), and Multicast (one to many). There is no broadcast address; IPv6 uses multicast instead. Anycast is similar to multicast in that the destination is a group of addresses, but instead of delivering the packet to each of them, it tries to deliver it to just one of them (any member of the group, possibly the closest). A typical example of anycast addressing is a client that wants information that several servers can provide, where "any" server will be fine. (Mobile IP)
Tunneling is encapsulating the IPv6 packet in an IPv4 packet. Tunneling can be used by routers and hosts. [Diagram: IPv6 Host A connects through Dual-Stack Router A, across an IPv4 network carrying the "IPv6 in IPv4" tunnel, to Dual-Stack Router B and IPv6 Host B]
In a dual-stack case, an application that: is both IPv4- and IPv6-enabled; asks the DNS for all types of addresses; chooses one address and, for example, connects to the IPv6 address. [Diagram: the DNS server returns both addresses for www.google.com, e.g. 3ffe:b00::1 (IPv6) and 10.1.1.1 (IPv4)]
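The dual-stack lookup described on that slide can be sketched in a few lines of Python. This is an illustration of my own, not part of the presentation, and the simple prefer-IPv6 ordering is a simplification of the address selection real applications perform:

```python
import socket

def resolve_dual_stack(host, port):
    """Ask the resolver for both A and AAAA records, preferring IPv6.

    AF_UNSPEC returns results for every address family the host supports;
    a dual-stack client then typically tries IPv6 addresses first.
    """
    infos = socket.getaddrinfo(host, port, socket.AF_UNSPEC, socket.SOCK_STREAM)
    # Sort IPv6 (AF_INET6) results ahead of IPv4 (AF_INET)
    infos.sort(key=lambda info: 0 if info[0] == socket.AF_INET6 else 1)
    return [(info[0], info[4][0]) for info in infos]

# Each entry is (address_family, address_string)
print(resolve_dual_stack("localhost", 80))
```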
|
OPCFW_CODE
|
Clone this wiki locally
This effort has been sparked by Dale's presentation at STIC 2012.
The goal is to have a version controlled environment for cross-dialect projects. We do not aim to replace existing dialect solutions. There should be a convenient way to run development of a particular branch of a project in a dialect specific manner but be able to export the results into a (usually github based) common cross-dialect repository. The vision is that a cross-dialect project will maintain separate master branches for each dialect and rely on merging capabilities of the version control system to move changes between these branches.
Current Activities (as of March 2012)
Currently there are several dialect-specific solutions brewing in their respective projects. This project is the umbrella where they will eventually merge. At STIC 2012 we agreed on the general layout and format of the code storage in a git repository that we will support from all dialects. The format is versioned to allow its evolution, i.e. a repository will have metadata at the top level indicating which format it uses.
Amber + Gemstone
Dale intends to create a non-Monticello based solution for both of these dialects
Squeak, Pharo, Gemstone
At this point we expect the FileTree (Monticello based) project to evolve towards these goals
Travis Griggs created STIG as a VW specific integration with git. Martin Kobetic is using that as a basis to provide a Cypress solution
Bob Nemec will code the VA Smalltalk implementation.
An implementation of Cypress for Smalltalk/X can be found at https://bitbucket.org/janvrany/stx-goodies-cypress.
The Cypress support is now a part of Smalltalk/X jv-branch.
Proposed file structure
(The following text was added at ESUG 2012 as a result of a discussion between Dale Henrichs, Jan Vrany, Martin Kobetic, and Martin McClure.)
All common files must be UTF8 encoded.
A Cypress repository must have a property file in its root directory, side-by-side with the package directories. The lower levels are as shown in the picture above. Extra files, not defined here, are allowed within the repository directory structure; clients must tolerate their presence.
Common property keys
Properties of a repository, package, or class are stored in a file named 'properties.ston' at the corresponding location in the directory structure; the file contains a STON object (for portability it must be restricted to the JSON subset). Each property file contains a property named "_cypress_copyright" whose value is the same as the copyrightLine property below. Empty properties can be omitted.
Dialect-specific property keys should use a vendor prefix (e.g. 'vw...', 'squeak...').
- commentFile - filename to be used to store comment (of class or package)
- copyrightLine - single line copyright statement that is included in the header of each method file
- licenseFile - filename used to store license text (at the package or repository level).
- name - the name of the class (just the unqualified class name)
- super - the name of the superclass (just the unqualified class name)
- namespace - optional namespace of the class
- superNamespace - optional namespace of the superclass
- instvars - lists the names of instance variables
- classinstvars - lists the names of class instance variables
- classvars - list of class variable names.
- _xx_type - class type identifier; xx is the vendor prefix and the property value is vendor specific; this attribute should be omitted for the most common class type, i.e. non-variable pointer class
- pools - list of variable pool names
- category - category of the class
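As an illustration of the class-level properties described above, a properties.ston file might look like the following (the class name and values are hypothetical, not taken from a real project; the file is restricted to the JSON subset of STON as required):

```json
{
  "_cypress_copyright" : "Copyright 2012 ExampleCorp.",
  "name" : "ExamplePoint",
  "super" : "Object",
  "category" : "Example-Geometry",
  "instvars" : [ "x", "y" ],
  "classinstvars" : [ ],
  "classvars" : [ ],
  "pools" : [ ]
}
```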
Method File Format
A method file starts with the following four lines:
- a single double-quote
- "notice: " + the contents of the copyrightLine property (see above)
- "category: " + method category/protocol
- a single double-quote
The method source code starts on the fifth line.
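Putting those rules together, a method file for a hypothetical printOn: method (assuming a copyrightLine of "Copyright 2012 ExampleCorp." and a category of "printing") would look like this:

```smalltalk
"
notice: Copyright 2012 ExampleCorp.
category: printing
"
printOn: aStream
	"Hypothetical method body; the source starts on the fifth line."
	super printOn: aStream.
	aStream nextPutAll: ' example'
```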
|
OPCFW_CODE
|
In certain cases, you may wish to change the nature or behaviour of Collection Objects in Islandora. By creating a custom collection object, you can override Islandora's default behaviour: for example, you can return objects that have a different relationship to the collection object, present objects in your collection to your viewers in a custom way, and create security policies that restrict access to the items in your collection (overriding Fedora's default behaviour). A simplified overview of Collection Objects is provided in the introduction of this guide. The following chapter provides more information about the default behaviour of Islandora, how Collection Objects can be constructed, and how they can be extended and customized.
Collection Objects have one mandatory Datastream (COLLECTION_POLICY) and several optional Datastreams. The optional Datastreams override the default behaviour of the Islandora Module. You may add Datastreams by navigating to the collection object you wish to modify, and then adding Datastreams via the interface. You may also add these Datastreams using any Fedora tools that you are familiar with.
- CHILD_SECURITY: gives a POLICY Datastream to child objects
A Collection Object can have four Datastreams, although the COLLECTION_POLICY is the only mandatory stream. If you do not have a COLLECTION_POLICY Datastream, additional objects cannot be ingested as members of that collection object. In other words, in order to add items to a collection or sub-collection, the collection object (or "parent-type" object) must have a COLLECTION_POLICY stream. Here is an example of a COLLECTION_POLICY Datastream (as viewed in a browser via the Islandora interface):
The COLLECTION_POLICY Datastream must have an isMemberOfCollection relationship declared, and must be affiliated with islandora:collectionCModel.
The relationship statement tells Islandora that this Fedora object is a collection object. Islandora can then query the resource index for objects that have a relationship of isMemberOfCollection to this collection object.
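As a sketch of what such a relationship statement looks like, here is a fragment of a member object's RELS-EXT Datastream (the PIDs demo:some-object and demo:my-collection are hypothetical; a real repository will use its own identifiers):

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:fedora="info:fedora/fedora-system:def/relations-external#">
  <rdf:Description rdf:about="info:fedora/demo:some-object">
    <!-- membership: this object belongs to the (hypothetical) collection -->
    <fedora:isMemberOfCollection rdf:resource="info:fedora/demo:my-collection"/>
  </rdf:Description>
</rdf:RDF>
```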
isMemberOfCollection is the default relationship used by Islandora, but other relationships can be used by declaring them in the COLLECTION_POLICY Datastream. If you use a relationship other than the default, you will also have to use a QUERY Datastream. (In other words, any new relationship declared in the COLLECTION_POLICY Datastream makes the QUERY Datastream mandatory.)
If you wish to create a new COLLECTION_POLICY stream, you will be writing XML. One way to do this is to start with an example collection policy (there is one available in....) and edit it. The DSID of this datastream must be COLLECTION_POLICY.
A QUERY Datastream is an ITQL query that overrides Islandora's default ITQL query. If you have declared different relationships (not a hasModel relationship) in your COLLECTION_POLICY Datastream, you will have to write a custom QUERY stream to return these relationships. To do this, you will need an understanding of ITQL; resources for learning ITQL are offered in the Bibliography for this guide. Your ITQL query must return SPARQL XML that can be parsed by the default collection view XSLT file, or by a custom COLLECTION_VIEW XSLT that you have written yourself.
When you write a QUERY Datastream, you ask the Islandora module to retrieve a different set of objects related to your collection object than those returned by the default ITQL query. The default ITQL query is located in the Islandora module, in the collection_class.inc file. This is the query:
$query_string = 'select $object $title $content from <#ri>
    where ($object <dc:title> $title
      and $object <fedora-model:hasModel> $content
      and ($object <fedora-rels-ext:isMemberOfCollection> <info:fedora/' . $pid . '>
        or $object <fedora-rels-ext:isMemberOf> <info:fedora/' . $pid . '>)
      and $object <fedora-model:state> <info:fedora/fedora-system:def/model#Active>)
    minus $content <mulgara:is> <info:fedora/fedora-system:FedoraObject-3.0>
    order by $title';
Note that if you write a QUERY Datastream, you may also have to write a COLLECTION_VIEW Datastream to parse and display your results. Sample QUERY Datastreams are provided in the Resources section of this guide.
A COLLECTION_VIEW Datastream contains an XSLT that defines how objects in a collection are displayed. You may wish to write a custom COLLECTION_VIEW stream to change the look and feel of your collection for visitors. For a custom XSLT used for a COLLECTION_VIEW Datastream, please refer to the samples and resources section. The XSLT in your COLLECTION_VIEW Datastream has to be matched either to the default ITQL query used by Islandora (and found in the Islandora module under sparql_2_html.xsl) or to the custom QUERY Datastream that you have written. Your XSLT will parse the SPARQL XML returned by either the default query or the query you have written. This is the default XSLT, called from the Islandora module at object_helper.inc.
The optional CHILD_SECURITY Datastream is a hand-written eXtensible Access Control Markup Language (XACML) policy that provides security at the collection level. To learn more about XACML, visit our resources section. The CHILD_SECURITY Datastream interacts with the default set-up of your Fedora repository. In order to use the CHILD_SECURITY stream effectively, you may wish to review the Islandora and Security section of this guide.
The CHILD_SECURITY Datastream overrides whatever default security you have configured as part of your Fedora and Drupal installations (see the Fedora installation section of this document, particularly information about global XACML policies). For example, if objects in your Fedora repository are, by default, available to the public, you may wish to write a CHILD_SECURITY stream for a collection to restrict access to that collection to specific users, or specific Drupal Roles.
All objects that are ingested as members of a collection object that has a CHILD_SECURITY stream will have a POLICY stream. Without the POLICY Datastream, objects default to your base security configuration. This means that if you add a CHILD_SECURITY stream to a collection object after items are already affiliated with the collection, those objects will not adopt the CHILD_SECURITY policies (and they will have no POLICY Datastreams).
Note that Islandora does not change the UI in the case where a POLICY Datastream exists. This means that the icons for managing objects (such as the purge option) will still be available to users. However, if users attempt to perform the action and they do not have permissions corresponding to that action, they will receive an error. We are hoping that future versions of Islandora will not have this limitation.
Generating XACML Policies
Non-developers may want to use the XACML Editor module to generate XACML policies using a graphical user interface. Further instructions for this module are found in Chapter 5: Using the XACML Editor
Hand-written XACML policy files can be added to $FEDORA_HOME/data/fedora-xacml-policies/repository-policies. You can retrieve an example XACML policy file from the Resources section of the guide. Note, however, that this example opens API-M to all authenticated users in your Drupal instance.
When you write a CHILD_SECURITY stream you are writing a XACML policy. That XACML policy must be parseable (usable) by Islandora’s simple parser. Islandora’s simple parser expects the CHILD_SECURITY Datastream to contain a XACML policy that denies access to all users, and then provides exceptions for users with certain Drupal Roles, or User IDs. If users have IDs or roles that are permitted access in the XACML policy, they will be allowed to ingest, view, or modify elements in that collection. You can view an annotated sample XACML policy in the Resources section of this document. This document can act as a starting point for a collection-object CHILD_SECURITY Datastream.
In order for Islandora to be able to browse collections, your collection object must also have a hasModel entry in the RELS-EXT Datastream that points to islandora:collectionCModel. This lets the module know that the object represents a collection and it will then query for objects that are members of this collection.
|
OPCFW_CODE
|
Blender is an awesome free 3D application that provides functionality similar to Autodesk Maya, 3ds Max, and Cinema 4D. Blender is a powerful, easy-to-use 3D application if learned wisely. If you are a user of another 3D application, you may get confused by the nonstandard GUI, the keyboard shortcuts, and the lack of well-detailed documentation.
I was one of the failed Blender climbers who had tried Blender several times before. Being stuck with Autodesk Maya, it was a pain to learn other 3D software. I even tried Cinema 4D, which is very easy to learn, but failed. So I understood that Blender is not hard; my love for Maya was just too strong. I started using Blender at home and in production, and soon got used to it. Now I can easily make motion graphics with Blender, and I am still learning it. During my experiments with Blender I found that there are many gaps in the area of Blender tutorials, so I am adding more tutorials about Blender.
Enough introduction. Let's learn to use Blender render layers efficiently in production. Don't expect a very complex setup: I will guide you through three cubes in a scene with render layers. It will be easy to follow and understand. I am adding a text version, but I recommend watching the video tutorial.
Author : FermiCG
Software : Blender 2.68a
Difficulty : Beginner
Download : Render Layer cubes finished.blend
Blender has a well-coded render layer, render pass, and compositing workflow. Once you learn how things work inside Blender, it is easier to use and more straightforward than Maya's render layer system.
This tutorial is based on Blender 2.68a, but any 2.6x release will work fine. If you don't have Blender, download it free from blender.org.
Start with a new scene. You will get a default cube, camera, and light. Select the cube, add a new material, and change the colour to red. (Find the material tab, press the + button to add a material, and change the diffuse colour to red.)
In the viewport, duplicate the cube (Shift+D) and delete the existing material by clicking the – button on the material tab. Add a new material and change the colour to green.
Duplicate the cube again and replace the material with blue one.
Place the cubes in such a way that they intersect each other. See the image.
Press 0 to look through the camera. If needed, adjust the camera so all three cubes are in frame.
We are going to render the scene with Cycles, so switch the renderer to Cycles. Set the quality and dimensions. Optionally, you can turn on transparency and RGBA. Press render and see how it looks.
Jump to the compositing mode and look at the connections. You will get a Render Layers node connected to a Composite node (just turn on nodes). We need to get the three cubes into three different render layers. Go back to the default view.
To get render layers working properly, we also need scene layers. Select the green cube, press M, and choose the 2nd layer; the green cube is now on the 2nd scene layer.

Select the blue cube, press M, and choose the 3rd layer. You can click on each scene layer to confirm they are in the correct order. Select the first layer and render: we get the red cube with lighting. Select the second layer and render: the 2nd and 3rd layers won't have any lights or shadows. Select layers 1, 2, and 3 (shift-click) and render again: we get all the cubes with light and shadow.
We have successfully created scene layers, but we need render layers. So find the render layer tab in the Properties panel.
You can see a drop-down with a render layer inside and "+, –" buttons outside; this is the render layer tab. Below that is the Layer panel. Expand it and you will see Scene, Layer, Exclude, and Mask rows with 20 columns each. These are the render layer settings. The Scene row is the same as the scene layers in the viewport; you can click on scene layers 1, 2, and 3 to check. The Layer row on the right is the actual render layer. So select the 1st scene layer and rename the render layer to "red". Also select the 1st column in Layer. Make sure you are not selecting anything else.
Select the 2nd scene layer and add a new render layer by pressing + button. Rename to green. Select the 2nd column under layer.
Select 3rd scene layer and add a new render layer. Rename it to blue. Select the 3rd column under layer.
Select all the three scene layer and render.
Blender will render the cubes one after the other. You may see black holes in the render. You will get something similar to this.
Switch to the compositing view. You can see the Render Layers node connected to the Composite node. Select the Render Layers node, click on the layers, and choose a different layer. You can use this for further compositing.
Leave it on red. Duplicate the Render Layers node twice (Shift+D) and change the 2nd one to green and the 3rd to blue. (Shift+Backspace gives a full view.)
Now connect the image of red to the bottom image of alpha over node.
Add another alpha over node. Connect the blue image to bottom image of 2nd alpha over node. Also connect image of 1st alpha over to the top image of 2nd alpha over node.
Now we need to connect the 2nd alpha image to the composite node. You can lose the connection between red layer and composite node.
Connect the second alpha over image to the image of viewer node.
Now we have the composited image. You can swap the connection between the render layers and the alpha over nodes, but you still won't get the original output: the black holes remain. To get rid of those black holes we need to add mask layers. Switch to the default view. Select the red render layer (which contains the red cube) and under mask layers select the 2nd and 3rd columns, because we need them as masks for the first layer. Repeat the same for all the layers. Remember, you need to choose the layers you want to mask: for green, use the 1st and 3rd; for blue, use the 1st and 2nd.
Now we have the cubes in correct order. You can swap the connection and still the render is the same.
One more render layer setting needs explaining: the exclude layer. The exclude layer in Blender is used to exclude something from the render. For example, if we have a plane intersecting the three cubes and we need to render the plane separately, without any shadow or mask, we can use the exclude layer. Additional uses of the exclude layer can be explored in further tutorials.
So add a plane and place it in such a way that it intersects the cubes and faces towards the camera.
Move it to a new scene layer (4). Add a new render layer named "plane". Set the render layer. Don't use any mask layer.
Jump to composite mode. Add a new alpha over node and a render layer node. Change the render layer to plane. Connect it to the bottom image of 3rd alpha over node. You need to connect the 2nd alpha over node to the top image of 3rd alpha over node. Also connect the 3rd alpha over to the viewer node and composite node. Render to see the result. You should get something like this.
We need the plane as it is. Switch to the default view and, in the render layer settings, select the plane render layer. In the exclude layer, select columns 1, 2, and 3. Select all the scene layers and render again. Now we will get
This concludes the render layer in Blender tutorial.
Hope you enjoyed this tutorial. Drop your comments and critiques.
|
OPCFW_CODE
|
Fix tagged page page copy regression in _copy_m2m_relations
This pull request fixes a regression that was introduced during the https://github.com/wagtail/wagtail/pull/6277 refactor. Currently copying a page that inherits from a page that inherits from a TaggedPage (a page with ClusterTaggableManager) fails with the following error.
ValueError: Cannot add [] (<class 'modelcluster.queryset.FakeQuerySet'>). Expected <class 'django.db.models.base.ModelBase'> or str. when copying a SubBlogPage.
This is because _copy_m2m_relations (source) uses source._meta.parents instead of source._meta.get_parent_list().
Thanks for contributing to Wagtail! 🎉
Before submitting, please review the contributor guidelines https://docs.wagtail.io/en/latest/contributing/index.html and check the following:
Do the tests still pass? Yes
Does the code comply with the style guide? Yes
For Python changes: Have you added tests to cover the new/fixed behaviour? Yes
For front-end changes: Did you test on all of [Wagtail’s supported environments] (https://docs.wagtail.io/en/latest/contributing/developing.html#browser-and-device-support)? N/A
For new features: Has the documentation been updated accordingly? N/A
It may also be helpful to see a full broken example. I've created a proof of concept app that is unable to copy.
In this example trying to copy a SubBlogPage will raise the following error.
`ValueError: Cannot add [] (<class 'modelcluster.queryset.FakeQuerySet'>). Expected <class 'django.db.models.base.ModelBase'> or str.` when copying a `SubBlogPage`.
Once I switch to a patched version of Wagtail that uses source._meta.get_parent_list(), all is well.
```python
#
from django.db import models
from modelcluster.fields import ParentalKey
from modelcluster.contrib.taggit import ClusterTaggableManager
from taggit.models import TaggedItemBase
from wagtail.admin.edit_handlers import FieldPanel
from wagtail.core.models import Page


class BlogPageTag(TaggedItemBase):
    content_object = ParentalKey(
        Page, on_delete=models.CASCADE, related_name="tagged_items"
    )


class HomePage(Page):
    pass


class BlogPage(Page):
    tags = ClusterTaggableManager(through=BlogPageTag, blank=True)

    promote_panels = Page.promote_panels + [
        FieldPanel("tags"),
    ]


class SubBlogPage(BlogPage):
    pass
```
I'm happy to open an issue or make changes if needed. Thank you!
Hi @cspollar. Thank you very much for the thorough write-up and submission - this is high quality stuff :) I just have one suggestion, which might make for slightly easier reading.
Thanks @ababic your change makes sense to me (and the tests still pass).
@cspollar FYI, I raised this in today's core team meeting to see where this can fit in the release cycle. Because this change has been in place for a few versions, it falls outside the usual pattern of deserving its own release. However, it looks like there are a couple of other things we might want to release next week, and this may be able to slide in alongside those.
Thanks again for such an excellent contribution! I'll let @gasman pick things up from here.
Thanks @cspollar! This fix looks good to me.
It looks like the incorrect parent model logic was also present in the pre-#6277 version of this code, and your test case does indeed fail when run against the 2.10.x branch too, so it seems that this bug isn't a regression - it's basically been around forever... On that basis, this will go through our regular release cycle and be included in the 2.14 release, rather than being backported to existing releases (which are generally limited to regressions, in the interest of not introducing unintended changes).
Merged in c9a55d8b1bf519f97fab32c854b4332f415b5176
@gasman Thanks for getting this merged. I upgraded from the 2.9.x branch (where the tests do pass). It looks like the regression was probably added in the 2.10.x branch. Regardless, I created a fork and will run from there until 2.14 is released. In the future I'll try to keep our version of Wagtail more up to date.
|
GITHUB_ARCHIVE
|
18. Wildcard week¶
Design and produce something with a digital fabrication process (incorporating computer-aided design and manufacturing) not covered in another assignment, documenting the requirements that your assignment meets, and including everything necessary to reproduce it.
Possibilities include (but are not limited to) composites, textiles, biotechnology, robotics, folding, and cooking.
This week I used a thermoformer and learned, from the molds I used, what its limits are and what needs to be done to get a good thermoformed mold.
Files are available here
Materials and processes¶
Having my final project in mind, my first idea was to work on felt or velvet and the related materials needed to make it hold on a structure, in order to obtain a deployable origami structure… But the delivery schedule for the materials was too tight, so I decided to use the brand new Mayku FormBox thermoformer that had just arrived in the lab:
I decided to reuse the molds made for the molding and casting assignment, and to adapt them to this thermoforming process. The redesign is done in Fusion 360:
- Mold 1: A flat surface is added below the four cubes to hold them together.
- Mold 2: Small holes of 1 mm diameter are added in the bottom of the cavities to let the air be evacuated by the vacuum, so the form sheet fits the cavities.
Files are available here
Both molds are then 3D printed in PLA on the Fablab's Prusa MK3 printer. The thermoformer indeed requires a mold that is strong enough to resist the temperature and pressure applied during the process:
The idea is to compare the efficiency of the process for the two molds, as they are supposed to produce the same shape, knowing that the shapes are at the limits of the thermoformer's capabilities…
I enlarged the holes a bit with a 1.5 mm mill, as some of them seemed to be in bad shape…
I add an additional hole in the middle of the mold 1:
With the chosen structures, the second of the three rules for thermoforming is not satisfied:
- Remove all undercuts: OK
- Add draft angles: NOK
- Add air holes: OK
Moreover, the height/width ratio may be too big, and having four structures close together will probably cause difficulties…
The structures are small, and we want to test the capabilities of the process, so let's try…
Two types of sheets are available:
- form sheets: white / satin surfaces / easy to cut / ready to paint / durable / replicates fine details
- cast sheets: transparent / reusable molds / food safe / non-stick surface / flexible / easy to cut.
As I plan to make chocolates for the French fab managers ;-), I'll use cast sheets.
The manufacturer provides settings data, but it quickly appears that the suggested times are not long enough: the material is not soft enough. So I watch the sheet's deformation, and when the curvature is clearly visible, I press it down onto the mold.
Here are the results, after some difficult but successful extractions from the molds!!
As expected, the process is not ideal:
- mold 1 doesn't produce closed cubic volumes, as the cast sheet didn't stick to the bottom plate; the sheet thickness is quite fair, though, except in the center.
Adding additional holes in the plate between the cubes doesn't really improve the process…
- mold 2 produces nice cubic volumes, but the sheet in the bottom of the volumes is too thin, so thin that one of the cubes is pierced…
But this doesn't stop us from making chocolates. Let's use mold 2:
melting chocolate with some water in the microwave, and pouring the chocolate in the mold,
oops, it leaks a bit…
a few hours in the fridge, and here we have:
Still to do if I had time....¶
The process is far from ideal, but I've learned a lot about it, and I could now make a perfect reusable mold with fine details!!
|
OPCFW_CODE
|
Office 2016 / ODBC - Problem with drivers in Access files - Please confirm if this is By Design
A customer has an HRD application that connects to an Access DB. After installing ODBC 2016 or Office 2016 (which includes ODBC), the awards in Logbook cease to work and report only zeros. The customer has a workaround for now:
Fix: An automatic workaround will be put into HRD.
Modify the registry to use the new (old) strings.
1. Pull up regedit and go to HKCU\Software\ODBC\ODBC.INI\ODBC Data Sources
2. Note the "Microsoft Access Driver (*.mdb, *.accdb)" entries. That string was added partially in 2007 and fully in 2010. Some older entries may still have the old-style string.
3. Change them back to the original “Microsoft Access Driver (*.mdb)”
(Although this looks trivial, the poorly written ODBC driver requires an EXACT match.)
4. Restart HRD and it should work.
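The edit in step 3 can also be captured as a .reg file for repeated application. This is a sketch: "HRD Logbook" is a hypothetical DSN name, and you should substitute the value names actually present under the key on the affected machine.

```reg
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\ODBC\ODBC.INI\ODBC Data Sources]
; Revert the driver string to the old style the application expects.
; "HRD Logbook" is a placeholder DSN name.
"HRD Logbook"="Microsoft Access Driver (*.mdb)"
```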
6) The customer clearly confirmed to me that this has nothing to do with the bitness of Office, the OS, or the drivers.
What we need from PG:
Is this an expected behavior or anything By Design? Please confirm.
Thanks for calling out this issue Subhabrata.
We are investigating this issue.
@Mark Burns – this is a different issue that we are looking into as well. Please post as a separate suggestion so people can vote on it and we can address.
I am on Win10 with Office 2016 C2R; the driver string is not even showing up on my machine.
Separate Item now posted as requested:
Frank Rotolo commented
@Mark, I agree with you. FYI, I recently changed my email address in UserVoice settings and, lo and behold, I now have 20 more votes available! The votes I had cast under my previously registered email address didn't disappear.
@Frank / Microsoft,
Isn't it a tad silly to have to UN-vote for something I DO support in order to add something new, only to then un-vote for what I just added to put my vote back where I had it (and leave the new suggestion/idea with NO votes?!), or else just forget that I supported another idea instead? The 20-vote limit is silly to begin with, though I understand that it serves an anti-spam purpose.
Oh, yes, I know that. They're there, but they're unusable by 3rd-party / out-of-process applications!
So, they're basically NOT there at the same time.
Frank Rotolo commented
Mark, the only temporary workaround is to un-vote the least important ideas you voted on. As for the ODBC drivers, they're installed but not visible outside of Office because they're inside the CTR virtualization container. I don't know whether MS intentionally designed it this way, and if so, why, but it's certainly a major problem!
I am unable to post any new items due to the 20-vote (dumb) limit for voting on items. Once this limit is reached posting new requests is disabled.
This is only made worse by the effects of the different MS Office installers. Traditional .MSI Office installs do have the issue as stated. Click-To-Run (App-V based) Office installs have a WORSE issue, in that the ODBC drivers APPEAR NOT TO BE INSTALLED AT ALL to the host OS.
|
OPCFW_CODE
|
How to initialize EnvDTE80.DTE2 object to access the solution?
I am using reference of EnvDTE80 in my code to open a visual studio solution and then traverse through the projects present in it.
I am new to this, and am using the code snippet below:
First, I defined an object of the type below:
EnvDTE80.DTE2 dte2;
then tried to access the solution through it :
Solution2 solution = dte2.Solution as E2.Solution2;
if (solution == null)
{
return;
}
Projects projects = solution.Projects;
foreach (E1.Project project in projects)
{
Property outputPath =
project.ConfigurationManager.ActiveConfiguration.Properties
.Item("outputPath");
outputPath.Value = buildFolderPath;
project.Save(project.FullName);
}
Basically, I am trying to change the project output path through this code snippet, and whenever I run it I get an error stating "Object reference not set to an instance of an object".
The "dte2" object is null.
Any suggestions on how to initialize it?
You did not initialize your dte2 variable, so it is null. You need to get an instance of that interface from somewhere, like dte2 = GetService(typeof(DTE2));, but I currently don't know which service provider is available in your code.
Your exception's stack trace should clearly show that it's not dte2.Solution which is null, but dte2 itself.
https://msdn.microsoft.com/en-us/library/68shb4dw.aspx
@RenéVogt Yes the dte2 itself is null. And I tried to initialize it using
(EnvDTE80.DTE2)System.Runtime.InteropServices.
Marshal.GetActiveObject("VisualStudio.DTE.12.0");
But it gives me an exception saying "Invalid class string (Exception from HRESULT: 0x800401F3 (CO_E_CLASSSTRING)) "
Go, read, do what it says, and if you don't figure out how to fix this, you can [edit] to let us know what you did and what happened and we can reopen this.
@Will Aware about the Null Reference Exception. I needed help with how to initialize the EnvDTE2 object.
I have solved it now by using IRunningObjectTable and IEnumMoniker, getting the display name of each moniker and appending the process ID of the same Visual Studio instance to get the active object.
!VisualStudio.DTE.9.0:" + pId.ToString());
Thanks for your help anyway.
@RenéVogt Thanks. Your suggestions somehow led me to the solution.
You should add this comment http://stackoverflow.com/questions/41803809/how-to-initialize-envdte80-dte2-object-to-access-the-solution?noredirect=1#comment70800762_41803809 as an [edit] to your question. That's the crucial bit of info it is lacking. Then you can add an answer with some detail about how you used the ROT and accept it as correct.
For future Googlers, this is how to instantiate it:
var slnPath = @"C:\Code\public\src\website.sln";
// Create a new (hidden) Visual Studio 2017 instance via its COM ProgID
var envDteType = Type.GetTypeFromProgID("VisualStudio.DTE.15.0");
var envDte = Activator.CreateInstance(envDteType, true);
var dte2 = (DTE2)envDte;
var solution = (Solution4)dte2.Solution;
solution.Open(slnPath);
|
STACK_EXCHANGE
|
Date should be either required or not
Today I was trying to do some experiment with IRB: "Date.today" threw a NoMethodError.
But Date was defined: "defined? Date" == "constant".
If I explicitly required "date" it worked, but this is pretty weird. It would be better if either:
- I got an undefined "Date" class instead, or
- "date" was automatically required.
The current state is pretty misleading.
This also happens in JRuby, so I guess this is somehow intended behavior, but it doesn't make any sense to me.
Updated by mame (Yusuke Endoh) over 7 years ago
- Status changed from Open to Assigned
- Assignee set to drbrain (Eric Hodel)
This is caused by rubygems/specification:
$ ruby -rrubygems/specification -e 'p Date; Date.today'
`<main>': undefined method `today' for Date:Class (NoMethodError)
Eric, is this intentional?
Yusuke Endoh
Updated by rosenfeld (Rodrigo Rosenfeld Rosas) over 7 years ago
Is Date that slow to require when compared to Time? Wouldn't it be possible to make it faster to load by lazy loading some parts and include it by default instead of having to require it manually?
That would certainly make programmers happier ;)
Updated by drbrain (Eric Hodel) over 7 years ago
- Status changed from Assigned to Closed
The pull request was rejected due to its implementation. For compatibility with old gems a Date class must be defined. I decided to switch to require 'date' since rubygems/specification.rb is lazily loaded now and the cost of loading it is low. The commit to rubygems will be imported in the future so I will close this ticket.
Time is part of the core libraries with parsing and extra output formats defined in time.rb. Date exists entirely outside of the core ruby classes so you must require it separately.
Historically Date was much slower than Time due to its implementation. Today that gap is narrower, but Date is still slower than Time. (As a trade-off, Date and DateTime give you a much larger range than a Time.)
Remember, you must require libraries you directly depend on. Do not expect a dependency to load it for you indirectly. This will only lead to bugs and incompatibility as your dependencies change.
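A short snippet makes the rule concrete (the date arithmetic below is just an illustration of stdlib Date behavior):

```ruby
# Depend on 'date' explicitly instead of assuming RubyGems (or any
# other dependency) has already loaded it for you.
require 'date'

d = Date.new(2012, 1, 31)
puts d.next_day   # => 2012-02-01
puts(d >> 1)      # one month later, clamped to month end => 2012-02-29
```

With the require in place, Date works regardless of what your dependencies happen to load.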
Updated by kommen (Dieter Komendera) about 7 years ago
This is an issue again in Ruby 2.0.0 preview 2 as the require 'date' in rubygems/specification.rb as described in note 6 in this issue was removed again with this commit: https://github.com/rubygems/rubygems/commit/04422c3c7fc0273a5ef9d01641fb0a2a4ee0d03d
I found it to be a backward compatibility issue for example with the compass gem: https://github.com/chriseppstein/compass/blob/stable/compass.gemspec#L6
Should this issue be reopened?
Updated by drbrain (Eric Hodel) about 7 years ago
I don't know if I can solve this problem with RubyGems.
Date is now a C extension.
At ruby install time, Gem::Specification is loaded by rbinstall.rb which is run from miniruby.
miniruby cannot load C extensions (as mentioned in the commit referenced above).
I'll see if Date can be required only when needed. I believe rbinstall calls ruby_code which uses Date, so it may be difficult.
Updated by drbrain (Eric Hodel) almost 7 years ago
- Target version changed from 1.9.3 to 2.6
I am not clever enough to solve this problem.
As I said before, tool/rbinstall.rb cannot load extensions. If I try to lazy-load Date in lib/rubygems/specification.rb, tool/rbinstall.rb fails, obviously.
I cannot figure out how to only load Date when RubyGems was not loaded from mini ruby. Attempting to detect the LoadError results in an infinite loop.
Perhaps someone more clever than I can fix this. I will leave it open for now.
Updated by kommen (Dieter Komendera) over 6 years ago
Just a short update from my side: Since my issue from comment #8 was mostly caused by bad .gemspec files from gems which didn't require 'date' before using it and most (all I know) have been fixed, I think this is not an issue any longer.
Though I still agree with the original bug report that this is strange behaviour.
|
OPCFW_CODE
|
Abstract 3091: The Clinical and Electrocardiographic Phenotype of Unrelated Patients with Genotype-Negative Long QT Syndrome
Long QT syndrome (LQTS) is a heterogeneous group of channelopathies characterized by increased risk of potentially lethal ventricular arrhythmias. LQT1, LQT2, and LQT3 comprise 95% of genetically proven cases and exhibit a number of established genotype-phenotype correlations. The study aimed at examining the phenotypes of genotype-negative LQTS, accounting for ~25% of LQTS cases. An IRB-approved retrospective analysis was conducted on 56 patients (39 female, 25 ± 17 years) who, after genetic testing either in our sudden death genomics laboratory or with the commercially available Familion test, were negative for mutations in the 3 principal LQTS-susceptibility genes (KCNQ1, KCNH2, and SCN5A), and the minor genes underlying LQT5 and LQT6. All had been diagnosed with LQTS, with a clinical diagnostic score of ≥ 3.5 or QTc ≥ 480 ms. The mean diagnostic score was 4.4 (95% CI 4.2 – 4.7); mean QTc was 525 ms (95% CI 508 – 543 ms). Two-thirds were symptomatic (syncope, cardiac arrest, and/or seizures) with exercise-triggered events in 10 (26%). Twenty-one (38%) had a family history of sudden cardiac arrest. ECG showed a T wave pattern suggestive of LQT1 in 32%, LQT2 in 43%, and LQT3 in 18%. In those with exercise-induced symptoms, the ECG was LQT2-like in 50% and LQT1-like in 30%. One patient had post-partum syncope with an LQT2-like ECG. None had an auditory trigger, but 3 patients, all with an LQT2-like ECG, had a family history of auditory-triggered events. One-third of the patients had received an ICD, 58% as secondary prevention. Over 2/3 were on beta-blockers. Among the 45 patients so far tested for mutations in minor LQTS-susceptibility genes, 2 had LQTS-causing mutations in ANKB (LQT4), 1 in SCN4B (LQT10), 1 in AKAP9 (LQT11) and 2 in SNTA1 (LQT12). Genotype-negative patients with a firm LQTS diagnosis show marked phenotypic heterogeneity, suggesting multiple underlying pathogenic pathways. 
Only a few patients have LQTS-causing mutations in minor genes after complete LQT1–12 genetic testing. Classifying genotype negative patients into LQT1-, LQT2-, or LQT3-like profiles may guide the discovery of novel genes encoding channel interacting proteins corresponding to those specific signaling pathways.
|
OPCFW_CODE
|
In this tutorial, we will learn about the MySQL
CONCAT() function. Suppose you have an Employee table that has separate columns for the first name and the last name of the employee.
What if you had to display the full name (first name + last name) of the employee? Such a query would require you to combine the values of the two columns and display them as one.
For such purposes, MySQL provides us with the
CONCAT() function. The
CONCAT() function is one of the many string functions of MySQL. It combines two or more values into one string.
For instance, if you have the following four columns in your table – Address_Line_1, City, State, Postal_Code – then
CONCAT() enables you to combine the values of these 4 columns for a particular record, and you get them as one value.
CONCAT() doesn’t alter the structure of the table by combining columns. It only combines the column values for the given records while displaying them as an output. So,
CONCAT() is usually used with the SELECT statement.
Syntax for MySQL CONCAT()
CONCAT(expression1, expression2, expression3, ...)
Where each expression can be a column name or any other value that you want to concatenate with another.
Examples of MySQL CONCAT() function
Consider the below Students table.
1. Basic Example of MySQL CONCAT()
Before we move on to combining column values of a table using
CONCAT(), let’s take a look at a basic example. Consider the below query,
SELECT CONCAT('Hello!', 'I am ', '14', ' years of age.') AS ConcatenatedString;
We have 4 string arguments passed in the
CONCAT() function. MySQL
CONCAT() will append them in that order and then display them under the alias name
ConcatenatedString. We get the output as,
2. Combining Column Values using CONCAT()
Now let us work with the Students table. How about displaying the first name and last name of every student as one value under the alias of Full_Name?
SELECT CONCAT(FirstName, ' ', LastName) AS Full_Name FROM Students;
And we get our output as,
3. MySQL CONCAT() With NULL Values
Now let us see how the
CONCAT() function deals with NULL values. Before we go ahead, let us insert a row with a few NULL values in the Students table and then display the updated table.
INSERT INTO Students(ID, FirstName, DaysPresent) VALUES (7, 'Meera', 84); SELECT * FROM Students;
Our output is,
Now, what will happen if we combine the values of the FirstName and LastName columns for the student with ID=7? Let us use the following query with the
WHERE clause and find out!
SELECT CONCAT(FirstName, ' ', LastName) AS Full_Name FROM Students WHERE ID=7;
And we get the output as,
It is important to remember that the result of
CONCAT() is NULL if any argument value happens to be NULL.
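If you need concatenation that tolerates NULLs, MySQL also provides the related CONCAT_WS() function, which takes a separator as its first argument and simply skips NULL values instead of turning the whole result into NULL. The query below reuses the Students table from this tutorial:

```sql
-- CONCAT_WS(separator, expr1, expr2, ...) skips NULL arguments, so the
-- student with a NULL LastName still gets a usable Full_Name.
SELECT CONCAT_WS(' ', FirstName, LastName) AS Full_Name
FROM Students
WHERE ID = 7;
-- Returns 'Meera' instead of NULL
```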
CONCAT() can be very useful for presenting your output in a more readable and understandable form. String functions like
CONCAT() are very important to understand, so I hope you’ll take a look at the link in the references below.
- MySQL Official Documentation on CONCAT()
|
OPCFW_CODE
|
Can you set Details for Videos with Powershell?
I have a NAS to which I rip most of my DVDs. The problem comes with series. When I rip a season, the Details (Title, Comments, etc.) must be manually entered.
To combat this, I wrote the following script:
$array = @()
(Get-ChildItem -Path 'c:\Videos\Dead Like Me\*.mpg' ).FullName |
foreach{
$array += $_
}
$i = 0
Do {
$Episode = $i + 1
$NewName = "Dead Like Me S1E$Episode.mpg"
Set-ItemProperty -Path $array[$i] -Name "Title" -Value $NewName
Set-ItemProperty -Path $array[$i] -Name "Comments" -Value $NewName
Rename-Item -Path $array[$i] -NewName $NewName
$i += 1
} While ($i -lt $array.length)
It seems that Set-ItemProperty does not recognize Title or Comments, nor the other properties from the file's "Details" tab.
I've also tried
Get-ChildItem $array[$i] | Set-ItemProperty -Name "Title" -Value $NewName
Either way, I get an error similar to the following:
Set-ItemProperty : The property string Title=Dead Like Me S1 D1 E3.mpg
does not exist or was not found.
At c:\Videos\Dead Like Me\tmp.ps1:20 char:30
+ ... ChildItem $array[$i] | Set-ItemProperty -Name "Title" -Value $NewName
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ReadError: (string Title=Dead Like Me S1 D1 E3.mpg:PSNoteProperty) [Set-ItemProperty], IOException
+ FullyQualifiedErrorId : SetPropertyError,Microsoft.PowerShell.Commands.SetItemPropertyCommand
Shouldn't Set-ItemProperty be able to address those properties?
@trebleCode Update
I ran the following:
get-itemproperty "C:\Videos\Dead Like Me\video.mpg" | Format-List -Property * -Force
It returns:
PSPath : Microsoft.PowerShell.Core\FileSystem::C:\Video\Dead Like Me\video.mpg
PSParentPath : Microsoft.PowerShell.Core\FileSystem::C:\Video\Dead Like Me Renamer
PSChildName : video.mpg
PSDrive : C
PSProvider : Microsoft.PowerShell.Core\FileSystem
Mode : -a----
VersionInfo :
File: C:\Video\Dead Like Me\video.mpg
InternalName:
OriginalFilename:
FileVersion:
FileDescription:
Product:
ProductVersion:
Debug: False
Patched: False
PreRelease: False
PrivateBuild: False
SpecialBuild: False
Language:
BaseName : video
Target : {}
LinkType :
Name : video.mpg
Length : 321536
DirectoryName : C:\Video\Dead Like Me
Directory : C:\Video\Dead Like Me
IsReadOnly : False
Exists : True
FullName : C:\Video\Dead Like Me\video.mpg
Extension : .mpg
CreationTime : 2019-02-04 10:15:51
CreationTimeUtc : 2019-02-04 16:15:51
LastAccessTime : 2019-02-04 13:03:31
LastAccessTimeUtc : 2019-02-04 19:03:31
LastWriteTime : 2018-07-09 15:00:47
LastWriteTimeUtc : 2018-07-09 20:00:47
Attributes : Archive
If you get the item and pipe the output to Format-List or Format-Table, what properties does it expose?
This is about metadata, not item properties. Metadata is accessed differently.
There's an open source library called TagLib-Sharp that supports setting metadata on audio and video files. It's pretty easy to use - there's some sample code at this blog - the gist of it is:
Import-Module "D:\powershell\modules\MPTag\taglib-sharp.dll"
$BOXTYPE_TVSH = "tvsh" # Apple-style box type for "TV Show or series"
# $file and $showName come from the surrounding loop in the linked blog sample
$mediaFile = [TagLib.File]::Create($file.FullName)
[TagLib.Mpeg4.AppleTag]$customTag = $mediaFile.GetTag([TagLib.TagTypes]::Apple, 1)
$customTag.SetText($BOXTYPE_TVSH, $showName)
$mediaFile.Save()
|
STACK_EXCHANGE
|
One of those who comment regularly on my blog brought a news item to my attention. The OPNFV project has a new activity, introduced by AT&T, called “Event Streams” and defined HERE. The purpose of the project is to create a standard format for sending event data from the Service Assurance component of NFV to the management process for lifecycle management. I’ve been very critical of NFV management, so the question now is whether Event Streams will address my concerns. The short answer is “possibly, partly.”
The notion of events and event processing goes way back. All protocol handlers treat messages as events, for example, and you can argue that even transaction processing is about “events” that represent things like bank deposits or inventory changes. At the software level, the notion of an “event” is the basis for one form of exchanging information between processes, something sometimes called a “trigger” process. The other popular form is called a “polled” process because in that form a software element isn’t signaled something is happening, it checks to see if it is.
Many of the traditional management and operations activities of networks have been more polled than triggered because provisioning was considered to be a linear process. As networks got more complicated, more and more experts started talking about “event-driven” operations, meaning something that was triggered by conditions rather than written as a flow that checked on stuff. So Event Streams could be a step in that direction.
A step far enough? There are actually three things you need to make event-driven management work. One, obviously, is the events. The second is the concept of state and the third is a way to address the natural hierarchy of the service itself. If we can find all those things in NFV, we can be event-driven. Events we now have, but what about the rest?
Let’s start with “state”. State is an indication of context. Suppose you and I are conversing, and I’m asking you questions that you answer. If there’s a delay or if you don’t hear me, you might miss a question and I might ask the next one. Your answer, correct in the context you had, is now incorrect. But if you and I each have a recognized “state” like “Asking”, “ConfirmHearing”, and “Answering” then we can synchronize through difficulties.
In network operations and management, state defines where we are in a lifecycle. We might be “Ordered”, or “Activating” or “Operating”, and events mean different things in each state. If I get an “Activate” in the “Ordered” state, it’s the trigger for the normal next step of deployment. If I get one in the “Operating” state, it’s an indication of a lack of synchronicity between the OSS/BSS and the NFV processes. It is, that is, if I have a state defined.
Let’s look now at a simple “service” consisting of a “VPN” component and a series of “Access” components. The service won’t work if all the components aren’t working, so we could say that the service is in the “Operating” state when all the components are. Logically, what should happen then is that when all the components are in the “Ordered” state, we’d send an “Activate” to the top-level “Service object”, and it would in turn generate an event to the subordinates to “Activate”. When each had reported it was “Operating”, the service would enter the “Operating” state and generate an event to the OSS/BSS.
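As a rough sketch, the state/event behavior described here boils down to a table keyed by (state, event) pairs. The states and events below are taken from this article, but the code itself is only an illustration, not anything defined by the OPNFV project:

```python
# Illustrative finite-state machine for a service element's lifecycle.
# The transition table is hypothetical; a real model would be richer.
TRANSITIONS = {
    ("Ordered", "Activate"): "Activating",          # normal next deployment step
    ("Activating", "AllChildrenOperating"): "Operating",
    ("Operating", "ChildFault"): "Recovering",
    ("Recovering", "Recovered"): "Operating",
}

class ServiceElement:
    def __init__(self, name, state="Ordered"):
        self.name = name
        self.state = state

    def handle(self, event):
        """Apply an event; unexpected events leave the state unchanged.
        (A real system would log the loss of synchronization instead.)"""
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

svc = ServiceElement("VPN")
print(svc.handle("Activate"))              # Activating
print(svc.handle("AllChildrenOperating"))  # Operating
print(svc.handle("Activate"))              # still Operating: out-of-sync event
```

The point of putting the table in data rather than code is exactly the article's argument: the model, not the software, should define how each element interprets events.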
So what we have here is a whole series of event-driven elements, contextualized (state and relationship) by some sort of object model that defines how stuff is related. It’s not just one state/event process (what software nerds call “finite-state machines”) but a whole collection of such processes, event-coupled so that the behaviors are synchronized.
This concept is incredibly important, but it’s not always obvious that’s the case. But here’s an example. Suppose that a single VNF inside an Access element fails and is going to re-deploy. That access element would have to enter a new state, let’s call it “Recovering” and so the VNF that failed would have to signal with an event. Does that access element go non-operational immediately or does it give the VNF some time? Does it report even the recovery attempt to the service level via an event, or does it wait till it determines that the failure can’t be remedied? All of this stuff would normally be defined in state/event tables for each service element. In the real world of SDN and NFV, every VNF deployed and every set of connections could be an element, so the model we’re talking about could be multiple layers deep.
This has implications for building services. If you have a three- or four-layer service model you’re building, every element in the model has to be able to communicate with the stuff above and below it through events, which means that they have to understand the same events and have to be able to respond as expected. So what we really have to know about service elements in SDN or NFV is how their state/event processing works.
Obviously we don’t know that today, because we didn’t have even a consistent model of event exchange, which Event Streams would define. But the project doesn’t define states, nor does it define state/event tables or standardized responses. Without those definitions an architect couldn’t assemble a service from pieces because they couldn’t be sure that all the pieces would talk the same event language or interpret the context of lifecycles the same way.
The net of this is that Event Streams are enormously important to NFV, but they’re a necessary condition and not a sufficient condition. We still don’t have the right framework for service modeling, a framework in which every functional component of a service is represented by a model “object” that stores its state and the table that relates to event-handling in every possible state.
The question is whether we need that, or whether we could make VNF Managers perform the function. Could we send them events? There’s no current mandate that a VNFM process events at all, much less process some standard set of events. If a VNFM contains state/event knowledge, then the “place” of the associated VNF in a service would have to be consistent or the state/event interpretation wouldn’t be right. That means that our VNF inside an access element might not be portable to another access element because that element wanted to report “Recovering” or “Faulting” under different conditions. IMHO, this stuff has to be in the model, not in the software, or the software won’t be truly composable.
I’m not trying to minimize the value of Event Streams here. It’s very important, providing that it provokes a complete discussion of state/event handling in network operations. If it doesn’t, then it’s going to lead to a dead end.
|
OPCFW_CODE
|
#include "SiconosKernel.hpp"
#include "FirstOrderNonLinearDS.hpp"
#include "SolverOptions.h"
#include "NumericsVerbose.h"
#include <math.h>
static double t0 = 0.0;
static double T = 10.0; // end time
double R = 1.0;
double C = 0.1;
double L = 4.7e-4;
SimpleMatrix* compute_reference(double h)
{
try
{
double lambda10 = 0.0;
double x10 = 0.01;
size_t k = 0;
size_t N = int((T - t0) / h) + 2;
size_t outputSize = 1 + 4;
SimpleMatrix* dataPlot = new SimpleMatrix(N, outputSize);
double uck;
double u1k;
double tk = t0;
double ucdotk;
uck = -(1/R)*(sin(10*tk) - 1);
ucdotk = -(1/R)*(10*cos(10*tk));
(*dataPlot)(0, 0) = tk;
(*dataPlot)(0, 1) = x10; // x1
(*dataPlot)(0, 2) = - uck; // x2 = - B3*lambda - u2(tk)
(*dataPlot)(0, 3) = lambda10; // lambda1
(*dataPlot)(0, 4) = 0; // computed afterward as it depends on \dot{lambda}
k+=1; // (k in loop = k+1 in algo)
// ==== Simulation loop - Writing without explicit event handling =====
while (k<N)
{
tk = (*dataPlot)(k-1, 0) + h;
(*dataPlot)(k, 0) = tk;
uck = -(1/R)*(sin(10*(tk-h)) - 1.0); // explicit (at "k")
u1k = sin(tk-h); // explicit
ucdotk = -(1/R)*10*cos(10*(tk-h)); // explicit
(*dataPlot)(k, 1) = (*dataPlot)(k-1, 1) + h*((u1k/L) - uck); // x1
(*dataPlot)(k, 3) = 0.0; // lambda1 only solution to 0 <= 1-sin(t) _|_ lambda >= 0
(*dataPlot)(k-1, 4) = -ucdotk - uck; // z at K-1 = -B3Ldot - B2L -v3dot -v2
uck = -(1/R)*(sin(10*tk) - 1.0); // at "k+1"
(*dataPlot)(k, 2) = -uck; // x2
k++;
}
dataPlot->resize(k-1, outputSize);
std::cout<<"Reference: " << k << " steps !\n";
return dataPlot;
}
catch(Siconos::exception& e)
{
Siconos::exception::process(e);
return NULL;
}
catch (...)
{
Siconos::exception::process();
std::cerr << "Exception caught in Computeref()" << std::endl;
return NULL;
}
}
int main(int argc, char* argv[])
{
double href = 5e-6;
SimpleMatrix* dataSolref = compute_reference(href);
if (!dataSolref)
return 1; // compute_reference already reported the error
ioMatrix::write("result_example2_ref.dat", "ascii", *dataSolref, "noDim");
delete dataSolref;
std::cout<<"Done !\n";
return 0;
}
|
STACK_EDU
|
Why is the assert statement not functioning as expected?
I created some code that will count the number of occurrences of a character in a String. Here it is
void main() {
print("1100011110000011".count(SingleChar('1')));
print("1100011110000011".count(SingleChar('1135465321')));
print("1100011110000011".count(SingleChar('')));
}
class StringCountArg{}
class CodeUnit extends StringCountArg{
int codeUnit;
CodeUnit(this.codeUnit);
}
class SingleChar extends StringCountArg{
String string;
SingleChar(this.string): assert(string.length == 1);
}
extension CharCounter on String {
int count<T extends StringCountArg>(T r){
if(r is CodeUnit){
return codeUnits.where((cu) => cu == r.codeUnit).length;
}else if(r is SingleChar){
return codeUnits.where((cu) => cu == r.string.codeUnitAt(0)).length;
}
return 0;
}
}
I created a class called StringCountArg as the base type for the argument (because I want to be able to supply either a codeunit, OR a String with a single character already in it).
The first line in main executes fine. Returns 8 because there are 8 "1" characters.
The second line does not execute as expected. I would expect the assert statement to catch the long string, but it doesn't.
The third line fails because the assert statement again has not caught the improperly sized string and has tried to access the first codeunit of an empty string.
What's going on here that I'm not understanding?
I think you should take a look at the characters package in the pub (https://pub.dev/packages/characters) to understand the relationships between all the various "string" representations. It might clear up your misunderstanding.
@RandalSchwartz My question isn't regarding the strings. It's regarding the assert(string.length == 1) apparently always passing even when the string is long or empty?
How are you running your program? assert is not executed unless debugging or running with --enable-asserts: https://dart.dev/guides/language/language-tour#assert
This is the result of running the code in dartpad, which has assert statements disabled. That issue is being tracked here
If you are not running the code in DartPad and are experiencing this issue, try running dart with the --enable-asserts flag (note that it should be passed directly after dart and not after the file to run):
dart --enable-asserts lib\main.dart
Yeah, that's where I was going next. Put assert(false) somewhere. :)
|
STACK_EXCHANGE
|
import { XActorRef } from './ActorRef';
import { ActorSystem } from './ActorSystem';
export interface Subscription {
unsubscribe(): void;
}
// export interface Observer<T> {
// // Sends the next value in the sequence
// next?: (value: T) => void;
// // Sends the sequence error
// error?: (errorValue: any) => void;
// // Sends the completion notification
// complete: any; // TODO: what do you want, RxJS???
// }
/** OBSERVER INTERFACES - from RxJS */
export interface NextObserver<T> {
closed?: boolean;
next: (value: T) => void;
error?: (err: any) => void;
complete?: () => void;
}
export interface ErrorObserver<T> {
closed?: boolean;
next?: (value: T) => void;
error: (err: any) => void;
complete?: () => void;
}
export interface CompletionObserver<T> {
closed?: boolean;
next?: (value: T) => void;
error?: (err: any) => void;
complete: () => void;
}
export type Observer<T> =
| NextObserver<T>
| ErrorObserver<T>
| CompletionObserver<T>;
export interface Subscribable<T> {
subscribe(observer: Observer<T>): Subscription;
subscribe(
next: (value: T) => void,
error?: (error: any) => void,
complete?: () => void
): Subscription;
}
export interface SubscribableByObserver<T> {
subscribe(observer: Observer<T>): Subscription;
}
export type Logger = any;
export interface ActorContext<TEvent extends EventObject> {
self: XActorRef<TEvent>;
system: ActorSystem<any>;
log: Logger;
children: Set<XActorRef<any>>;
watch: (actorRef: XActorRef<any>) => void;
send: <U extends EventObject>(actorRef: XActorRef<U>, message: U) => void;
subscribeTo: (topic: 'watchers', subscriber: XActorRef<any>) => void;
// spawnAnonymous<U>(behavior: Behavior<U>): ActorRef<U>;
spawn<U extends EventObject>(
behavior: Behavior<U>,
name: string
): XActorRef<U>;
spawnFrom<U extends TEvent>(
getEntity: () => Promise<U> | Subscribable<U>,
name: string
): XActorRef<any, U | undefined>;
stop<U extends EventObject>(child: XActorRef<U>): void;
}
export enum ActorSignalType {
Start,
PostStop,
Watch,
Terminated,
Subscribe,
Emit,
}
export type ActorSignal =
| { type: ActorSignalType.Start }
| { type: ActorSignalType.PostStop }
| { type: ActorSignalType.Watch; ref: XActorRef<any> }
| { type: ActorSignalType.Terminated; ref: XActorRef<any> }
| { type: ActorSignalType.Subscribe; ref: XActorRef<any> }
| { type: ActorSignalType.Emit; value: any };
export enum BehaviorTag {
Setup,
Default,
Stopped,
}
export interface TaggedState<TState> {
state: TState;
$$tag: BehaviorTag;
effects: any[];
}
export type Behavior<TEvent extends EventObject, TState = any> = [
(
state: TaggedState<TState>,
message: TEvent | ActorSignal,
ctx: ActorContext<TEvent>
) => TaggedState<TState>,
TaggedState<TState>
];
export type BehaviorReducer<TState, TEvent extends EventObject> = (
state: TState,
event: TEvent | ActorSignal,
actorCtx: ActorContext<TEvent>
) => TState | TaggedState<TState>;
export interface EventObject {
type: string;
}
|
STACK_EDU
|
Windows does have a more advanced calculator built in, but it’s kinda cool having a calculator right there in Word at all times.
Where is this calculator?
You actually need to add it to the Quick Access toolbar:
1. Click the down arrow at the right end of the Quick Access Toolbar and choose More Commands.
2. Make sure For All Documents is selected in the Customize Quick Access Toolbar drop-down box.
3. In the Choose Commands From drop-down box, select Commands Not in The Ribbon.
4. Locate Calculate in the list and double-click it to add it to the list of Quick Access commands, then click OK.
What the calculator does
The calculator handles addition, subtraction, multiplication, division, percentages, exponentiation and roots.
- Addition: +
- Subtraction: – (or place the number to be subtracted in parentheses)
- Multiplication: *
- Division: /
- Percentages: %
- Exponentiation and roots: ^
If you were to type the numbers
2 + 2
Select the numbers and click the Calculator button. The result (4) is displayed in Word’s status bar.
The result is also stored on the clipboard, so you can paste it into your Word document or into another program.
You do not need to use an equal sign.
If you omit the operator, the calculator assumes you want to add the numbers.
If you were to type: 123 456 78.9 without an operator, the result (657.9) is displayed in Word’s status bar.
The calculator works anywhere. You can even use the calculator on the following sentence:
At the meeting there were 5 realtors, 2 health experts and 1 MS Word nerd.
Select the sentence and click the Calculator button. The total number at the meeting will be calculated. Something to keep in mind though – if your text includes any special characters, you might get a wrong calculation.
You can also use the Calculator in tables to total up numbers in columns, rows or the whole table. In a table, you still need to use parentheses around a number or a minus sign to denote a negative number.
Something to take note of is that although it is possible to select numbers in non-adjacent cells in a table by holding down the Ctrl key, the calculator will not give you a correct total. Your selection must contain contiguous cells, rows or columns.
Order of Operators
The calculator uses operator precedence and parentheses to determine the order of calculations in more complex expressions.
gives you the answer 18, while:
produces the result 66.
If you don’t include parentheses in an expression, Word performs operations in this order:
- power and root
- multiplication and division
- addition and subtraction.
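For instance, the two results quoted above (18 and 66) are produced by expressions like these. The expressions are our own illustrations, shown in Python, whose precedence for these operators matches Word's order:

```python
# Parentheses override precedence; otherwise power binds tighter than
# multiplication, which binds tighter than addition.
# (These expressions are illustrations, not the article's originals.)
print((2 + 4) * 3)   # parentheses first: 6 * 3 = 18
print(2 + 4 ** 3)    # power first: 4**3 = 64, then 2 + 64 = 66
```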
And there you have it, Word’s calculator.
Download the pdf here: MSWordCalculator
|
OPCFW_CODE
|
Ruby on Rails Developer
We are looking for an experienced and energetic Ruby on Rails Developer for one of our flagship products. The developer will be helping to add features into the platform such as SSO, Scrapers, integrating APIs, and taking the product to the next level.
Cyber Security Associate
We are looking for a Cyber Security Associate, Who can demonstrate some knowledge and/or a proven record of success in the consulting industry, especially relating to business processes that are enabled by identity and access management solutions, including: Requirements analysis, strategy, design, implementation, and migration for businesses. LDAP, Identity, and Access technology concepts and understanding of software development. Candidates with a minimum experience of 1 year can apply.
Block Chain Engineer
We are looking for a BlockChain Engineer who has general knowledge and understanding of basic concepts and techs used in blockchain, Expert knowledge of algorithms, data structures, math. And has the ability to drive technical excellence, pushing innovation and quality. Have demonstrated the ability to coach, mentor, and increase the skill level of teammates. Experience with REST web services, message brokers, network programming.
Backend Engineer (Node.Js)
We are looking for an experienced and energetic Backend Engineer to join our team of creators consistently raising the bar on user experiences. You should be comfortable working alongside a team as well as independently in the design and development of mission-critical websites, applications, and layers of the infrastructure.
Full Stack Developer
We’re seeking a Full-stack engineer who is ready to work with new technologies and architectures in a forward-thinking organization that’s always pushing boundaries. Here, you will take complete, end-to-end ownership of projects across the entire stack. Our ideal candidate has experience building products across the stack and a firm understanding of web frameworks, APIs, databases, and multiple Backend languages.
Software Engineers - Oracle Cloud Fusion
This role is expected to be an Oracle Cloud Fusion Software Engineer working in partnership with the Global Accounting teams, driving automation and innovation. You will own and lead the Oracle Fusion system implementation end to end, understanding Twilio’s processes and systems, and build scalable systems for acquisitions.
We are looking for a great Go developer who possesses a strong understanding of how best to leverage and exploit the language’s unique paradigms, idioms, and syntax. Your primary focus will be on developing Go packages and programs that are scalable and maintainable. You will ensure that these Go packages and programs are well documented and have reasonable test coverage.
We are looking for a .Net developer to build software using languages and technologies of the .NET framework. You will create applications from scratch, configure existing systems and provide user support. In this role, you should be able to write functional code with a sharp eye for spotting defects. You should be a team player and an excellent communicator. If you are also passionate about the .NET framework and software design/architecture, we’d like to meet you.
Microsoft Power BI
We are looking for a Business Intelligence (BI) Developer to create and manage BI and analytics solutions that turn data into knowledge. In this role, you should have a background in data and business analysis. You should be analytical and an excellent communicator. If you also have business acumen and problem-solving aptitude, we’d like to meet you. Ultimately, you will enhance our business intelligence system to help us make better decisions.
PHP / Laravel Developer
Are you a highly experienced, ambitious Fullstack developer looking for a challenging role where you can learn lots more? We are looking for a motivated PHP / Laravel developer to come join our agile team of professionals. If you are passionate about technology, constantly seeking to learn and improve your skillset, then you are the type of person we are looking for! We are offering superb career growth opportunities, great compensation, and benefits.
Clinical Support Specialist
A clinical support specialist assists medical professionals in a clinical setting. In this career, your job duties include setting patient appointments, collecting patient data and insurance information, and providing any additional support that your coworkers or doctors might need. You need strong analytical and customer service skills and experience working in a clinical setting.
How to make a fn only callable by an offchain worker?
I don't think this is possible but is it?
I have data that comes from an API that ultimately needs to be stored on-chain. I don't want to risk doing this through a Pays::No fn in case the call takes longer than the block.
This seems like one of the only solutions I can find: create logic so that only specified accounts stored on-chain can call a Pays::No fn, on a rate-limited, per-interval basis.
But this solution can possibly take longer than the block time since it uses an API call.
There are basically a few ways I think I could do this: have a user whom the blockchain knows make the HTTP call (possible using Pays::No), do it through on_initialize (same issue if the HTTP call takes too long), or have the offchain worker call it, but this requires public functions.
Is there any way to have a offchain worker store data without exposing the Call to the public?
Or, for example, storing the result of the offchain API call in local storage, then allowing anyone to submit that data using an unsigned tx after a certain amount of time? But how safe is storing the data in local storage?
How would you solve this?
Transactions from offchain workers are treated no differently from transactions originating elsewhere.
I suggest adjusting the assumptions you're making about your runtime architecture, so it's completely independent from offchain workers.
If you want to get data fetched by an offchain worker onto your chain, a couple of solutions could be:
a permissioned system where some offchain worker origin(s) are trusted and are the only ones allowed to set that storage, or
allowing anyone to 'stake' value when proposing new data, with a mechanism to slash dishonest actors.
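The first, permissioned option can be sketched in plain Rust outside of FRAME (all names here are hypothetical stand-ins; in a real pallet the allowlist would live in storage and the check would reject with a BadOrigin-style dispatch error):

```rust
use std::collections::HashSet;

// Hypothetical stand-in for a pallet's on-chain allowlist of trusted
// offchain-worker accounts. In FRAME this would be a StorageValue/StorageMap.
struct DataFeed {
    authorized: HashSet<String>,
    latest: Option<u64>,
}

impl DataFeed {
    fn new(authorized: impl IntoIterator<Item = String>) -> Self {
        DataFeed {
            authorized: authorized.into_iter().collect(),
            latest: None,
        }
    }

    // Only accounts on the allowlist may set the stored value; everyone
    // else gets an error, mirroring a BadOrigin dispatch error.
    fn submit(&mut self, who: &str, value: u64) -> Result<(), &'static str> {
        if !self.authorized.contains(who) {
            return Err("BadOrigin: caller not in allowlist");
        }
        self.latest = Some(value);
        Ok(())
    }
}
```

The staking/slashing option is the same shape, except any caller may submit and a bond is locked alongside the value until a dispute window passes.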
What about not using an offchain worker with an API call? Wouldn't this be a huge risk if the API call + computation doesn't complete by the end of the block?
I see, so basically you're saying instead of calling through HTTP, just require users to send the data in? Because it technically works this way. Instead of calling, we could allow a gasless tx with slashing if they're not honest.
Pallet method that accepts data, offchain worker makes api call and calls the pallet with the data it fetched. From the POV of the pallet it doesn't know it's an offchain worker that'll usually be calling it. The offchain worker just automates what would otherwise be a laborious manual task of periodically calling the pallet.
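That separation can be sketched as plain Rust (names like `fetch_price` and `submit_price` are illustrative, not real FRAME APIs): the pallet exposes an ordinary method that accepts data, and the offchain worker merely automates calling it.

```rust
// Hypothetical sketch of the pattern described above: from the pallet's
// point of view, it doesn't know an offchain worker is the usual caller.
struct Pallet {
    price: Option<u64>,
}

impl Pallet {
    // Ordinary dispatchable: accepts data, regardless of who calls it.
    fn submit_price(&mut self, price: u64) {
        self.price = Some(price);
    }
}

// Stand-in for the HTTP fetch the offchain worker would perform;
// a real offchain worker would use Substrate's http request API here.
fn fetch_price() -> u64 {
    42
}

// What the offchain_worker hook would do each block: fetch off-chain,
// then submit through the normal transaction path.
fn offchain_worker(pallet: &mut Pallet) {
    let price = fetch_price();
    pallet.submit_price(price);
}
```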
I ended up going with the route of not using an offchain worker and instead requiring multiple users to send the data in using Pays::No txs, so it's gasless, with rate limiters. This allows the blockchain to know who's calling it, who's allowed, and how often. Using offchain + unsigned leaves all the functions public.
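The rate-limiting part of that approach can be sketched in plain Rust (hypothetical names; in a pallet the last-call map would be a StorageMap keyed by AccountId and block number):

```rust
use std::collections::HashMap;

// Hypothetical sketch of the rate limiter: each account may submit at
// most once every `min_interval` blocks, so gasless (Pays::No) calls
// can't be spammed.
struct RateLimiter {
    min_interval: u64,
    last_call: HashMap<String, u64>,
}

impl RateLimiter {
    fn new(min_interval: u64) -> Self {
        RateLimiter {
            min_interval,
            last_call: HashMap::new(),
        }
    }

    // Returns true (and records the call) if `who` may submit at
    // `current_block`; false if they must wait longer.
    fn try_submit(&mut self, who: &str, current_block: u64) -> bool {
        match self.last_call.get(who) {
            Some(&last) if current_block < last + self.min_interval => false,
            _ => {
                self.last_call.insert(who.to_string(), current_block);
                true
            }
        }
    }
}
```

Combined with an allowlist check before this one, the chain knows who is calling, who is allowed, and how often.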