Compromise Budget Resolution Unlikely to Clarify Fate of Spending Caps

By Dan Cohen

An amendment to the Senate version of the fiscal 2016 budget resolution calling for relief from the statutory spending caps for defense and non-defense spending over the next two years is unlikely to be included in the compromise resolution the two chambers are expected to hash out over the coming weeks.

The amendment, introduced by Virginia Sen. Tim Kaine (D) and passed with the support of four Republicans, called for replacing $148 billion in spending cuts in fiscal 2016 and 2017 with alternative offsets, not yet specified, over the 10-year budget window, reported CQ.

As talks get under way over a compromise budget, conservative groups are pushing back against the provision because it would reverse the Budget Control Act spending caps. The fight over relaxing the caps is part of the larger budget battle between right-wing champions of the spending limits and defense hawks.

So far, no formal negotiations are expected on replacing the Budget Control Act spending cuts, but the issue likely will heat up as the new fiscal year approaches. Lawmakers would need to pass a bill altering the caps, as language in the budget resolution is not binding.

Despite the uphill battle, Kaine will attempt to include his provision in the budget conference report, as well as encourage bipartisan talks on a deal to provide a reprieve for discretionary spending, regardless of what happens in the budget conference, according to the story.

A Kaine spokesman said the amendment was “important to ensure everything is on the table to replace the sequester cuts” and make clear that there should be relief for both defense and non-defense programs.

“It’s his hope the amendment sets the stage for a multiyear deal on sequester similar to Murray-Ryan,” the spokesman said, in reference to the December 2013 deal struck by Rep. Paul Ryan (R-Wis.) and Sen. Patty Murray (D-Wash.) that offered the Pentagon $31 billion in spending above the FY 2014 and 2015 caps.

Elon Musk Is Sending 2 Wealthy Individuals to the Moon

February 28, 2017

It’s been 45 years since anyone has traveled to the moon. On Dec. 7, 1972, NASA launched its final moon mission, making Apollo 17 crew members Eugene A. Cernan, Harrison H. Schmitt and Ronald E. Evans the last three members of a very select club.

But if Elon Musk has his way, that will change soon.

The SpaceX CEO announced yesterday that the company had been tapped to send two private citizens on a trip around the moon in 2018, which would bring the grand total of lunar travelers to 26.

Related: Watch Elon Musk’s View of the SpaceX Falcon 9 Rocket Landing

Fly me to the moon … Ok https://t.co/6QT8m5SHwn
— Elon Musk (@elonmusk) February 27, 2017

There is no word yet on who the intrepid explorers will be, but we’re guessing that they have pretty deep pockets, as SpaceX noted that the mystery duo has “paid a significant deposit” to make the mission possible. More information will be revealed provided the individuals pass the rigorous fitness and health tests.

SpaceX isn’t doing this alone. The mission is part of its ongoing partnership with NASA.

Related: SpaceX Pushes Back Mars Mission Timeline

The company recently announced that it revised its Mars mission timeline, with the first robotic mission to the red planet on track to take place in 2020 rather than 2018. Earlier this month, SpaceX also conducted a successful launch and landing of its Falcon 9 rocket at Cape Canaveral and sent a supplies delivery to the International Space Station.

The lunar travelers will be sent into space on the company’s Falcon Heavy rocket, which will have its first test flight this summer. Later this year, SpaceX will also test the spacecraft that it hopes will carry astronauts to the ISS in 2018.

Kubeflow 0.3 released with simpler setup and improved machine learning development

Early this week, the Kubeflow project launched its latest version, Kubeflow 0.3, just 3 months after version 0.2 was out. This release comes with easier deployment and customization of components along with better multi-framework support.

Kubeflow is the machine learning toolkit for Kubernetes. It is an open source project dedicated to making deployments of machine learning (ML) workflows on Kubernetes simple, portable and scalable. Users are provided with an easy-to-use ML stack anywhere that Kubernetes is already running, and this stack can self-configure based on the cluster it deploys into.

Features of Kubeflow 0.3

1. Declarative and Extensible Deployment

Kubeflow 0.3 comes with a deployment command line script, kfctl.sh. This tool allows consistent configuration and deployment of Kubernetes resources and non-K8s resources (e.g., clusters, filesystems). Minikube deployment provides a single-command, shell-script-based deployment. Users can also use MicroK8s to easily run Kubeflow on their laptop.

2. Better Inference Capabilities

Version 0.3 makes it possible to do batch inference with GPUs (but non-distributed) for TensorFlow using Apache Beam. Batch and streaming data processing jobs that run on a variety of execution engines can be easily written with Apache Beam. Running TFServing in production is now easier because of the liveness probe that was added and the use of fluentd to log requests and responses to enable model retraining. Kubeflow 0.3 also takes advantage of the NVIDIA TensorRT Inference Server to offer more options for online prediction using both CPUs and GPUs. This server is a containerized, production-ready AI inference server which maximizes utilization of GPU servers by running multiple models concurrently on the GPU, and it supports all the top AI frameworks.
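As a rough sketch of the deployment flow described above: the script location and flags below are assumptions based on the v0.3 getting-started guide and may differ for your platform, so consult the Kubeflow docs before running anything.

```shell
# Hypothetical Kubeflow 0.3 deployment with kfctl.sh -- paths and flags
# are a sketch, not an authoritative reference.
export KUBEFLOW_SRC=~/kubeflow   # wherever the v0.3 release was unpacked

# Initialize a deployment directory targeting Minikube
${KUBEFLOW_SRC}/scripts/kfctl.sh init my-kubeflow --platform minikube
cd my-kubeflow

# Render the Kubernetes and platform configuration, then create the resources
${KUBEFLOW_SRC}/scripts/kfctl.sh generate all
${KUBEFLOW_SRC}/scripts/kfctl.sh apply all
```

The init/generate/apply split is what makes the deployment declarative: the generated configuration can be inspected and customized before anything is created in the cluster.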
3. Hyperparameter Tuning

Kubeflow 0.3 introduces a new K8s custom controller, StudyJob, which allows a hyperparameter search to be defined using YAML, making it easy to use hyperparameter tuning without writing any code.

4. Miscellaneous Updates

The upgrade includes a release of a K8s custom controller for Chainer (docs). Cisco has created a v1alpha2 API for PyTorch that brings parity and consistency with the TFJob operator, and new features added to both make it easier to handle production workloads for PyTorch and TFJob. There is also support for gang scheduling using Kube Arbitrator to avoid stranding resources and deadlocking in clusters under heavy load. The 0.3 Kubeflow Jupyter images ship with TF Data-Validation, a library used to explore and validate machine learning data.

You can check the examples added by the team to understand how to leverage Kubeflow:

The XGBoost example indicates how to use non-DL frameworks with Kubeflow.
The object detection example illustrates leveraging GPUs for online and batch inference.
The financial time series prediction example shows how to leverage Kubeflow for time series analysis.

The team has said that the next major release, 0.4, will be coming by the end of this year. It will focus on ease of use, so that common ML tasks can be performed without having to learn Kubernetes. The team also plans to make it easier to track models by providing a simple API and database for that purpose. Finally, they intend to upgrade the PyTorch and TFJob operators to beta.

For a complete list of updates, visit the 0.3 Change Log on GitHub.

Read Next

Platform9 announces a new release of Fission.io, the open source, Kubernetes-native Serverless framework
Introducing Alpha Support for Volume Snapshotting in Kubernetes 1.12
‘AWS Service Operator’ for Kubernetes now available allowing the creation of AWS resources using kubectl
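To illustrate the YAML-defined search mentioned in the hyperparameter-tuning section above, a StudyJob manifest might look roughly like the following. The apiVersion, field names, and values here are a sketch based on early Katib examples, not an authoritative schema; check the Kubeflow/Katib documentation for the exact v1alpha1 API.

```yaml
# Hypothetical StudyJob sketch -- field names may not match the exact schema.
apiVersion: kubeflow.org/v1alpha1
kind: StudyJob
metadata:
  name: random-search-example
  namespace: kubeflow
spec:
  studyName: random-search-example
  optimizationtype: maximize              # maximize the objective metric
  objectivevaluename: validation-accuracy # metric reported by the trial
  optimizationgoal: 0.99                  # stop once this value is reached
  suggestionSpec:
    suggestionAlgorithm: random           # randomly sample the search space
    requestNumber: 3                      # trials per suggestion request
  parameterconfigs:
    - name: --learning-rate
      parametertype: double
      feasible:
        min: "0.01"
        max: "0.05"
    - name: --batch-size
      parametertype: int
      feasible:
        min: "32"
        max: "128"
```

In this design, the controller launches a training job per sampled configuration, passing the parameters as command-line flags, and records the reported objective metric for each trial; no search code has to be written by the user.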