<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[ronlut's blog]]></title><description><![CDATA[Tech life presented as code blocks]]></description><link>https://blog.ronlut.com/</link><image><url>https://blog.ronlut.com/favicon.png</url><title>ronlut&apos;s blog</title><link>https://blog.ronlut.com/</link></image><generator>Ghost 5.26</generator><lastBuildDate>Wed, 22 Apr 2026 08:59:05 GMT</lastBuildDate><atom:link href="https://blog.ronlut.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Upgrading Ubuntu 20.04 to 20.10 ("Focal Fossa" to "Groovy Gorilla")]]></title><description><![CDATA[A short tutorial explaining how to upgrade Ubuntu 20.04 focal to Ubuntu 20.10 groovy.]]></description><link>https://blog.ronlut.com/upgrading-ubuntu-focal-to-groovy-20-04-20-10/</link><guid isPermaLink="false">63a7040177d1b60210b2bec7</guid><category><![CDATA[Linux]]></category><category><![CDATA[Technical]]></category><category><![CDATA[Ubuntu]]></category><dc:creator><![CDATA[Rony Lutsky]]></dc:creator><pubDate>Sun, 27 Sep 2020 10:00:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1517783999520-f068d7431a60?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1517783999520-f068d7431a60?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Upgrading Ubuntu 20.04 to 20.10 (&quot;Focal Fossa&quot; to &quot;Groovy Gorilla&quot;)"><p>I was getting pretty bad performance using Ubuntu 20.04 with a Dell XPS 9343. I upgraded to the yet-to-be-released Ubuntu 20.10 hoping it would help. 
<br>It did - performance is much better now.</p><p>This short post explains how to upgrade Ubuntu 20.04 <code>focal</code> to Ubuntu 20.10 <code>groovy</code>.</p><h2 id="steps">Steps</h2><!--kg-card-begin: markdown--><ol>
<li>Back up your important data to the cloud or an external drive. Upgrading is always a risky process; better safe than sorry.</li>
<li>Update package information<br>
<code>sudo apt update</code></li>
<li>Make sure all packages are up-to-date<br>
<code>sudo apt upgrade</code></li>
<li>Replace <code>focal</code> with <code>groovy</code> in the apt sources list<br>
<code>sudo sed -i &apos;s/focal/groovy/g&apos; /etc/apt/sources.list</code></li>
<li>Update package information from the new <code>groovy</code> repos<br>
<code>sudo apt update</code></li>
<li>Upgrade to Ubuntu 20.10<br>
<code>sudo apt dist-upgrade</code></li>
<li>Validate Ubuntu version</li>
</ol>
<pre><code># lsb_release -ca
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu Groovy Gorilla (development branch)
Release:	20.10
Codename:	groovy
</code></pre>
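If you want to see exactly what the <code>sed</code> in step 4 will do before touching the real <code>/etc/apt/sources.list</code>, you can rehearse it on a throwaway copy first (the sample repo line and the <code>/tmp</code> path below are illustrative):

```shell
# Rehearse the focal -> groovy substitution on a scratch file.
# The sample repo line and /tmp path are just for illustration.
printf 'deb http://archive.ubuntu.com/ubuntu focal main restricted\n' > /tmp/sources.list.demo
sed -i 's/focal/groovy/g' /tmp/sources.list.demo
cat /tmp/sources.list.demo
# deb http://archive.ubuntu.com/ubuntu groovy main restricted
```

Once the output looks right, run the same substitution against the real file as shown in step 4.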
<!--kg-card-end: markdown--><p>I&apos;ve been running Ubuntu 20.10 for over a week and so far so good.<br>Let me know how it works out for you!</p><h2 id="troubleshooting">Troubleshooting</h2><!--kg-card-begin: markdown--><ul>
<li>Broadcom Wi-Fi driver doesn&apos;t work<br>
I had a minor problem with the Broadcom Wi-Fi driver crashing and not working after the upgrade.<br>
To solve that:</li>
</ul>
<pre><code>sudo apt-get remove --purge bcmwl-kernel-source
sudo apt-get install broadcom-sta-source
sudo apt-get install broadcom-sta-dkms
sudo apt-get install broadcom-sta-common
# reboot
</code></pre>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Sharing API Gateway with Multiple Services in Serverless Framework]]></title><description><![CDATA[A short how-to on sharing one API resource with multiple services in serverless framework.]]></description><link>https://blog.ronlut.com/sharing-api-gateway-with-multiple-services-in-serverless-framework/</link><guid isPermaLink="false">63a7040177d1b60210b2bec5</guid><category><![CDATA[Cloud]]></category><category><![CDATA[Python]]></category><category><![CDATA[Technical]]></category><category><![CDATA[Serverless]]></category><dc:creator><![CDATA[Rony Lutsky]]></dc:creator><pubDate>Thu, 03 Sep 2020 13:30:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1461354464878-ad92f492a5a0?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1461354464878-ad92f492a5a0?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Sharing API Gateway with Multiple Services in Serverless Framework"><p>While working on my little side project <em>JSONPerf</em> (check it out: <a href="https://jsonperf.com">jsonperf.com</a>), I stumbled upon an interesting problem I thought was worth sharing with you.</p><p>A quick summary of what JSONPerf is will help us understand the need before discussing the solution.<br>The project&apos;s goal is to benchmark JSON libraries&apos; performance in different programming languages. 
One of the features it provides is uploading a JSON file and benchmarking the different libraries on the user&apos;s specific file.</p><p>To implement this feature, I decided to build the per-language benchmark endpoints on AWS Lambda, as it gives me the flexibility to use a different runtime for each function.</p><p>The problem I stumbled upon is that in my <em>serverless framework</em> definitions file, I need to have multiple endpoints (<code>/python3</code>, <code>/python2</code>, <code>/java</code> etc.) under the same API, each using a different language (runtime).</p><p>Unfortunately, this quickly turned out to be impossible because the Python requirements plugin currently doesn&apos;t support my setup.<br>It allows you to specify a different <em>requirements</em> file per function, but only if the file is called <em>requirements.txt </em>and the code is in different folders (<a href="https://github.com/UnitedIncome/serverless-python-requirements/issues/491">GitHub issue</a>).<br>In my case, both the Python 2 and Python 3 functions <strong>share the same code</strong> but have different requirements (different libraries to compare).</p><p>To solve it, I created a main serverless.yml that declares an API Gateway, which is then shared between the other, per-function serverless.yml files.</p><h3 id="main-api-serverless-yml-file">Main (API) <code>serverless.yml</code> file</h3><!--kg-card-begin: markdown--><pre><code>service: api 

provider:
  name: aws
  stage: dev
  region: us-east-1

resources:
  Resources:
    ApiGw:
      Type: AWS::ApiGateway::RestApi
      Properties:
        Name: ApiGw

  Outputs:
    apiGatewayRestApiId:
      Value:
        Ref: ApiGw
      Export:
        Name: ApiGw-restApiId

    apiGatewayRestApiRootResourceId:
      Value:
        Fn::GetAtt:
          - ApiGw
          - RootResourceId
      Export:
        Name: ApiGw-rootResourceId

</code></pre>
<!--kg-card-end: markdown--><h3 id="services-files-python-2-example-python-serverless_py2-yml">Services files, Python 2 example <code>python/serverless_py2.yml</code></h3><!--kg-card-begin: markdown--><pre><code>service: python2

plugins:
  - serverless-python-requirements
  - serverless-wsgi

package:
  exclude:
    - &apos;venv*/**&apos;

custom:
  wsgi:
    app: app.app
    packRequirements: false
  pythonRequirements:
    fileName: requirements_py2.txt
    dockerizePip: non-linux
    slim: true

provider:
  name: aws
  stage: dev
  region: us-east-1
  runtime: python2.7
  apiGateway:
    restApiId:
      &apos;Fn::ImportValue&apos;: ApiGw-restApiId
    restApiRootResourceId:
      &apos;Fn::ImportValue&apos;: ApiGw-rootResourceId

functions:
  python2:
    handler: wsgi_handler.handler
    events:
      - http:
          method: post
          path: /python2
          cors: false

</code></pre>
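Since each service file imports values by name, a deploy fails with an &quot;export not found&quot; error if an <code>Fn::ImportValue</code> has no matching <code>Export</code> in the main stack. Here is a quick local sanity check using stand-in files (no AWS calls; the <code>/tmp</code> paths and file contents are trimmed illustrations of the export/import glue in the two YAMLs above):

```shell
# Stand-in snippets of the two files above, trimmed to the export/import glue.
cat > /tmp/main-exports.txt <<'EOF'
Name: ApiGw-restApiId
Name: ApiGw-rootResourceId
EOF
cat > /tmp/service-imports.txt <<'EOF'
'Fn::ImportValue': ApiGw-restApiId
'Fn::ImportValue': ApiGw-rootResourceId
EOF
# Every imported name must appear among the main stack's exports.
while read -r _ name; do
  if grep -q "Name: $name" /tmp/main-exports.txt; then
    echo "OK: $name"
  else
    echo "MISSING EXPORT: $name"
  fi
done < /tmp/service-imports.txt
```

Also remember to deploy the main (API) stack before any of the services, so the exports exist when the imports are resolved.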
<!--kg-card-end: markdown--><p>This is of course just an example and you can tweak it as much as you want with the flexibility you gain.</p><p>Hope this will help someone in finding the <a href="https://www.serverless.com/framework/docs/providers/aws/events/apigateway#easiest-and-cicd-friendly-example-of-using-shared-api-gateway-and-api-resources">little hidden documentation</a> to accomplish that using <em>serverless framework</em>.</p>]]></content:encoded></item><item><title><![CDATA[Flink Job Cluster on Kubernetes - File Based High Availability]]></title><description><![CDATA[How to achieve high availability on Kubernetes without using ZooKeeper by utilizing a custom, file-based high availability implementation]]></description><link>https://blog.ronlut.com/flink-job-cluster-on-kubernetes-file-based-high-availability/</link><guid isPermaLink="false">63a7040177d1b60210b2bec1</guid><category><![CDATA[Flink]]></category><category><![CDATA[k8s]]></category><category><![CDATA[Technical]]></category><dc:creator><![CDATA[Rony Lutsky]]></dc:creator><pubDate>Wed, 06 May 2020 17:54:57 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1461360228754-6e81c478b882?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1461360228754-6e81c478b882?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Flink Job Cluster on Kubernetes - File Based High Availability"><p>In my <a href="https://blog.ronlut.com/flink-job-cluster-on-kubernetes/">previous post</a>, I explained a bit about Flink and the difference between a job and session clusters. 
In addition, I showed how to deploy a job cluster in a way that works best in my opinion.</p><p>In this blog post, I will talk about how to achieve high availability on Kubernetes without using ZooKeeper by utilizing a custom, file-based high availability implementation. You can find the implementation <a href="https://github.com/ronlut/flink-k8s/tree/master/src/main/java/com/ronlut/flinkjobcluster/filesystemha">here</a>.</p><h2 id="why-would-one-want-to-run-flink-without-zookeeper">Why Would One Want to Run Flink Without ZooKeeper?</h2><p>When running Flink on Kubernetes, I think we should strive to use the powers Kubernetes gives us. One of them is <code>ReplicaSet</code>, which gives us the ability to deploy a pod with a specified number of replicas and keep that number of pods up, even if a node fails.</p><p>Flink uses ZooKeeper to support job manager high availability: in case a job manager fails, a new one can be started and become the leader.<br>With Kubernetes pod scheduling, we don&apos;t need ZooKeeper to manage job manager high availability.</p><p>By using a <code>StatefulSet</code> for the job manager and a <code>Deployment</code> for the task managers, we make use of <code>ReplicaSet</code> behind the scenes, hence making sure our managers will stay up even without ZooKeeper.</p><h2 id="how-to-implement-a-custom-high-availability-service">How to Implement a Custom High Availability Service?</h2><p>To implement our custom HA service, we need to implement a few things:</p><h3 id="leader-retrievers-and-election-services">Leader Retrievers and Election Services</h3><p>This tells Flink how to elect a job manager as the leader and where to retrieve the leader from.</p><p>In our case, we have a single job manager and it is always the leader, so we just tell Flink to always choose the same leader, without an election. This is why I used <code>StandaloneLeaderElectionService</code> for all the election services. 
<br>From the <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.9/api/java/org/apache/flink/runtime/leaderelection/StandaloneLeaderElectionService.html">documentation</a>: <em>&quot;The standalone implementation assumes that there is only a single LeaderContender and thus directly grants him the leadership upon start up&quot;</em>.</p><p>For leader retrievers, I used <code>StandaloneLeaderRetrievalService</code> with the relevant constant address, which we can rely on thanks to the Kubernetes services we deploy. <br>From the <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.9/api/java/org/apache/flink/runtime/leaderretrieval/StandaloneLeaderRetrievalService.html">documentation</a>: <em>&quot;This implementation assumes that there is only a single contender for leadership (e.g., a single JobManager or ResourceManager process) and that this process is reachable under a constant address.&quot;</em></p><h3 id="checkpoint-recovery-factory">Checkpoint Recovery Factory</h3><p>This is a factory that provides two classes that allow Flink to save checkpoints, retrieve checkpoints and keep a count of checkpoints.</p><p>In our case, both the checkpoints and the counter are saved on disk. You can find the implementations in <a href="https://github.com/ronlut/flink-k8s/blob/master/src/main/java/com/ronlut/flinkjobcluster/filesystemha/FsCheckpointRecoveryFactory.java">FsCheckpointRecoveryFactory.java</a> and <a href="https://github.com/ronlut/flink-k8s/blob/master/src/main/java/com/ronlut/flinkjobcluster/filesystemha/FsCheckpointIDCounter.java">FsCheckpointIDCounter.java</a>.</p><h3 id="submitted-job-graph-store">Submitted Job Graph Store</h3><p>This provides a way to save the job graphs and retrieve them.<br>As we are running a job cluster, we have only one job. This means we can always provide the same job graph.</p><p>Flink has a class that helps us with that, <strong><code>SingleJobSubmittedJobGraphStore</code></strong>. 
You can find the implementation <a href="https://github.com/apache/flink/blob/release-1.9.2/flink-runtime/src/main/java/org/apache/flink/runtime/dispatcher/SingleJobSubmittedJobGraphStore.java">here</a>.</p><h3 id="running-jobs-registry">Running Jobs Registry</h3><p>The documentation explains the need for this component quite well: <em>&quot;This registry is used in highly-available setups with multiple master nodes, to determine whether a new leader should attempt to recover a certain job (because the job is still running), or whether the job has already finished successfully (in case of a finite job) and the leader has only been granted leadership because the previous leader quit cleanly after the job was finished.&quot;</em></p><p>I used the provided <code>FsNegativeRunningJobsRegistry</code> class. <br>Its documentation can be found <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.9/api/java/org/apache/flink/runtime/highavailability/FsNegativeRunningJobsRegistry.html">here</a> and implementation <a href="https://github.com/apache/flink/blob/release-1.9.2/flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/FsNegativeRunningJobsRegistry.java">here</a>.</p><h2 id="kubernetes-setup">Kubernetes Setup</h2><p>To make all of this work, we need to make a few adjustments in our k8s YAMLs.</p><ul><li>Change our <code>Deployment</code> to a <code>StatefulSet</code>. This tells Kubernetes that the pods are stateful and that we have a persistent volume attached to each one of them (currently one).</li><li>We want to make sure our shared volume is accessible by the user Flink runs with. 
To achieve that, I added an <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/">init container</a> that changes the ownership of that directory.</li><li>We <a href="https://github.com/ronlut/flink-k8s/blob/master/k8s/storage-aws.yaml">add</a> a <code>StorageClass</code> and a <code>PersistentVolume</code> to be able to mount a volume to our pods. We mount the shared volume at <code>/flink-shared</code>.</li></ul><h2 id="flink-setup">Flink Setup</h2><p>A few things we need to change in our Flink configuration to utilize our brand new HA service:</p><ul><li>Point high-availability at our factory class <br><code>high-availability: com.ronlut.flinkjobcluster.filesystemha.SingleFsHaServicesFactory</code> </li><li>Set the HA storage dir to our mounted persistent volume <br><code>high-availability.storageDir: file:///flink-shared/ha</code></li><li>Set the state backend to filesystem <br><code>state.backend: filesystem</code></li><li>Set the checkpoints and savepoints dirs to a shared directory that is mounted to the job and task managers <br><code>state.checkpoints.dir: file:///flink-shared/checkpoints</code> and <code>state.savepoints.dir: file:///flink-shared/savepoints</code></li></ul><p>The complete config file can be found <a href="https://github.com/ronlut/flink-k8s/blob/master/k8s/flink-configuration-fsha.yaml">here</a>.</p><hr><h2 id="final-notes">Final Notes</h2><ul><li>In this example I used <code>aws-ebs</code> in the <code>StorageClass</code> to show how this will work in a cloud environment. Change it to the equivalent provisioner for your cloud provider.</li><li>If you are using EBS, the job manager and task managers must be on the same node, as EBS can&apos;t be mounted to more than one node. 
<br>This is achieved by <a href="https://github.com/ronlut/flink-k8s/blob/master/k8s/taskmanager-fsha.yaml#L22">setting affinity</a> on the task manager deployment pods.<br>This might not be needed in other cloud providers, depending on your storage.</li></ul><p>Full working example can be found <a href="https://github.com/ronlut/flink-k8s">here</a>.</p><p>Let me know if you have any questions or something is missing.</p>]]></content:encoded></item><item><title><![CDATA[How to Correctly Deploy an Apache Flink Job Cluster on Kubernetes]]></title><description><![CDATA[I didn't think I would struggle with doing something pretty straightforward like deploying a job cluster on k8s.]]></description><link>https://blog.ronlut.com/flink-job-cluster-on-kubernetes/</link><guid isPermaLink="false">63a7040177d1b60210b2bec0</guid><category><![CDATA[Flink]]></category><category><![CDATA[k8s]]></category><dc:creator><![CDATA[Rony Lutsky]]></dc:creator><pubDate>Fri, 24 Apr 2020 11:45:28 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1507666405895-422eee7d517f?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1507666405895-422eee7d517f?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="How to Correctly Deploy an Apache Flink Job Cluster on Kubernetes"><p>I love Flink. I think it&apos;s an amazing product, with great documentation and community.</p><p>For readers who aren&apos;t familiar with Flink, it is a framework for computations over unbounded and bounded data streams. 
It runs in a distributed manner and is designed to perform exceptionally at scale.<br>You can read more about Flink <a href="https://flink.apache.org/flink-architecture.html">here</a>.</p><p>I didn&apos;t think I would struggle with doing something pretty straightforward like deploying a job cluster on k8s, not to mention deploying it on k8s with file-based high availability configured, which will be covered in the next post.</p><p>TL;DR: <a href="https://github.com/ronlut/flink-k8s">GitHub repo</a></p><p>Just to be on the same page, let&apos;s explain what a job cluster is and how it differs from a session cluster.</p><h2 id="job-vs-session-cluster">Job VS Session Cluster</h2><p>A session cluster is a long-running Flink cluster, executing the jobs submitted to it.<br>A job cluster, on the other hand, is a Flink cluster that is dedicated to running a single predefined job, without job submission.</p><h3 id="why-would-you-choose-one-over-the-other">Why would you choose one over the other?</h3><p>In my opinion, a session cluster is more suitable for a situation where you submit multiple short-running jobs, dealing with <strong>bounded data</strong>. The cluster&apos;s resources are shared by all the jobs running on it.<br>If you want to run a job that deals with unbounded data, that job is not intended to end, ever. You want to be able to upgrade the job and redeploy the cluster with the new job, instead of dealing with resubmitting jobs, hence a job cluster feels more appropriate.</p><p>Now, let&apos;s continue with our adventure (<em>using Flink 1.9.2).</em></p><h2 id="kubernetes-job-or-deployment">Kubernetes: Job or Deployment?</h2><p>Flink, in their <a href="https://github.com/apache/flink/tree/release-1.9/flink-container/kubernetes">official example</a>, advises using a Kubernetes <code>job</code> for the job manager. 
This makes no sense IMHO, as you want your job manager to be a long-running application that automatically restarts and continues from where it stopped if the pod gets deleted. </p><p>This is why I decided to change the <code>job</code> to a <code>deployment</code>.</p><h2 id="probes">Probes</h2><p><a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes">Probes</a> are a useful feature in Kubernetes that helps us make sure the application is running.</p><p>With Flink it&apos;s pretty easy to configure a liveness probe by accessing the Flink dashboard UI.</p><p>You can find that in the <a href="https://github.com/ronlut/flink-k8s/blob/master/k8s/jobmanager-ha.yaml">jobmanager-ha.yaml</a> yaml.</p><h2 id="flink-configuration">Flink Configuration</h2><p>Another thing I didn&apos;t like was the fact that configuration is passed to Flink via the CLI in the k8s <a href="https://github.com/apache/flink/blob/release-1.9/flink-container/kubernetes/job-cluster-job.yaml.template#L35">container arguments</a>. 
<br>This is why I created a <code>configmap</code> and use it to set Flink&apos;s configuration, both for the job and task managers.<br>You can find the definition in the <a href="https://github.com/ronlut/flink-k8s/blob/master/k8s/flink-configuration-ha.yaml">flink-configuration-ha.yaml</a> file.</p><h2 id="web-ui">Web UI</h2><p>I added a rest service to be able to access Flink&apos;s web ui.<br>You can find the definition in the <a href="https://github.com/ronlut/flink-k8s/blob/master/k8s/jobmanager-rest-service.yaml">jobmanager-rest-service.yaml</a> file.</p><hr><p>You can find my fully working example <a href="https://github.com/ronlut/flink-k8s">here</a>.</p><p><strong>Don&apos;t forget to remove the <code>imagePullPolicy: Never</code> and set a real image name in the job manager and task manager yamls to run it in a non-minikube environment.</strong></p><p>In the <a href="https://blog.ronlut.com/flink-job-cluster-on-kubernetes-file-based-high-availability/">next blog post</a> I cover the details of deploying a highly available Flink job cluster on k8s without ZooKeeper, using a file-based high availability implementation.</p><p>Enjoy your Flink adventures!</p>]]></content:encoded></item><item><title><![CDATA[Accessing a Private API Gateway (AWS)]]></title><description><![CDATA[Accessing a Private API Gateway should be easy, right?]]></description><link>https://blog.ronlut.com/accessing-a-private-api-gateway-aws/</link><guid isPermaLink="false">63a7040177d1b60210b2bebe</guid><category><![CDATA[Cloud]]></category><dc:creator><![CDATA[Rony Lutsky]]></dc:creator><pubDate>Fri, 08 Feb 2019 18:31:15 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1580847097346-72d80f164702?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img 
src="https://images.unsplash.com/photo-1580847097346-72d80f164702?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Accessing a Private API Gateway (AWS)"><p>Understanding how to access an API you just created in AWS should be easy, right?</p><p>I spent a significant amount of time figuring out how to deploy a private API Gateway that would be accessible from outside of the VPC in which it resides.</p><p>Let&apos;s start with some background:<br>You can deploy an API Gateway using one of three modes: Regional, Edge and Private. If you want your API to be open to the internet, you&apos;ll probably choose Regional or Edge. If it&apos;s an API accessed by internal clients from within your VPC, you will probably choose Private.</p><p>All clients inside the gateway&apos;s VPC should be able to access it without a problem using the invoke URL (you can find the invoke URL at the top of the stage page).</p><p>It gets complicated when you want clients sitting in another VPC, or your colleagues from the office network, to be able to access your private API Gateway.<br>In this case, they won&apos;t be able to access it using the invoke URL, as that DNS name resolves only inside the VPC.</p><p>What should you do to solve it?<br>You need to create a VPC Endpoint for the VPC your API Gateway resides in, by going to <em>Endpoints</em> in the VPC management page.<br>When creating the endpoint, choose the <em>com.amazonaws.us-east-1.execute-api </em>service and your VPC. Tick the <em>Enable Private DNS Name </em>checkbox and create (or use an existing) security group to restrict access to the VPC endpoint to your internal IPs only (your office internal subnet, for example).</p><p>Then, to access the API Gateway, you must send a special header with every request, stating the API you want to access. 
The reason is that you have one VPC Endpoint but potentially more than one API Gateway inside this VPC.<br>The header name is <em>x-apigw-api-id</em> and the value should be the unique ID of your API Gateway. </p><p>An example request should look like this:</p><!--kg-card-begin: markdown--><p><code>curl -X GET https://{VPC_ENDPOINT_DNS_NAME}/{STAGE_NAME}/{ENDPOINT_NAME} -H &apos;x-apigw-api-id: {API_GW_ID}&apos;</code></p>
<!--kg-card-end: markdown--><p>or using Python:</p><!--kg-card-begin: markdown--><p><code>requests.get(&apos;https://{VPC_ENDPOINT_DNS_NAME}/{STAGE_NAME}/{ENDPOINT_NAME}&apos;, headers={&apos;x-apigw-api-id&apos;: &apos;{API_GW_ID}&apos;})</code></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Fix unmuting (PulseAudio) on Xfce]]></title><description><![CDATA[<p>Unmuting doesn&apos;t work in Xfce?<br>I had this problem too after installing Debian Testing (Jessie) which <a href="http://www.omgubuntu.co.uk/2013/11/debian-8-0-switches-xfce-default">now comes with Xfce by default</a>, but you may encounter it on any system running Xfce DE.</p><p>Every time I muted the audio using the mute key on my keyboard, everything worked</p>]]></description><link>https://blog.ronlut.com/fix-unmuting-pulseaudio-on-xfce/</link><guid isPermaLink="false">63a7040177d1b60210b2bebb</guid><category><![CDATA[Technical]]></category><category><![CDATA[Linux]]></category><category><![CDATA[Debian]]></category><dc:creator><![CDATA[Rony Lutsky]]></dc:creator><pubDate>Sat, 31 May 2014 12:54:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1483706600674-e0c87d3fe85b?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1483706600674-e0c87d3fe85b?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Fix unmuting (PulseAudio) on Xfce"><p>Unmuting doesn&apos;t work in Xfce?<br>I had this problem too after installing Debian Testing (Jessie) which <a href="http://www.omgubuntu.co.uk/2013/11/debian-8-0-switches-xfce-default">now comes with Xfce by default</a>, but you may encounter it on any system running Xfce DE.</p><p>Every time I muted the audio using the mute key on my keyboard, everything worked fine - the audio was indeed muted. 
But when I tried to unmute using the same key, the audio indicator showed the unmuted state but the audio was still muted.<br>After some investigation I understood that the only way to unmute my system was entering the command:</p><p><code>alsamixer</code></p><p>in the terminal and then pressing &apos;M&apos; to toggle the mute state.<br>Apparently, the Audio Mixer was muting PulseAudio AND Alsa, but unmuting ONLY Alsa.</p><p>If you run the following command, it will list all the properties under xfce4-mixer:</p><p><code>xfconf-query -lc xfce4-mixer</code></p><p>In my case, what I saw was only one mixer under the /sound-cards/ entry. It was Alsa; PulseAudio just wasn&apos;t there.</p><p>Also, in the audio mixer I saw only the Alsa entry and couldn&apos;t even control the PulseAudio mixer as you can see here:</p><figure class="kg-card kg-image-card"><img src="http://res.cloudinary.com/dzz2ele6v/image/upload/v1535756321/before_av2zpk.png" class="kg-image" alt="Fix unmuting (PulseAudio) on Xfce" loading="lazy"></figure><p>From what I understood, <code>xfce4-mixer</code> (the Audio Mixer) was missing an optional dependency to be able to work with PulseAudio.<br>This dependency is: <strong>gstreamer0.10-pulseaudio</strong>.<br>After installing the above dependency, using (on Debian):<br><code>sudo apt-get install gstreamer0.10-pulseaudio</code><br>the Audio Mixer looked a lot better:</p><figure class="kg-card kg-image-card"><img src="http://res.cloudinary.com/dzz2ele6v/image/upload/v1535756320/after_rhidyg.png" class="kg-image" alt="Fix unmuting (PulseAudio) on Xfce" loading="lazy"></figure><p>But, running:</p><p><code>xfconf-query -c xfce4-mixer -p /active-card</code></p><p>to query the active card used by xfce4-mixer still returned &quot;HDANVidiaAlsamixer&quot;.<br>So the last thing I needed to do was change the &quot;active card&quot; in xfce4-mixer too by running:</p><p><code>xfconf-query -c xfce4-mixer -p /active-card -s 
PlaybackBuiltinAudioAnalogStereoPulseAudioMixer</code></p><p>After that, toggling mute finally worked.</p>]]></content:encoded></item></channel></rss>